aid | mid | abstract | related_work | ref_abstract |
---|---|---|---|---|
cs0703138 | 2949715023 | Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem. | Wolpert, Tumer and Frank @cite_1 construct a formalism for the so-called Collective Intelligence (coin) neural net applied to Internet traffic routing. The approach involves automatically initializing and updating the local utility functions of individual rl agents (nodes) from the global utility and observed local dynamics. Their simulation outperforms a Full Knowledge Shortest Path Algorithm on a sample network of seven nodes. Coin networks employ a method similar in spirit to the research presented here. They rely on a distributed rl algorithm that converges on local optima without endowing each agent node with explicit knowledge of network topology. However, coin differs from our approach in requiring the introduction of preliminary structure into the network by dividing it into semi-autonomous neighborhoods that share a local utility function and encourage cooperation. In contrast, all the nodes in our network update their algorithms directly from the global reward. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2126306851"
],
"abstract": [
"A COllective INtelligence (COIN) is a set of interacting reinforcement learning (RL) algorithms designed in an automated fashion so that their collective behavior optimizes a global utility function. We summarize the theory of COINs, then present experiments using that theory to design COINs to control internet traffic routing. These experiments indicate that COINs outperform all previously investigated RL-based, shortest path routing algorithms."
]
} |
cs0703138 | 2949715023 | Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem. | Applying reinforcement learning to communication often involves optimizing performance with respect to multiple criteria. For a recent discussion on this challenging issue see Shelton @cite_11 . In the context of wireless communication, it was addressed by Brown @cite_17 , who considers the problem of finding a power management policy that simultaneously maximizes the revenue earned by providing communication while minimizing battery usage. The problem is defined as a stochastic shortest path with discounted infinite horizon, where the discount factor varies to model power loss. This approach resulted in significant ( @math 100 @math 6$ computers. | {
"cite_N": [
"@cite_17",
"@cite_11"
],
"mid": [
"2157758004",
"2075914509"
],
"abstract": [
"This paper examines the application of reinforcement learning to a wireless communication problem. The problem requires that channel utility be maximized while simultaneously minimizing battery usage. We present a solution to this multi-criteria problem that is able to significantly reduce power consumption. The solution uses a variable discount factor to capture the effects of battery usage.",
"This paper considers a two-user Gaussian interference channel with energy harvesting transmitters. Different than conventional battery powered wireless nodes, energy harvesting transmitters have to adapt transmission to availability of energy at a particular instant. In this setting, the optimal power allocation problem to maximize the sum throughput with a given deadline is formulated. The convergence of the proposed iterative coordinate descent method for the problem is proved and the short-term throughput maximizing offline power allocation policy is found. Examples for interference regions with known sum capacities are given with directional waterfilling interpretations. Next, stochastic data arrivals are addressed. Finally, online and or distributed near-optimal policies are proposed. Performance of the proposed algorithms are demonstrated through simulations."
]
} |
cs0703138 | 2949715023 | Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem. | Subramanian, Druschel and Chen @cite_12 adopt an approach from ant colonies that is very similar in spirit. The individual hosts in their network keep routing tables with the associated costs of sending a packet to other hosts (such as which routers it has to traverse and how expensive they are). These tables are periodically updated by "ants", messages whose function is to assess the cost of traversing links between hosts. The ants are directed probabilistically along available paths. They inform the hosts along the way of the costs associated with their travel. The hosts use this information to alter their routing tables according to an update rule. There are two types of ants. Regular ants use the routing tables of the hosts to alter the probability of being directed along a certain path. After a number of trials, all regular ants on the same mission start using the same routes. Their function is to allow the host tables to converge on the correct cost figure in case the network is stable. Uniform ants take any path with equal probability. They are the ones who continue exploring the network and assure successful adaptation to changes in link status or link cost. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2107472392"
],
"abstract": [
"Pheromone trails laid by foraging ants serve as a positive feedback mechanism for the sharing of information about food sources. This feedback is nonlinear, in that ants do not react in a proportionate manner to the amount of pheromone deposited. Instead, strong trails elicit disproportionately stronger responses than weak trails. Such nonlinearity has important implications for how a colony distributes its workforce, when confronted with a choice of food sources. We investigated how colonies of the Pharaoh's ant, Monomorium pharaonis, distribute their workforce when offered a choice of two food sources of differing energetic value. By developing a nonlinear differential equation model of trail foraging, and comparing model with experiments, we examined how the ants allocate their workforce between the two food sources. In this allocation, the most profitable feeder (i.e. the feeder with the highest concentration of sugar syrup) was usually exploited by the majority of ants. The particular form of the nonlinear feedback in trail foraging means that when we offered the ants a choice between two feeders of equal profitability, foraging was biased to the feeder with the highest initial number of visitors. Taken together, our experiments illuminate how pheromones provide a mechanism whereby ants can efficiently allocate their workforce among the available food sources without centralized control."
]
} |
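The ant-based routing scheme summarized in the cs0703138 record above lends itself to a small illustrative sketch. The code below is not the algorithm of Subramanian, Druschel and Chen; it is a toy rendering of the idea under assumed conventions: `graph` maps each host to its neighbours, `link_cost` holds symmetric, positive per-link costs, and the learning rate, hop limit, and inverse-cost weighting are invented for the example. Regular ants exploit the hosts' current estimates, while uniform ants keep exploring so the tables can track link changes.

```python
import random

class HostTable:
    """Per-host table of estimated costs for reaching each destination."""
    def __init__(self):
        self.cost = {}                        # destination -> estimated cost

    def update(self, dest, observed, rate=0.3):
        old = self.cost.get(dest, observed)
        self.cost[dest] = (1 - rate) * old + rate * observed

def send_ant(graph, link_cost, tables, src, dst, uniform=False, max_hops=50):
    """Walk one ant from src toward dst, updating the tables of hosts it visits.

    Regular ants choose next hops with probability weighted by current cost
    estimates; uniform ants choose any neighbour with equal probability and
    keep exploring after the regular ants have converged on fixed routes.
    Every host visited learns the cost of the ant's path back to src.
    """
    node, travelled = src, 0.0
    for _ in range(max_hops):
        if node == dst:
            return travelled
        nbrs = graph[node]
        if uniform:
            nxt = random.choice(nbrs)
        else:
            weights = [1.0 / (link_cost[(node, n)] + tables[n].cost.get(dst, 1.0))
                       for n in nbrs]
            nxt = random.choices(nbrs, weights=weights)[0]
        travelled += link_cost[(node, nxt)]
        node = nxt
        tables[node].update(src, travelled)   # backward learning of the cost to src
    return None                               # hop limit reached; ant discarded

# Typical use: tables = {v: HostTable() for v in graph}, then launch many
# regular ants and an occasional uniform ant between random host pairs.
```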
cs0703156 | 2950341706 | In case-based reasoning, the adaptation of a source case in order to solve the target problem is at the same time crucial and difficult to implement. The reason for this difficulty is that, in general, adaptation strongly depends on domain-dependent knowledge. This fact motivates research on adaptation knowledge acquisition (AKA). This paper presents an approach to AKA based on the principles and techniques of knowledge discovery from databases and data-mining. It is implemented in CABAMAKA, a system that explores the variations within the case base to elicit adaptation knowledge. This system has been successfully tested in an application of case-based reasoning to decision support in the domain of breast cancer treatment. | In @cite_0 , the idea of @cite_12 is reused to extend the approach of @cite_4 : some learning algorithms (in particular, C4.5) are applied to the adaptation cases of @math , to induce general adaptation knowledge. | {
"cite_N": [
"@cite_0",
"@cite_4",
"@cite_12"
],
"mid": [
"2146059992",
"1498118703",
"2115403315"
],
"abstract": [
"Case-Based Reasoning systems retrieve and reuse solutions for previously solved problems that have been encountered and remembered as cases. In some domains, particularly where the problem solving is a classification task, the retrieved solution can be reused directly. But for design tasks it is common for the retrieved solution to be regarded as an initial solution that should be refined to reflect the differences between the new and retrieved problems. The acquisition of adaptation knowledge to achieve this refinement can be demanding, despite the fact that the knowledge source of stored cases captures a substantial part of the problem-solving expertise. This paper describes an introspective learning approach where the case knowledge itself provides a source from which training data for the adaptation task can be assembled. Different learning algorithms are explored and the effect of the learned adaptations is demonstrated for a demanding component-based pharmaceutical design task, tablet formulation. The evaluation highlights the incremental nature of adaptation as a further reasoning step after nearest-neighbour retrieval. A new property-based classification to adapt symbolic values is proposed, and an ensemble of these property-based adaptation classifiers has been particularly successful for the most difficult of the symbolic adaptation tasks in tablet formulation.",
"A major challenge for case-based reasoning (CBR) is to overcome the knowledge-engineering problems incurred by developing adaptation knowledge. This paper describes an approach to automating the acquisition of adaptation knowledge overcoming many of the associated knowledge-engineering costs. This approach makes use of inductive techniques, which learn adaptation knowledge from case comparison. We also show how this adaptation knowledge can be usefully applied. The method has been tested in a property-evaluation CBR system and the technique is illustrated by examples taken from this domain. In addition, we examine how any available domain knowledge might be exploited in such an adaptation-rule learning-system.",
"Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean miscrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification."
]
} |
cs0702151 | 1670047909 | A streaming model is one where data items arrive over a long period of time, either one item at a time or in bursts. Typical tasks include computing various statistics over a sliding window of some fixed time-horizon. What makes the streaming model interesting is that as time progresses, old items expire and new ones arrive. One of the simplest and central tasks in this model is sampling. That is, the task of maintaining up to @math uniformly distributed items from a current time-window as old items expire and new ones arrive. We call sampling algorithms succinct if they use provably optimal (up to constant factors) worst-case memory to maintain @math items (either with or without replacement). We stress that in many applications structures that have expected succinct representation as time progresses are not sufficient, as small probability events eventually happen with probability 1. Thus, in this paper we ask the following question: are Succinct Sampling on Streams (or @math -algorithms) possible, and if so for what models? Perhaps somewhat surprisingly, we show that @math -algorithms are possible for all variants of the problem mentioned above, i.e. both with and without replacement and both for one-at-a-time and bursty arrival models. Finally, we use @math algorithms to solve various problems in the sliding windows model, including frequency moments, counting triangles, entropy and density estimations. For these problems we present solutions with provable worst-case memory guarantees. | Datar, Gionis, Indyk and Motwani @cite_19 pioneered the research in this area, presenting exponential histograms, effective and simple solutions for a wide class of functions over sliding windows. In particular, they gave a memory-optimal algorithm for count, sum, average, @math and other functions. Gibbons and Tirthapura @cite_46 improved the results for sum and count, providing memory and time-optimal algorithms. Feigenbaum, Kannan and Zhang @cite_33 addressed the problem of computing diameter. Lee and Ting in @cite_48 gave a memory-optimal solution for the relaxed version of the count problem. Chi, Wang, Yu and Muntz @cite_2 addressed the problem of frequent itemsets. Algorithms for frequency counts and quantiles were proposed by Arasu and Manku @cite_18 . Further improvement for counts was reported by Lee and Ting @cite_41 . Babcock, Datar, Motwani and O'Callaghan @cite_23 provided an effective solution of the variance and @math -medians problems. Algorithms for rarity and similarity were proposed by Datar and Muthukrishnan @cite_39 . Golab, DeHaan, Demaine, Lopez-Ortiz and Munro @cite_43 provided an effective algorithm for finding frequent elements. Detailed surveys of recent results can be found in @cite_29 @cite_24 . | {
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_41",
"@cite_48",
"@cite_29",
"@cite_39",
"@cite_24",
"@cite_19",
"@cite_43",
"@cite_23",
"@cite_2",
"@cite_46"
],
"mid": [
"2000931246",
"2172028873",
"2141957180",
"2620416450",
"2051580875",
"2963986064",
"2043148321",
"2089135543",
"2290182846",
"2032980094",
"1989921461",
"2952056844"
],
"abstract": [
"Let f be a random Boolean formula that is an instance of 3-SAT. We consider the problem of computing the least real number k such that if the ratio of the number of clauses over the number of variables of f strictly exceeds k , then f is almost certainly unsatisfiable. By a well-known and more or less straightforward argument, it can be shown that kF5.191. This upper bound was improved by to 4.758 by first providing new improved bounds for the occupancy problem. There is strong experimental evidence that the value of k is around 4.2. In this work, we define, in terms of the random formula f, a decreasing sequence of random variables such that, if the expected value of any one of them converges to zero, then f is almost certainly unsatisfiable. By letting the expected value of the first term of the sequence converge to zero, we obtain, by simple and elementary computations, an upper bound for k equal to 4.667. From the expected value of the second term of the sequence, we get the value 4.601q . In general, by letting the U This work was performed while the first author was visiting the School of Computer Science, Carleton Ž University, and was partially supported by NSERC Natural Sciences and Engineering Research Council . of Canada , and by a grant from the University of Patras for sabbatical leaves. The second and third Ž authors were supported in part by grants from NSERC Natural Sciences and Engineering Research . Council of Canada . During the last stages of this research, the first and last authors were also partially Ž . supported by EU ESPRIT Long-Term Research Project ALCOM-IT Project No. 20244 . †An extended abstract of this paper was published in the Proceedings of the Fourth Annual European Ž Symposium on Algorithms, ESA’96, September 25]27, 1996, Barcelona, Spain Springer-Verlag, LNCS, . pp. 27]38 . That extended abstract was coauthored by the first three authors of the present paper. Correspondence to: L. M. Kirousis Q 1998 John Wiley & Sons, Inc. CCC 1042-9832r98r030253-17 253",
"We analyze a sublinear RA@?SFA (randomized algorithm for Sparse Fourier analysis) that finds a near-optimal B-term Sparse representation R for a given discrete signal S of length N, in time and space poly(B,log(N)), following the approach given in [A.C. Gilbert, S. Guha, P. Indyk, S. Muthukrishnan, M. Strauss, Near-Optimal Sparse Fourier Representations via Sampling, STOC, 2002]. Its time cost poly(log(N)) should be compared with the superlinear @W(NlogN) time requirement of the Fast Fourier Transform (FFT). A straightforward implementation of the RA@?SFA, as presented in the theoretical paper [A.C. Gilbert, S. Guha, P. Indyk, S. Muthukrishnan, M. Strauss, Near-Optimal Sparse Fourier Representations via Sampling, STOC, 2002], turns out to be very slow in practice. Our main result is a greatly improved and practical RA@?SFA. We introduce several new ideas and techniques that speed up the algorithm. Both rigorous and heuristic arguments for parameter choices are presented. Our RA@?SFA constructs, with probability at least 1-@d, a near-optimal B-term representation R in time poly(B)log(N)log(1 @d) @e^2log(M) such that @?S-R@?\"2^2=<(1+@e)@?S-R\"o\"p\"t@?\"2^2. Furthermore, this RA@?SFA implementation already beats the FFTW for not unreasonably large N. We extend the algorithm to higher dimensional cases both theoretically and numerically. The crossover point lies at N 70,000 in one dimension, and at N 900 for data on a NxN grid in two dimensions for small B signals where there is noise.",
"Given a set @math of @math strings of total length @math , our task is to report the \"most relevant\"strings for a given query pattern @math . This involves somewhat more advanced query functionality than the usual pattern matching, as some notion of \"most relevant\" is involved. In information retrieval literature, this task is best achieved by using inverted indexes. However, inverted indexes work only for some predefined set of patterns. In the pattern matching community, the most popular pattern-matching data structures are suffix trees and suffix arrays. However, a typical suffix tree search involves going through all the occurrences of the pattern over the entire string collection, which might be a lot more than the required relevant documents. The first formal framework to study such kind of retrieval problems was given by [Muthukrishnan, 2002]. He considered two metrics for relevance: frequency and proximity. He took a threshold-based approach on these metrics and gave data structures taking @math words of space. We study this problem in a slightly different framework of reporting the top @math most relevant documents (in sorted order) under similar and more general relevance metrics. Our framework gives linear space data structure with optimal query times for arbitrary score functions. As a corollary, it improves the space utilization for the problems in [Muthukrishnan, 2002] while maintaining optimal query performance. We also develop compressed variants of these data structures for several specific relevance metrics.",
"Indexing highly repetitive texts --- such as genomic databases, software repositories and versioned text collections --- has become an important problem since the turn of the millennium. A relevant compressibility measure for repetitive texts is @math , the number of runs in their Burrows-Wheeler Transform (BWT). One of the earliest indexes for repetitive collections, the Run-Length FM-index, used @math space and was able to efficiently count the number of occurrences of a pattern of length @math in the text (in loglogarithmic time per pattern symbol, with current techniques). However, it was unable to locate the positions of those occurrences efficiently within a space bounded in terms of @math . Since then, a number of other indexes with space bounded by other measures of repetitiveness --- the number of phrases in the Lempel-Ziv parse, the size of the smallest grammar generating the text, the size of the smallest automaton recognizing the text factors --- have been proposed for efficiently locating, but not directly counting, the occurrences of a pattern. In this paper we close this long-standing problem, showing how to extend the Run-Length FM-index so that it can locate the @math occurrences efficiently within @math space (in loglogarithmic time each), and reaching optimal time @math within @math space, on a RAM machine of @math bits. Within @math space, our index can also count in optimal time @math . Raising the space to @math , we support count and locate in @math and @math time, which is optimal in the packed setting and had not been obtained before in compressed space. We also describe a structure using @math space that replaces the text and extracts any text substring of length @math in almost-optimal time @math . (...continues...)",
"We consider the performance of two algorithms, GUC and SC studied by M. T. Chao and J. Franco SIAM J. Comput.15(1986), 1106?1118;Inform. Sci.51(1990), 289?314 and V. Chvatal and B. Reed in“Proceedings of the 33rd IEEE Symposium on Foundations of Computer Science, 1992,” pp. 620?627, when applied to a random instance ? of a boolean formula in conjunctive normal form withnvariables and ?cn? clauses of sizekeach. For the case wherek=3, we obtain the exact limiting probability that GUC succeeds. We also consider the situation when GUC is allowed to have limited backtracking, and we improve an existing threshold forcbelow which almost all ? is satisfiable. Fork?4, we obtain a similar result regarding SC with limited backtracking.",
"We consider positive covering integer programs, which generalize set cover and which have attracted a long line of research developing (randomized) approximation algorithms. Srinivasan (2006) gave a rounding algorithm based on the FKG inequality for systems which are \"column-sparse.\" This algorithm may return an integer solution in which the variables get assigned large (integral) values; Kolliopoulos & Young (2005) modified this algorithm to limit the solution size, at the cost of a worse approximation ratio. We develop a new rounding scheme based on the Partial Resampling variant of the Lovasz Local Lemma developed by Harris & Srinivasan (2013). This achieves an approximation ratio of 1 + ln([EQUATION]), where amin is the minimum covering constraint and Δ1 is the maximum e1-norm of any column of the covering matrix (whose entries are scaled to lie in [0, 1]); we also show nearly-matching inapproximability and integrality-gap lower bounds. Our approach improves asymptotically, in several different ways, over known results. First, it replaces Δ0, the maximum number of nonzeroes in any column (from the result of Srinivasan) by Δ1 which is always - and can be much - smaller than Δ0; this is the first such result in this context. Second, our algorithm automatically handles multi-criteria programs; we achieve improved approximation ratios compared to the algorithm of Srinivasan, and give, for the first time when the number of objective functions is large, polynomial-time algorithms with good multi-criteria approximations. We also significantly improve upon the upper-bounds of Kolliopoulos & Young when the integer variables are required to be within (1 + e) of some given upper-bounds, and show nearly-matching inapproximability.",
"In this paper we settle several longstanding open problems in theory of indexability and external orthogonal range searching. In the rst part of the paper, we apply the theory of indexability to the problem of two-dimensional range searching. We show that the special case of 3-sided querying can be solved with constant redundancy and access overhead. From this, we derive indexing schemes for general 4-sided range queries that exhibit an optimal tradeo between redundancy and access overhead. In the second part of the paper, we develop dynamic external memory data structures for the two query types. Our structure for 3-sided queries occupies O(N=B) disk blocks, and it supports insertions and deletions in O(log B N) I Os and queries in O(log B N + T=B) I Os, where B is the disk block size, N is the number of points, and T is the query output size. These bounds are optimal. Our structure for general (4-sided) range searching occupies O (N=B)(log(N=B))= log log B N disk blocks and answers queries in O(log B N + T=B) I Os, which are optimal. It also supports updates in O (log B N)(log(N=B))= log log B N I Os. Center for Geometric Computing, Department of Computer Science, Duke University, Box 90129, Durham, NC 27708 0129. Supported in part by the U.S. Army Research O ce through MURI grant DAAH04 96 1 0013 and by the National Science Foundation through ESS grant EIA 9870734. Part of this work was done while visiting BRICS, Department of Computer Science, University of Aarhus, Denmark. Email: large@cs.duke.edu. yDepartment of Computer Sciences, University of Texas at Austin, Austin, TX 78712-1188. Email vsam@cs.utexas.edu zCenter for Geometric Computing, Department of Computer Science, Duke University, Box 90129, Durham, NC 27708 0129. Supported in part by the U.S. Army Research O ce through MURI grant DAAH04 96 1 0013 and by the National Science Foundation through grants CCR 9522047 and EIA 9870734. Part of this work was done while visiting BRICS, Department of Computer Science, University of Aarhus, Denmark and I.N.R.I.A., Sophia Antipolis, France. Email: jsv@cs.duke.edu.",
"We give efficient algorithms for volume sampling, i.e., for picking @math -subsets of the rows of any given matrix with probabilities proportional to the squared volumes of the simplices defined by them and the origin (or the squared volumes of the parallelepipeds defined by these subsets of rows). In other words, we can efficiently sample @math -subsets of @math with probabilities proportional to the corresponding @math by @math principal minors of any given @math by @math positive semi definite matrix. This solves an open problem from the monograph on spectral algorithms by Kannan and Vempala (see Section @math of KV , also implicit in BDM, DRVW ). Our first algorithm for volume sampling @math -subsets of rows from an @math -by- @math matrix runs in @math arithmetic operations (where @math is the exponent of matrix multiplication) and a second variant of it for @math -approximate volume sampling runs in @math arithmetic operations, which is almost linear in the size of the input (i.e., the number of entries) for small @math . Our efficient volume sampling algorithms imply the following results for low-rank matrix approximation: (1) Given @math , in @math arithmetic operations we can find @math of its rows such that projecting onto their span gives a @math -approximation to the matrix of rank @math closest to @math under the Frobenius norm. This improves the @math -approximation of Boutsidis, Drineas and Mahoney BDM and matches the lower bound shown in DRVW . The method of conditional expectations gives a algorithm with the same complexity. The running time can be improved to @math at the cost of losing an extra @math in the approximation factor. (2) The same rows and projection as in the previous point give a @math -approximation to the matrix of rank @math closest to @math under the spectral norm. In this paper, we show an almost matching lower bound of @math , even for @math .",
"In PODS 2003, Babcock, Datar, Motwani and O'Callaghan gave the first streaming solution for the k-median problem on sliding windows using O(frack k tau^4 W^2tau log^2 W) space, with a O(2^O(1 tau)) approximation factor, where W is the window size and tau in (0,1 2) is a user-specified parameter. They left as an open question whether it is possible to improve this to polylogarithmic space. Despite much progress on clustering and sliding windows, this question has remained open for more than a decade. In this paper, we partially answer the main open question posed by Babcock, Datar, Motwani and O'Callaghan. We present an algorithm yielding an exponential improvement in space compared to the previous result given in Babcock, et al In particular, we give the first polylogarithmic space (alpha,beta)-approximation for metric k-median clustering in the sliding window model, where alpha and beta are constants, under the assumption, also made by , that the optimal k-median cost on any given window is bounded by a polynomial in the window size. We justify this assumption by showing that when the cost is exponential in the window size, no sublinear space approximation is possible. Our main technical contribution is a simple but elegant extension of smooth functions as introduced by Braverman and Ostrovsky, which allows us to apply well-known techniques for solving problems in the sliding window model to functions that are not smooth, such as the k-median cost.",
"Recently, Frumkin (9) pointed out that none of the well-known algorithms that transform an integer matrix into Smith (16) or Hermite (12) normal form is known to be polynomially bounded in its running time. In fact, Blankinship (3) noticed--as an empirical fact--that intermediate numbersmaybecome quite large during standard calculations Of these canonical forms. Here we present new algorithms in which both the number of algebraic operations and the number of (binary) digits of all intermediate numbers are bounded by polynomials in the length of the input data (assumed to be encoded in binary). These algorithms also find the multiplier-matrices K, U' and K' such thatAK and U'AK' are the Hermite and Smith normal forms of the given matrix A. This provides the first proof that multipliers with small enough entries exist. 1. Introduction. Every nonsingular integer matrix can be transformed into a lower triangular integer matrix using elementary column operations. This was shown by Hermite ((12), Theorem 1 below). Smith ((16), Theorem 3 below) proved that any integer matrix can be diagonalized using elementary row and column operations. The Smith and Hermite normal forms play an important role in the study of rational matrices (calculating their characteristic equations), polynomial matrices (determining the latent roots), algebraic group theory (Newman 15)), system theory (Heymann and Thorpe (13)) and integer programming (Garfi!akel and Nemhauser (10)). Algorithms that compute Smith andHermite normal forms of an integer matrix are given (among others) by Barnette and Pace (1), Bodewig (5), Bradley (7), Frumkin (9) and Hu (14). The methods of Hu, Bodewig and Bradley are based on the explicit calculation of the greatestcommon divisor (GCD) and a set of multiplierswhereas other algorithms ((1)) perform GCD calculations implicitly. As Frumkin (9) pointed out, none of these algorithms is known to be polynomial. In transforming an integer matrix into Smith or Hermite normal form using known techniques, the number of digits of intermediate numbers does not appear to be bounded by a polynomial in the length of the input data as was pointed out by Blankinship (3), (4-1 and Frumkin (9).",
"Given @math elements with non-negative integer weights @math and an integer capacity @math , we consider the counting version of the classic knapsack problem: find the number of distinct subsets whose weights add up to at most @math . We give the first deterministic, fully polynomial-time approximation scheme (FPTAS) for estimating the number of solutions to any knapsack constraint (our estimate has relative error @math ). Our algorithm is based on dynamic programming. Previously, randomized polynomial-time approximation schemes (FPRAS) were known first by Morris and Sinclair via Markov chain Monte Carlo techniques, and subsequently by Dyer via dynamic programming and rejection sampling. In addition, we present a new method for deterministic approximate counting using read-once branching programs. Our approach yields an FPTAS for several other counting problems, including counting solutions for the multidimensional knapsack problem with a constant number of constraints, the general integer knapsack problem, and the contingency tables problem with a constant number of rows.",
"We derive new time-space tradeoff lower bounds and algorithms for exactly computing statistics of input data, including frequency moments, element distinctness, and order statistics, that are simple to calculate for sorted data. We develop a randomized algorithm for the element distinctness problem whose time T and space S satisfy T in O (n^ 3 2 S^ 1 2 ), smaller than previous lower bounds for comparison-based algorithms, showing that element distinctness is strictly easier than sorting for randomized branching programs. This algorithm is based on a new time and space efficient algorithm for finding all collisions of a function f from a finite set to itself that are reachable by iterating f from a given set of starting points. We further show that our element distinctness algorithm can be extended at only a polylogarithmic factor cost to solve the element distinctness problem over sliding windows, where the task is to take an input of length 2n-1 and produce an output for each window of length n, giving n outputs in total. In contrast, we show a time-space tradeoff lower bound of T in Omega(n^2 S) for randomized branching programs to compute the number of distinct elements over sliding windows. The same lower bound holds for computing the low-order bit of F_0 and computing any frequency moment F_k, k neq 1. This shows that those frequency moments and the decision problem F_0 mod 2 are strictly harder than element distinctness. We complement this lower bound with a T in O(n^2 S) comparison-based deterministic RAM algorithm for exactly computing F_k over sliding windows, nearly matching both our lower bound for the sliding-window version and the comparison-based lower bounds for the single-window version. We further exhibit a quantum algorithm for F_0 over sliding windows with T in O(n^ 3 2 S^ 1 2 ). Finally, we consider the computations of order statistics over sliding windows."
]
} |
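The core task in the cs0702151 record above, maintaining a uniformly distributed sample over a sliding window as old items expire, can be made concrete with the classical priority-sampling idea sketched below for a sample of size one. This is not the succinct (worst-case memory optimal) algorithm of that paper, only a baseline with O(log W) expected memory; the function name and parameters are invented for the example.

```python
import random
from collections import deque

def priority_sample(stream, window_size):
    """Yield, after each arrival, one item chosen uniformly at random from the
    last `window_size` items (a sliding-window sample of size 1).

    Each item receives an independent Uniform(0,1) priority; the sample is the
    minimum-priority item in the window. Only items that could still become
    that minimum are retained, so the expected memory is O(log window_size).
    """
    candidates = deque()   # (position, item, priority), priorities increasing
    for pos, item in enumerate(stream):
        p = random.random()
        while candidates and candidates[-1][2] >= p:
            candidates.pop()             # can never be the window minimum again
        candidates.append((pos, item, p))
        while candidates[0][0] <= pos - window_size:
            candidates.popleft()         # the old minimum has expired
        yield candidates[0][1]

# Example: one uniformly chosen representative of the last 5 items, per step.
# samples = list(priority_sample(range(20), window_size=5))
```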
cs0702030 | 1616730662 | This paper addresses the following question, which is of interest in the design and deployment of a multiuser decentralized network. Given a total system bandwidth of W Hz and a fixed data rate constraint of R bps for each transmission, how many frequency slots N of size W/N should the band be partitioned into to maximize the number of simultaneous transmissions in the network? In an interference-limited ad-hoc network, dividing the available spectrum results in two competing effects: on the positive side, it reduces the number of users on each band and therefore decreases the interference level which leads to an increased SINR, while on the negative side the SINR requirement for each transmission is increased because the same information rate must be achieved over a smaller bandwidth. Exploring this tradeoff between bandwidth and SINR and determining the optimum value of N in terms of the system parameters is the focus of the paper. Using stochastic geometry, we analytically derive the optimal SINR threshold (which directly corresponds to the optimal spectral efficiency) on this tradeoff curve and show that it is a function of only the path loss exponent. Furthermore, the optimal SINR point lies between the low-SINR (power-limited) and high-SINR (bandwidth-limited) regimes. In order to operate at this optimal point, the number of frequency bands (i.e., the reuse factor) should be increased until the threshold SINR, which is an increasing function of the reuse factor, is equal to the optimal value. | The transmission capacity framework introduced in @cite_3 is used to quantify the throughput of such a network, since this metric captures notions of spatial density, data rate, and outage probability, and is more amenable to analysis than the more popular transport capacity @cite_5 . Using tools from stochastic geometry @cite_6 , the distribution of interference from other concurrent transmissions at a reference receiving node (the randomness in interference is only due to the random positions of the interfering nodes and fading) is characterized as a function of the spatial density of transmitters, the path-loss exponent, and possibly the fading distribution. The distribution of SINR at the receiving node can then be computed, and an outage occurs whenever the SINR falls below some threshold @math . The outage probability is clearly an increasing function of the density of transmissions, and the transmission capacity is defined to be the maximum density of successful transmissions such that the outage probability is no larger than some prescribed constant @math . | {
"cite_N": [
"@cite_5",
"@cite_6",
"@cite_3"
],
"mid": [
"2127242306",
"2151792936",
"2963847582"
],
"abstract": [
"In CSMA CA-based, multi-hop, multi-rate wireless networks, spatial reuse can be increased by tuning the carrier-sensing threshold (Tcs) to reduce the carrier sense range (dcs). While reducing dcs enables more concurrent transmissions, the transmission quality suffers from the increased accumulative interference contributed by concurrent transmissions outside dcs. As a result, the data rate at which the transmission can sustain may decrease. How to balance the interplay of spatial reuse and transmission quality (and hence the sustainable data rate) so as to achieve high network capacity is thus an important issue. In this paper, we investigate this issue by extending Cali's model and devising an analytical model that characterizes the transmission activities as governed by IEEE 802.11 DCF in a single-channel, multi-rate, multi-hop wireless network. The systems throughput is derived as a function of Tcs, SINR, beta, and other PHY MAC systems parameters. We incorporate the effect of varying the degree of spatial reuse by tuning the Tcs. Based on the physical radio propagation model, we theoretically estimate the potential accumulated interference contributed by concurrent transmissions and the corresponding SINR. For a given SINR value, we then determine an appropriate data rate at which a transmission can sustain. To the best of our knowledge, this is perhaps the first effort that considers tuning of PHY characteristics (transmit power and data rates) and MAC parameters (contention backoff timer) jointly in an unified framework in order to optimize the overall network throughput. Analytical results indicate that the systems throughput is not a monotonically increasing decreasing function of Tcs, but instead exhibits transitional points where several possible choices of Tcs can be made. In addition, the network capacity can be further improved by choosing the backoff timer values appropriately.",
"This paper surveys and unifies a number of recent contributions that have collectively developed a metric for decentralized wireless network analysis known as transmission capacity. Although it is notoriously difficult to derive general end-to-end capacity results for multi-terminal or adhoc networks, the transmission capacity (TC) framework allows for quantification of achievable single-hop rates by focusing on a simplified physical MAC-layer model. By using stochastic geometry to quantify the multi-user interference in the network, the relationship between the optimal spatial density and success probability of transmissions in the network can be determined, and expressed-often fairly simply-in terms of the key network parameters. The basic model and analytical tools are first discussed and applied to a simple network with path loss only and we present tight upper and lower bounds on transmission capacity (via lower and upper bounds on outage probability). We then introduce random channels (fading shadowing) and give TC and outage approximations for an arbitrary channel distribution, as well as exact results for the special cases of Rayleigh and Nakagami fading. We then apply these results to show how TC can be used to better understand scheduling, power control, and the deployment of multiple antennas in a decentralized network. The paper closes by discussing shortcomings in the model as well as future research directions.",
"Transmission capacity (TC) is a performance metric for wireless networks that measures the spatial intensity of successful transmissions per unit area, subject to a constraint on the permissible outage probability (where outage occurs when the signal to interference plus noise ratio (SINR) at a receiver is below a threshold). This volume gives a unified treatment of the TC framework that has been developed by the authors and their collaborators over the past decade. The mathematical framework underlying the analysis (reviewed in Section 2) is stochastic geometry: Poisson point processes model the locations of interferers, and (stable) shot noise processes represent the aggregate interference seen at a receiver. Section 3 presents TC results (exact, asymptotic, and bounds) on a simple model in order to illustrate a key strength of the framework: analytical tractability yields explicit performance dependence upon key model parameters. Section 4 presents enhancements to this basic model — channel fading, variable link distances (VLD), and multihop. Section 5 presents four network design case studies well-suited to TC: (i) spectrum management, (ii) interference cancellation, (iii) signal threshold transmission scheduling, and (iv) power control. Section 6 studies the TC when nodes have multiple antennas, which provides a contrast vs. classical results that ignore interference."
]
} |
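The outage model described in the cs0702030 record above can be illustrated with a small Monte Carlo sketch: interferers are drawn from a Poisson point process, path loss is a power law, and an outage is declared when the SINR at a reference receiver falls below the threshold theta. This is only a simulation of the model, not the closed-form transmission-capacity analysis of the cited works; transmit powers are normalized to one, fading is omitted, and the function name, finite simulation radius, and default parameter values are assumptions made for the example.

```python
import numpy as np

def estimate_outage(density, alpha, theta, tx_dist=1.0, region_radius=50.0,
                    noise=0.0, trials=4000, seed=None):
    """Monte Carlo estimate of P[SINR < theta] at a receiver at the origin.

    The desired transmitter sits at distance tx_dist; interferers form a
    Poisson point process of the given density on a disc of radius
    region_radius; path loss is r**(-alpha); all powers are 1; no fading.
    """
    rng = np.random.default_rng(seed)
    area = np.pi * region_radius ** 2
    signal = tx_dist ** (-alpha)
    outages = 0
    for _ in range(trials):
        k = rng.poisson(density * area)
        # Radii of points uniform on the disc: R * sqrt(U) has the right law.
        r = region_radius * np.sqrt(rng.random(k))
        interference = float(np.sum(r ** (-alpha)))
        denom = noise + interference
        if denom == 0.0:
            continue                      # no noise and no interferers: no outage
        outages += signal / denom < theta
    return outages / trials

# Example: density 0.01 transmitters per unit area, path-loss exponent 4,
# SINR threshold 1 (0 dB):
# print(estimate_outage(density=0.01, alpha=4.0, theta=1.0))
```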
cs0702030 | 1616730662 | This paper addresses the following question, which is of interest in the design and deployment of a multiuser decentralized network. Given a total system bandwidth of W Hz and a fixed data rate constraint of R bps for each transmission, how many frequency slots N of size W/N should the band be partitioned into to maximize the number of simultaneous transmissions in the network? In an interference-limited ad-hoc network, dividing the available spectrum results in two competing effects: on the positive side, it reduces the number of users on each band and therefore decreases the interference level which leads to an increased SINR, while on the negative side the SINR requirement for each transmission is increased because the same information rate must be achieved over a smaller bandwidth. Exploring this tradeoff between bandwidth and SINR and determining the optimum value of N in terms of the system parameters is the focus of the paper. Using stochastic geometry, we analytically derive the optimal SINR threshold (which directly corresponds to the optimal spectral efficiency) on this tradeoff curve and show that it is a function of only the path loss exponent. Furthermore, the optimal SINR point lies between the low-SINR (power-limited) and high-SINR (bandwidth-limited) regimes. In order to operate at this optimal point, the number of frequency bands (i.e., the reuse factor) should be increased until the threshold SINR, which is an increasing function of the reuse factor, is equal to the optimal value. | The problem studied in this work is essentially the optimization of frequency reuse in uncoordinated spatial (ad hoc) networks, which is a well studied problem in the context of cellular networks (see for example @cite_1 and references therein). In both settings the tradeoff is between the bandwidth utilized per cell transmission, which is inversely proportional to the frequency reuse factor, and the achieved SINR per transmission. A key difference is that in cellular networks, regular frequency reuse patterns can be planned and implemented, whereas in an ad hoc network this is impossible and so the best that can be hoped for is uncoordinated frequency reuse. Another crucial difference is in terms of analytical tractability. Although there has been a tremendous amount of work on optimization of frequency reuse for cellular networks, these efforts do not, to the best of our knowledge, lend themselves to clean analytical results. On the contrary, in this work we are able to derive very simple analytical results in the random network setting that very cleanly show the dependence of the optimal reuse factor on system parameters such as path loss exponent and rate. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2127242306"
],
"abstract": [
"In CSMA CA-based, multi-hop, multi-rate wireless networks, spatial reuse can be increased by tuning the carrier-sensing threshold (Tcs) to reduce the carrier sense range (dcs). While reducing dcs enables more concurrent transmissions, the transmission quality suffers from the increased accumulative interference contributed by concurrent transmissions outside dcs. As a result, the data rate at which the transmission can sustain may decrease. How to balance the interplay of spatial reuse and transmission quality (and hence the sustainable data rate) so as to achieve high network capacity is thus an important issue. In this paper, we investigate this issue by extending Cali's model and devising an analytical model that characterizes the transmission activities as governed by IEEE 802.11 DCF in a single-channel, multi-rate, multi-hop wireless network. The systems throughput is derived as a function of Tcs, SINR, beta, and other PHY MAC systems parameters. We incorporate the effect of varying the degree of spatial reuse by tuning the Tcs. Based on the physical radio propagation model, we theoretically estimate the potential accumulated interference contributed by concurrent transmissions and the corresponding SINR. For a given SINR value, we then determine an appropriate data rate at which a transmission can sustain. To the best of our knowledge, this is perhaps the first effort that considers tuning of PHY characteristics (transmit power and data rates) and MAC parameters (contention backoff timer) jointly in an unified framework in order to optimize the overall network throughput. Analytical results indicate that the systems throughput is not a monotonically increasing decreasing function of Tcs, but instead exhibits transitional points where several possible choices of Tcs can be made. In addition, the network capacity can be further improved by choosing the backoff timer values appropriately."
]
} |
cs0702078 | 2953373266 | We present a local algorithm for finding dense subgraphs of bipartite graphs, according to the definition of density proposed by Kannan and Vinay. Our algorithm takes as input a bipartite graph with a specified starting vertex, and attempts to find a dense subgraph near that vertex. We prove that for any subgraph S with k vertices and density theta, there are a significant number of starting vertices within S for which our algorithm produces a subgraph S' with density theta / O(log n) on at most O(D k^2) vertices, where D is the maximum degree. The running time of the algorithm is O(D k^2), independent of the number of vertices in the graph. | The closely related densest @math -subgraph problem is to identify the subgraph with the largest number of edges among all subgraphs of exactly @math vertices. This problem is considerably more difficult, and there is a large gap between the best approximation algorithms and hardness results known for the problem (see @cite_2 @cite_5 ). | {
"cite_N": [
"@cite_5",
"@cite_2"
],
"mid": [
"2266714125",
"2296110048"
],
"abstract": [
"Numerous graph mining applications rely on detecting subgraphs which are large near-cliques. Since formulations that are geared towards finding large near-cliques are hard and frequently inapproximable due to connections with the Maximum Clique problem, the poly-time solvable densest subgraph problem which maximizes the average degree over all possible subgraphs \"lies at the core of large scale data mining\" [10]. However, frequently the densest subgraph problem fails in detecting large near-cliques in networks. In this work, we introduce the k-clique densest subgraph problem, k ≥ 2. This generalizes the well studied densest subgraph problem which is obtained as a special case for k=2. For k=3 we obtain a novel formulation which we refer to as the triangle densest subgraph problem: given a graph G(V,E), find a subset of vertices S* such that τ(S*)=max limitsS ⊆ V t(S) |S|, where t(S) is the number of triangles induced by the set S. On the theory side, we prove that for any k constant, there exist an exact polynomial time algorithm for the k-clique densest subgraph problem . Furthermore, we propose an efficient 1 k-approximation algorithm which generalizes the greedy peeling algorithm of Asahiro and Charikar [8,18] for k=2. Finally, we show how to implement efficiently this peeling framework on MapReduce for any k ≥ 3, generalizing the work of Bahmani, Kumar and Vassilvitskii for the case k=2 [10]. On the empirical side, our two main findings are that (i) the triangle densest subgraph is consistently closer to being a large near-clique compared to the densest subgraph and (ii) the peeling approximation algorithms for both k=2 and k=3 achieve on real-world networks approximation ratios closer to 1 rather than the pessimistic 1 k guarantee. An interesting consequence of our work is that triangle counting, a well-studied computational problem in the context of social network analysis can be used to detect large near-cliques. Finally, we evaluate our proposed method on a popular graph mining application.",
"The densest subgraph problem, which asks for a subgraph with the maximum edges-to-vertices ratio d∗, is solvable in polynomial time. We discuss algorithms for this problem and the computation of a graph orientation with the lowest maximum indegree, which is equal to ⌈d∗⌉. This value also equals the pseudoarboricity of the graph. We show that it can be computed in O(|E| √ log log d∗) time, and that better estimates can be given for graph classes where d∗ satisfies certain asymptotic bounds. These runtimes are achieved by accelerating a binary search with an approximation scheme, and a runtime analysis of Dinitz’s algorithm on flow networks where all arcs, except the source and sink arcs, have unit capacity. We experimentally compare implementations of various algorithms for the densest subgraph and pseudoarboricity problems. In flow-based algorithms, Dinitz’s algorithm performs significantly better than push-relabel algorithms on all instances tested."
]
} |
cs0702113 | 2950657197 | We describe a new sampling-based method to determine cuts in an undirected graph. For a graph (V, E), its cycle space is the family of all subsets of E that have even degree at each vertex. We prove that with high probability, sampling the cycle space identifies the cuts of a graph. This leads to simple new linear-time sequential algorithms for finding all cut edges and cut pairs (a set of 2 edges that form a cut) of a graph. In the model of distributed computing in a graph G=(V, E) with O(log V)-bit messages, our approach yields faster algorithms for several problems. The diameter of G is denoted by Diam, and the maximum degree by Delta. We obtain simple O(Diam)-time distributed algorithms to find all cut edges, 2-edge-connected components, and cut pairs, matching or improving upon previous time bounds. Under natural conditions these new algorithms are universally optimal --- i.e. an Omega(Diam)-time lower bound holds on every graph. We obtain an O(Diam+Delta log V)-time distributed algorithm for finding cut vertices; this is faster than the best previous algorithm when Delta, Diam = O(sqrt(V)). A simple extension of our work yields the first distributed algorithm with sub-linear time for 3-edge-connected components. The basic distributed algorithms are Monte Carlo, but they can be made Las Vegas without increasing the asymptotic complexity. In the model of parallel computing on the EREW PRAM our approach yields a simple algorithm with optimal time complexity O(log V) for finding cut pairs and 3-edge-connected components. | Randomized algorithms appear in other literature related to the cut and cycle spaces. For example, @cite_7 computes the genus of an embedded graph @math while "observing" part of it. They use random perturbation and balancing steps to compute a on @math and the dual graph of @math . Their computational model is quite different from the one here, e.g. they allow a face to modify the values of all its incident edges in a single time step. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2279830512"
],
"abstract": [
"Over the past 30 years numerous algorithms have been designed for symmetry breaking problems in the LOCAL model, such as maximal matching, MIS, vertex coloring, and edge-coloring. For most problems the best randomized algorithm is at least exponentially faster than the best deterministic algorithm. In this paper we prove that these exponential gaps are necessary and establish connections between the deterministic and randomized complexities in the LOCAL model. Each result has a very compelling take-away message: 1. Fast @math -coloring of trees requires random bits: Building on the recent lower bounds of , we prove that the randomized complexity of @math -coloring a tree with maximum degree @math is @math , whereas its deterministic complexity is @math for any @math . This also establishes a large separation between the deterministic complexity of @math -coloring and @math -coloring trees. 2. Randomized lower bounds imply deterministic lower bounds: We prove that any deterministic algorithm for a natural class of problems that runs in @math rounds can be transformed to run in @math rounds. If the transformed algorithm violates a lower bound (even allowing randomization), then one can conclude that the problem requires @math time deterministically. 3. Deterministic lower bounds imply randomized lower bounds: We prove that the randomized complexity of any natural problem on instances of size @math is at least its deterministic complexity on instances of size @math . This shows that a deterministic @math lower bound for any problem implies a randomized @math lower bound. It also illustrates that the graph shattering technique is absolutely essential to the LOCAL model."
]
} |
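The cycle-space sampling idea in the cs0702113 record above can be realized sequentially for cut edges (bridges) as sketched below: each non-tree edge draws a random bit, each tree edge is labeled with the XOR of the bits of the non-tree edges whose fundamental cycle contains it, and an edge whose label is zero in every trial is reported as a bridge. This is one way to instantiate the idea, not the paper's distributed or cut-pair algorithms; the graph representation, trial count, and function names are assumptions for the example.

```python
import random
from collections import defaultdict

def probable_bridges(n, edges, trials=32):
    """Report edges that are never hit by `trials` random cycle-space samples.

    A bridge lies on no cycle, so it is 0 in every sample; any other edge is
    0 in a single sample with probability 1/2, so the output contains a
    non-bridge with probability at most len(edges) * 2**(-trials).
    Vertices are 0..n-1; `edges` is a list of (u, v) pairs.
    """
    m = len(edges)
    adj = defaultdict(list)
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))

    never_hit = [True] * m
    for _ in range(trials):
        parent = [None] * n          # DFS parent vertex
        parent_edge = [None] * n     # edge index to the DFS parent
        seen = [False] * n
        order = []                   # vertices, ancestors before descendants
        mark = [0] * n               # XOR of incident non-tree-edge bits
        value = [0] * m              # this trial's random cycle-space vector
        assigned = [False] * m       # non-tree edges that already drew a bit
        for root in range(n):
            if seen[root]:
                continue
            seen[root] = True
            stack = [root]
            while stack:
                u = stack.pop()
                order.append(u)
                for v, idx in adj[u]:
                    if not seen[v]:
                        seen[v] = True
                        parent[v], parent_edge[v] = u, idx
                        stack.append(v)
                    elif idx != parent_edge[u] and not assigned[idx]:
                        b = random.getrandbits(1)   # fresh bit for a non-tree edge
                        assigned[idx] = True
                        value[idx] = b
                        mark[u] ^= b
                        mark[v] ^= b
        # A tree edge's value is the XOR of the bits of all non-tree edges with
        # exactly one endpoint below it; push subtree XORs upward.
        sub = mark[:]
        for u in reversed(order):
            if parent[u] is not None:
                value[parent_edge[u]] = sub[u]
                sub[parent[u]] ^= sub[u]
        for idx in range(m):
            if value[idx]:
                never_hit[idx] = False
    return [edges[i] for i in range(m) if never_hit[i]]

# Example: a triangle with a pendant edge; only (2, 3) is a bridge.
# print(probable_bridges(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))
```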
cs0701037 | 2951856549 | DMTCP (Distributed MultiThreaded CheckPointing) is a transparent user-level checkpointing package for distributed applications. Checkpointing and restart is demonstrated for a wide range of over 20 well known applications, including MATLAB, Python, TightVNC, MPICH2, OpenMPI, and runCMS. RunCMS runs as a 680 MB image in memory that includes 540 dynamic libraries, and is used for the CMS experiment of the Large Hadron Collider at CERN. DMTCP transparently checkpoints general cluster computations consisting of many nodes, processes, and threads; as well as typical desktop applications. On 128 distributed cores (32 nodes), checkpoint and restart times are typically 2 seconds, with negligible run-time overhead. Typical checkpoint times are reduced to 0.2 seconds when using forked checkpointing. Experimental results show that checkpoint time remains nearly constant as the number of nodes increases on a medium-size cluster. DMTCP automatically accounts for fork, exec, ssh, mutexes/semaphores, TCP/IP sockets, UNIX domain sockets, pipes, ptys (pseudo-terminals), terminal modes, ownership of controlling terminals, signal handlers, open file descriptors, shared open file descriptors, I/O (including the readline library), shared memory (via mmap), parent-child process relationships, pid virtualization, and other operating system artifacts. By emphasizing an unprivileged, user-space approach, compatibility is maintained across Linux kernels from 2.6.9 through the current 2.6.28. Since DMTCP is unprivileged and does not require special kernel modules or kernel patches, DMTCP can be incorporated and distributed as a checkpoint-restart module within some larger package. | DejaVu @cite_2 (whose development overlapped that of DMTCP) also provides transparent user-level checkpointing of distributed processes based on sockets. However, DejaVu appears to be much slower than DMTCP. For example, in the Chombo benchmark, Ruscio et al. report executing ten checkpoints per hour with 45% overhead, whereas DMTCP takes checkpoints in 2 seconds, with essentially zero overhead between checkpoints. Nevertheless, DejaVu is also able to checkpoint InfiniBand connections by using a customized version of MVAPICH. DejaVu takes a more invasive approach than DMTCP, by logging all communication and by using page protection to detect modification of memory pages between checkpoints. This accounts for additional overhead during normal program execution that is not present in DMTCP. Since DejaVu was not publicly available at the time of this writing, a direct timing comparison on a common benchmark was not possible. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2105039796"
],
"abstract": [
"Global Computing platforms, large scale clusters and future TeraGRID systems gather thousands of nodes for computing parallel scientific applications. At this scale, node failures or disconnections are frequent events. This Volatility reduces the MTBF of the whole system in the range of hours or minutes. We present MPICH-V, an automatic Volatility tolerant MPI environment based on uncoordinated checkpoint roll-back and distributed message logging. MPICH-V architecture relies on Channel Memories, Checkpoint servers and theoretically proven protocols to execute existing or new, SPMD and Master-Worker MPI applications on volatile nodes. To evaluate its capabilities, we run MPICH-V within a framework for which the number of nodes, Channels Memories and Checkpoint Servers can be completely configured as well as the node Volatility. We present a detailed performance evaluation of every component of MPICH-V and its global performance for non-trivial parallel applications. Experimental results demonstrate good scalability and high tolerance to node volatility."
]
} |
cs0701037 | 2951856549 | DMTCP (Distributed MultiThreaded CheckPointing) is a transparent user-level checkpointing package for distributed applications. Checkpointing and restart is demonstrated for a wide range of over 20 well known applications, including MATLAB, Python, TightVNC, MPICH2, OpenMPI, and runCMS. RunCMS runs as a 680 MB image in memory that includes 540 dynamic libraries, and is used for the CMS experiment of the Large Hadron Collider at CERN. DMTCP transparently checkpoints general cluster computations consisting of many nodes, processes, and threads; as well as typical desktop applications. On 128 distributed cores (32 nodes), checkpoint and restart times are typically 2 seconds, with negligible run-time overhead. Typical checkpoint times are reduced to 0.2 seconds when using forked checkpointing. Experimental results show that checkpoint time remains nearly constant as the number of nodes increases on a medium-size cluster. DMTCP automatically accounts for fork, exec, ssh, mutexes semaphores, TCP IP sockets, UNIX domain sockets, pipes, ptys (pseudo-terminals), terminal modes, ownership of controlling terminals, signal handlers, open file descriptors, shared open file descriptors, I O (including the readline library), shared memory (via mmap), parent-child process relationships, pid virtualization, and other operating system artifacts. By emphasizing an unprivileged, user-space approach, compatibility is maintained across Linux kernels from 2.6.9 through the current 2.6.28. Since DMTCP is unprivileged and does not require special kernel modules or kernel patches, DMTCP can be incorporated and distributed as a checkpoint-restart module within some larger package. | The remaining work on distributed transparent checkpointing can be divided into two categories: User-level MPI libraries for checkpointing @cite_10 @cite_6 @cite_5 @cite_30 @cite_15 @cite_0 @cite_9 @cite_27 @cite_12 : works for distributed processes, but only if they communicate exclusively through MPI (Message Passing Interface). Typically restricted to a particular dialect of MPI. Kernel-level (system-level) checkpointing @cite_7 @cite_19 @cite_34 @cite_25 @cite_20 @cite_4 @cite_8 : modification of kernel; requirements on matching package version to kernel version. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_20",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_5",
"@cite_15",
"@cite_34",
"@cite_10",
"@cite_25",
"@cite_12"
],
"mid": [
"2069891156",
"2045879521",
"1520339130",
"2139835701",
"2115367411",
"2077783617",
"2114035455",
"2014594876",
"2105039796",
"2109017398",
"2095487435",
"2155662278",
"2120383635",
"2740001873",
"2143300065",
"1565978235"
],
"abstract": [
"As high-performance clusters continue to grow in size and popularity, issues of fault tolerance and reliability are becoming limiting factors on application scalability. We integrated one user-level checkpointing and rollback recovery (CRR) library to LAM MPI, a high performance implementation of the Message Passing Interface (MPI), to improve its availability. Compared with the current CRR implementation of LAM MPI, our work supports file checkpointing and own higher portability, which can run on more platforms including IA32 and IA64 Linux. In addition, the test shows that less than 15 performance overhead is introduced by the CRR mechanism of our implementation.",
"As high performance clusters continue to grow in size and popularity, issues of fault tolerance and reliability are becoming limiting factors on application scalability. To address these issues, we present the design and implementation of a system for providing coordinated checkpointing and rollback recovery for MPI-based parallel applications. Our approach integrates the Berkeley Lab BLCR kernel-level process checkpoint system with the LAM implementation of MPI through a defined checkpoint restart interface. Checkpointing is transparent to the application, allowing the system to be used for cluster maintenance and scheduling reasons as well as for fault tolerance. Experimental results show negligible communication performance impact due to the incorporation of the checkpoint support capabilities into LAM MPI.",
"Multiple threads running in a single, shared address space is a simple model for writing parallel programs for symmetric multiprocessor (SMP) machines and for overlapping I O and computation in programs run on either SMP or single processor machines. Often a long running program’s user would like the program to save its state periodically in a checkpoint from which it can recover in case of a failure. This paper introduces the first system to provide checkpointing support for multithreaded programs that use LinuxThreads, the POSIX based threads library for Linux. The checkpointing library is simple to use, automatically takes checkpoint, is flexible, and efficient. Virtually all of the overhead of the checkpointing system comes from saving the checkpoint to disk. The checkpointing library added no measurable overhead to tested application programs when they took no checkpoints. Checkpoint file size is approximately the same size as the checkpointed process’s address space. On the current implementation WATER-SPATIAL from the SPLASH2 benchmark suite saved a 2.8 MB checkpoint in about 0.18 seconds for local disk or about 21.55 seconds for an NFS mounted disk. The overhead of saving state to disk can be minimized through various techniques including varying the checkpoint interval and excluding regions of the address space from checkpoints.",
"As computational clusters increase in size, their mean time to failure reduces drastically. Typically, checkpointing is used to minimize the loss of computation. Most checkpointing techniques, however, require central storage for storing checkpoints. This results in a bottleneck and severely limits the scalability of checkpointing, while also proving to be too expensive for dedicated checkpointing networks and storage systems. We propose a scalable replication-based MPI checkpointing facility. Our reference implementation is based on LAM MPI; however, it is directly applicable to any MPI implementation. We extend the existing state of fault-tolerant MPI with asynchronous replication, eliminating the need for central or network storage. We evaluate centralized storage, a Sun-X4500-based solution, an EMC storage area network (SAN), and the Ibrix commercial parallel file system and show that they are not scalable, particularly after 64 CPUs. We demonstrate the low overhead of our checkpointing and replication scheme with the NAS Parallel Benchmarks and the High-Performance LINPACK benchmark with tests up to 256 nodes while demonstrating that checkpointing and replication can be achieved with a much lower overhead than that provided by current techniques. Finally, we show that the monetary cost of our solution is as low as 25 percent of that of a typical SAN parallel-file-system-equipped storage system.",
"We present a new distributed checkpoint-restart mechanism, Cruz, that works without requiring application, library, or base kernel modifications. This mechanism provides comprehensive support for checkpointing and restoring application state, both at user level and within the OS. Our implementation builds on Zap, a process migration mechanism, implemented as a Linux kernel module, which operates by interposing a thin layer between applications and the OS. In particular, we enable support for networked applications by adding migratable IP and MAC addresses, and checkpoint-restart of socket buffer state, socket options, and TCP state. We leverage this capability to devise a novel method for coordinated checkpoint-restart that is simpler than prior approaches. For instance, it eliminates the need to flush communication channels by exploiting the packet re-transmission behavior of TCP and existing OS support for packet filtering. Our experiments show that the overhead of coordinating checkpoint-restart is negligible, demonstrating the scalability of this approach.",
"Application development for distributed-computing \"Grids\" can benefit from tools that variously hide or enable application-level management of critical aspects of the heterogeneous environment. As part of an investigation of these issues, we have developed MPICH-G2, a Grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers, at the same or different sites, using the same commands that would be used on a parallel computer. This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus Toolkit for authentication, authorization, resource allocation, executable staging, and I O, as well as for process creation, monitoring, and control. Various performance-critical operations, including startup and collective operations, are configured to exploit network topology information. The library also exploits MPI constructs for performance management; for example, the MPI communicator construct is used for application-level discovery of, and adaptation to, both network topology and network quality-of-service mechanisms. We describe the MPICH-G2 design and implementation, present performance results, and review application experiences, including record-setting distributed simulations.",
"The running times of many computational science applications, such as protein-folding using ab initio methods, are much longer than the mean-time-to-failure of high-performance computing platforms. To run to completion, therefore, these applications must tolerate hardware failures.In this paper, we focus on the stopping failure model in which a faulty process hangs and stops responding to the rest of the system. We argue that tolerating such faults is best done by an approach called application-level coordinated non-blocking checkpointing, and that existing fault-tolerance protocols in the literature are not suitable for implementing this approach.We then present a suitable protocol, which is implemented by a co-ordination layer that sits between the application program and the MPI library. We show how this protocol can be used with a precompiler that instruments C MPI programs to save application and MPI library state. An advantage of our approach is that it is independent of the MPI implementation. We present experimental results that argue that the overhead of using our system can be small.",
"The ability to produce malleable parallel applications that can be stopped and reconfigured during the execution can offer attractive benefits for both the system and the applications. The reconfiguration can be in terms of varying the parallelism for the applications, changing the data distributions during the executions or dynamically changing the software components involved in the application execution. In distributed and Grid computing systems, migration and reconfiguration of such malleable applications across distributed heterogeneous sites which do not share common file systems provides flexibility for scheduling and resource management in such distributed environments. The present reconfiguration systems do not support migration of parallel applications to distributed locations. In this paper, we discuss a framework for developing malleable and migratable MPI message-passing parallel applications for distributed systems. The framework includes a user-level checkpointing library called SRS and a runtime support system that manages the checkpointed data for distribution to distributed locations. Our experiments and results indicate that the parallel applications, with instrumentation to SRS library, were able to achieve reconfigurability incurring about 15-35 overhead.",
"Global Computing platforms, large scale clusters and future TeraGRID systems gather thousands of nodes for computing parallel scientific applications. At this scale, node failures or disconnections are frequent events. This Volatility reduces the MTBF of the whole system in the range of hours or minutes. We present MPICH-V, an automatic Volatility tolerant MPI environment based on uncoordinated checkpoint roll-back and distributed message logging. MPICH-V architecture relies on Channel Memories, Checkpoint servers and theoretically proven protocols to execute existing or new, SPMD and Master-Worker MPI applications on volatile nodes. To evaluate its capabilities, we run MPICH-V within a framework for which the number of nodes, Channels Memories and Checkpoint Servers can be completely configured as well as the node Volatility. We present a detailed performance evaluation of every component of MPICH-V and its global performance for non-trivial parallel applications. Experimental results demonstrate good scalability and high tolerance to node volatility.",
"Fault tolerant MPI (FTMPI) enables fault tolerance to the MPICH, an open source GPL licensed implementation of MPI standard by Argonne National Laboratory's Mathematics and Computer Science Division. FTMPI is a transparent fault-tolerant environment, based on synchronous checkpointing and restarting mechanism. FTMPI relies on non-multithreaded single process checkpointing library to synchronously checkpoint an application process. Global replicated system controller and cluster node specific node controller monitors and controls check pointing and recovery activities of all MPI applications within the cluster. This work details the architecture to provide fault tolerance mechanism for MPI based applications running on clusters and the performance of NAS parallel benchmarks and parallelized medium range weather forecasting models, P-T80 and P-TI26. The architecture addresses the following issues also: Replicating system controller to avoid single point of failure. Ensuring consistency of checkpoint files based on distributed two phase commit protocol, and robust fault detection hierarchy.",
"A long-term trend in high-performance computing is the increasing number of nodes in parallel computing platforms, which entails a higher failure probability. Fault tolerant programming environments should be used to guarantee the safe execution of critical applications. Research in fault tolerant MPI has led to the development of several fault tolerant MPI environments. Different approaches are being proposed using a variety of fault tolerant message passing protocols based on coordinated checkpointing or message logging. The most popular approach is with coordinated checkpointing. In the literature, two different concepts of coordinated checkpointing have been proposed: blocking and nonblocking. However they have never been compared quantitatively and their respective scalability remains unknown. The contribution of this paper is to provide the first comparison between these two approaches and a study of their scalability. We have implemented the two approaches within the MPICH environments and evaluate their performance using the NAS parallel benchmarks.",
"To be able to fully exploit ever larger computing platforms, modern HPC applications and system software must be able to tolerate inevitable faults. Historically, MPI implementations that incorporated fault tolerance capabilities have been limited by lack of modularity, scalability and usability. This paper presents the design and implementation of an infrastructure to support checkpoint restart fault tolerance in the Open MPI project. We identify the general capabilities required for distributed checkpoint restart and realize these capabilities as extensible frameworks within Open MPI's modular component architecture. Our design features an abstract interface for providing and accessing fault tolerance services without sacrificing performance, robustness, or flexibility. Although our implementation includes support for some initial checkpoint restart mechanisms, the framework is meant to be extensible and to encourage experimentation of alternative techniques within a production quality MPI implementation.",
"The running times of large-scale computational science and engineering parallel applications, executed on clusters or grid platforms, are usually longer than the mean-time-between-failures (MTBF). Hardware failures must be tolerated by the parallel applications to ensure that no all computation done is lost on machine failures. Checkpointing and rollback recovery is a very useful technique to implement fault-tolerant applications. Although extensive research has been carried out in this field, there are few available tools to help parallel programmers to enhance with fault tolerant capability their applications. This work presents two different approaches to endow with fault tolerance the MPI version of an air quality simulation. A segment-level solution has been implemented by means of the extension of a checkpointing library for sequential codes. A variable-level solution has been implemented manually in the code. The main differences between both approaches are portability, transparency-level and checkpointing overheads. Experimental results comparing both strategies on a cluster of PCs are shown in the paper",
"Traditionally, MPI runtimes have been designed for clusters with a large number of nodes. However, with the advent of MPI+CUDA applications and dense multi-GPU systems, it has become important to design efficient communication schemes. This coupled with new application workloads brought forward by Deep Learning frameworks like Caffe and Microsoft CNTK pose additional design constraints due to very large message communication of GPU buffers during the training phase. In this context, special-purpose libraries like NCCL have been proposed. In this paper, we propose a pipelined chain (ring) design for the MPI_Bcast collective operation along with an enhanced collective tuning framework in MVAPICH2-GDR that enables efficient intra- internode multi-GPU communication. We present an in-depth performance landscape for the proposed MPI_Bcast schemes along with a comparative analysis of NCCL Broadcast and NCCL-based MPI_Bcast. The proposed designs for MVAPICH2-GDR enable up to 14X and 16.6X improvement, compared to NCCL-based solutions, for intra- and internode broadcast latency, respectively. In addition, the proposed designs provide up to 7 improvement over NCCL-based solutions for data parallel training of the VGG network on 128 GPUs using Microsoft CNTK. The proposed solutions outperform the recently introduced NCCL2 library for small and medium message sizes and offer comparable better performance for very large message sizes.",
"A message passing interface (MPI) collective operation such as broadcast involves multiple processes. The process arrival pattern denotes the timing when each process arrives at a collective operation. It can have a profound impact on the performance since it decides the time when each process can start participating in the operation. In this paper, we investigate the broadcast operation with different process arrival patterns. We analyze commonly used broadcast algorithms and show that they cannot guarantee high performance for different process arrival patterns. We develop two process arrival pattern aware algorithms for broadcasting large messages. The performance of proposed algorithms is theoretically within a constant factor of the optimal for any given process arrival pattern. Our experimental evaluation confirms the analytical results: existing broadcast algorithms cannot achieve high performance for many process arrival patterns while the proposed algorithms are robust and efficient across different process arrival patterns.",
"The Message-Passing Interface (MPI) is large and complex. Therefore, programming MPI is error prone. Several MPI runtime correctness tools address classes of usage errors, such as deadlocks or non-portable constructs. To our knowledge none of these tools scales to more than about 100 processes. However, some of the current HPC systems use more than 100,000 cores and future systems are expected to use far more. Since errors often depend on the task count used, we need correctness tools that scale to the full system size. We present a novel framework for scalable MPI correctness tools to address this need. Our fine-grained, module-based approach supports rapid prototyping and allows correctness tools built upon it to adapt to different architectures and use cases. The design uses P n MPI to instantiate a tool from a set of individual modules. We present an overview of our design, along with first performance results for a proof of concept implementation."
]
} |
cs0701037 | 2951856549 | DMTCP (Distributed MultiThreaded CheckPointing) is a transparent user-level checkpointing package for distributed applications. Checkpointing and restart is demonstrated for a wide range of over 20 well known applications, including MATLAB, Python, TightVNC, MPICH2, OpenMPI, and runCMS. RunCMS runs as a 680 MB image in memory that includes 540 dynamic libraries, and is used for the CMS experiment of the Large Hadron Collider at CERN. DMTCP transparently checkpoints general cluster computations consisting of many nodes, processes, and threads; as well as typical desktop applications. On 128 distributed cores (32 nodes), checkpoint and restart times are typically 2 seconds, with negligible run-time overhead. Typical checkpoint times are reduced to 0.2 seconds when using forked checkpointing. Experimental results show that checkpoint time remains nearly constant as the number of nodes increases on a medium-size cluster. DMTCP automatically accounts for fork, exec, ssh, mutexes semaphores, TCP IP sockets, UNIX domain sockets, pipes, ptys (pseudo-terminals), terminal modes, ownership of controlling terminals, signal handlers, open file descriptors, shared open file descriptors, I O (including the readline library), shared memory (via mmap), parent-child process relationships, pid virtualization, and other operating system artifacts. By emphasizing an unprivileged, user-space approach, compatibility is maintained across Linux kernels from 2.6.9 through the current 2.6.28. Since DMTCP is unprivileged and does not require special kernel modules or kernel patches, DMTCP can be incorporated and distributed as a checkpoint-restart module within some larger package. | A crossover between these two categories is the kernel level checkpointer BLCR @cite_7 @cite_19 . BLCR is particularly notable because of its widespread usage. BLCR itself can only checkpoint processes on a single machine. However some MPI libraries (including some versions of OpenMPI, LAM MPI, MVAPICH2, and MPICH-V) are able to integrate with BLCR to provide distributed checkpointing. | {
"cite_N": [
"@cite_19",
"@cite_7"
],
"mid": [
"2045879521",
"2116115793"
],
"abstract": [
"As high performance clusters continue to grow in size and popularity, issues of fault tolerance and reliability are becoming limiting factors on application scalability. To address these issues, we present the design and implementation of a system for providing coordinated checkpointing and rollback recovery for MPI-based parallel applications. Our approach integrates the Berkeley Lab BLCR kernel-level process checkpoint system with the LAM implementation of MPI through a defined checkpoint restart interface. Checkpointing is transparent to the application, allowing the system to be used for cluster maintenance and scheduling reasons as well as for fault tolerance. Experimental results show negligible communication performance impact due to the incorporation of the checkpoint support capabilities into LAM MPI.",
"This article describes the motivation, design and implementation of Berkeley Lab Checkpoint Restart (BLCR), a system-level checkpoint restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to fault precursors (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters. © 2006 IOP Publishing Ltd."
]
} |
cs0701037 | 2951856549 | DMTCP (Distributed MultiThreaded CheckPointing) is a transparent user-level checkpointing package for distributed applications. Checkpointing and restart is demonstrated for a wide range of over 20 well known applications, including MATLAB, Python, TightVNC, MPICH2, OpenMPI, and runCMS. RunCMS runs as a 680 MB image in memory that includes 540 dynamic libraries, and is used for the CMS experiment of the Large Hadron Collider at CERN. DMTCP transparently checkpoints general cluster computations consisting of many nodes, processes, and threads; as well as typical desktop applications. On 128 distributed cores (32 nodes), checkpoint and restart times are typically 2 seconds, with negligible run-time overhead. Typical checkpoint times are reduced to 0.2 seconds when using forked checkpointing. Experimental results show that checkpoint time remains nearly constant as the number of nodes increases on a medium-size cluster. DMTCP automatically accounts for fork, exec, ssh, mutexes semaphores, TCP IP sockets, UNIX domain sockets, pipes, ptys (pseudo-terminals), terminal modes, ownership of controlling terminals, signal handlers, open file descriptors, shared open file descriptors, I O (including the readline library), shared memory (via mmap), parent-child process relationships, pid virtualization, and other operating system artifacts. By emphasizing an unprivileged, user-space approach, compatibility is maintained across Linux kernels from 2.6.9 through the current 2.6.28. Since DMTCP is unprivileged and does not require special kernel modules or kernel patches, DMTCP can be incorporated and distributed as a checkpoint-restart module within some larger package. | Much MPI-specific work has been based on coordinated checkpointing and the use of hooks into communication by the MPI library @cite_10 @cite_6 . In contrast, our goal is to support more general distributed scientific software. | {
"cite_N": [
"@cite_10",
"@cite_6"
],
"mid": [
"2114035455",
"2545968212"
],
"abstract": [
"The running times of many computational science applications, such as protein-folding using ab initio methods, are much longer than the mean-time-to-failure of high-performance computing platforms. To run to completion, therefore, these applications must tolerate hardware failures.In this paper, we focus on the stopping failure model in which a faulty process hangs and stops responding to the rest of the system. We argue that tolerating such faults is best done by an approach called application-level coordinated non-blocking checkpointing, and that existing fault-tolerance protocols in the literature are not suitable for implementing this approach.We then present a suitable protocol, which is implemented by a co-ordination layer that sits between the application program and the MPI library. We show how this protocol can be used with a precompiler that instruments C MPI programs to save application and MPI library state. An advantage of our approach is that it is independent of the MPI implementation. We present experimental results that argue that the overhead of using our system can be small.",
"Most high-performance, scientific libraries have adopted hybrid parallelization schemes - such as the popular MPI+OpenMP hybridization - to benefit from the capacities of modern distributed-memory machines. While these approaches have shown to achieve high performance, they require a lot of effort to design and maintain sophisticated synchronization communication strategies. On the other hand, task-based programming paradigms aim at delegating this burden to a runtime system for maximizing productivity. In this article, we assess the potential of task-based fast multipole methods (FMM) on clusters of multicore processors. We propose both a hybrid MPI+task FMM parallelization and a pure task-based parallelization where the MPI communications are implicitly handled by the runtime system. The latter approach yields a very compact code following a sequential task-based programming model. We show that task-based approaches can compete with a hybrid MPI+OpenMP highly optimized code and that furthermore the compact task-based scheme fully matches the performance of the sophisticated, hybrid MPI+task version, ensuring performance while maximizing productivity. We illustrate our discussion with the ScalFMM FMM library and the StarPU runtime system."
]
} |
cs0701037 | 2951856549 | DMTCP (Distributed MultiThreaded CheckPointing) is a transparent user-level checkpointing package for distributed applications. Checkpointing and restart is demonstrated for a wide range of over 20 well known applications, including MATLAB, Python, TightVNC, MPICH2, OpenMPI, and runCMS. RunCMS runs as a 680 MB image in memory that includes 540 dynamic libraries, and is used for the CMS experiment of the Large Hadron Collider at CERN. DMTCP transparently checkpoints general cluster computations consisting of many nodes, processes, and threads; as well as typical desktop applications. On 128 distributed cores (32 nodes), checkpoint and restart times are typically 2 seconds, with negligible run-time overhead. Typical checkpoint times are reduced to 0.2 seconds when using forked checkpointing. Experimental results show that checkpoint time remains nearly constant as the number of nodes increases on a medium-size cluster. DMTCP automatically accounts for fork, exec, ssh, mutexes semaphores, TCP IP sockets, UNIX domain sockets, pipes, ptys (pseudo-terminals), terminal modes, ownership of controlling terminals, signal handlers, open file descriptors, shared open file descriptors, I O (including the readline library), shared memory (via mmap), parent-child process relationships, pid virtualization, and other operating system artifacts. By emphasizing an unprivileged, user-space approach, compatibility is maintained across Linux kernels from 2.6.9 through the current 2.6.28. Since DMTCP is unprivileged and does not require special kernel modules or kernel patches, DMTCP can be incorporated and distributed as a checkpoint-restart module within some larger package. | In addition to distributed checkpointing, many packages exist which perform single-process checkpointing @cite_11 @cite_16 @cite_35 @cite_3 @cite_14 @cite_28 @cite_26 @cite_13 @cite_22 @cite_17 . | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_26",
"@cite_22",
"@cite_28",
"@cite_17",
"@cite_3",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"1520339130",
"2115367411",
"2010439775",
"2139835701",
"2048894106",
"1537929875",
"2105526656",
"2225938006",
"2165022815",
"2014594876"
],
"abstract": [
"Multiple threads running in a single, shared address space is a simple model for writing parallel programs for symmetric multiprocessor (SMP) machines and for overlapping I O and computation in programs run on either SMP or single processor machines. Often a long running program’s user would like the program to save its state periodically in a checkpoint from which it can recover in case of a failure. This paper introduces the first system to provide checkpointing support for multithreaded programs that use LinuxThreads, the POSIX based threads library for Linux. The checkpointing library is simple to use, automatically takes checkpoint, is flexible, and efficient. Virtually all of the overhead of the checkpointing system comes from saving the checkpoint to disk. The checkpointing library added no measurable overhead to tested application programs when they took no checkpoints. Checkpoint file size is approximately the same size as the checkpointed process’s address space. On the current implementation WATER-SPATIAL from the SPLASH2 benchmark suite saved a 2.8 MB checkpoint in about 0.18 seconds for local disk or about 21.55 seconds for an NFS mounted disk. The overhead of saving state to disk can be minimized through various techniques including varying the checkpoint interval and excluding regions of the address space from checkpoints.",
"We present a new distributed checkpoint-restart mechanism, Cruz, that works without requiring application, library, or base kernel modifications. This mechanism provides comprehensive support for checkpointing and restoring application state, both at user level and within the OS. Our implementation builds on Zap, a process migration mechanism, implemented as a Linux kernel module, which operates by interposing a thin layer between applications and the OS. In particular, we enable support for networked applications by adding migratable IP and MAC addresses, and checkpoint-restart of socket buffer state, socket options, and TCP state. We leverage this capability to devise a novel method for coordinated checkpoint-restart that is simpler than prior approaches. For instance, it eliminates the need to flush communication channels by exploiting the packet re-transmission behavior of TCP and existing OS support for packet filtering. Our experiments show that the overhead of coordinating checkpoint-restart is negligible, demonstrating the scalability of this approach.",
"We have developed and implemented a checkpointing and restart algorithm for parallel programs running on commercial uniprocessors and shared-memory multiprocessors. The algorithm runs concurrently with the target program, interrupts the target program for small, fixed amounts of time and is transparent to the checkpointed program and its compiler. The algorithm achieves its efficiency through a novel use of address translation hardware that allows the most time-consuming operations of the checkpoint to be overlapped with the running of the program being checkpointed.",
"As computational clusters increase in size, their mean time to failure reduces drastically. Typically, checkpointing is used to minimize the loss of computation. Most checkpointing techniques, however, require central storage for storing checkpoints. This results in a bottleneck and severely limits the scalability of checkpointing, while also proving to be too expensive for dedicated checkpointing networks and storage systems. We propose a scalable replication-based MPI checkpointing facility. Our reference implementation is based on LAM MPI; however, it is directly applicable to any MPI implementation. We extend the existing state of fault-tolerant MPI with asynchronous replication, eliminating the need for central or network storage. We evaluate centralized storage, a Sun-X4500-based solution, an EMC storage area network (SAN), and the Ibrix commercial parallel file system and show that they are not scalable, particularly after 64 CPUs. We demonstrate the low overhead of our checkpointing and replication scheme with the NAS Parallel Benchmarks and the High-Performance LINPACK benchmark with tests up to 256 nodes while demonstrating that checkpointing and replication can be achieved with a much lower overhead than that provided by current techniques. Finally, we show that the monetary cost of our solution is as low as 25 percent of that of a typical SAN parallel-file-system-equipped storage system.",
"Presents the results of an implementation of several algorithms for checkpointing and restarting parallel programs on shared-memory multiprocessors. The algorithms are compared according to the metrics of overall checkpointing time, overhead imposed by the checkpointer on the target program, and amount of time during which the checkpointer interrupts the target program. The best algorithm measured achieves its efficiency through a variation of copy-on-write, which allows the most time-consuming operations of the checkpoint to be overlapped with the running of the program being checkpointed. >",
"Checkpointing is a simple technique for rollback recovery: the state of an executing program is periodically saved to a disk file from which it can be recovered after a failure. While recent research has developed a collection of powerful techniques for minimizing the overhead of writing checkpoint files, checkpointing remains unavailable to most application developers. In this paper we describe libckpt, a portable checkpointing tool for Unix that implements all applicable performance optimizations which are reported in the literature. While libckpt can be used in a mode which is almost totally transparent to the programmer, it also supports the incorporation of user directives into the creation of checkpoints. This user-directed checkpointing is an innovation which is unique to our work.",
"We introduce transactors, a fault-tolerant programming model for composing loosely-coupled distributed components running in an unreliable environment such as the internet into systems that reliably maintain globally consistent distributed state. The transactor model incorporates certain elements of traditional transaction processing, but allows these elements to be composed in different ways without the need for central coordination, thus facilitating the study of distributed fault-tolerance from a semantic point of view. We formalize our approach via the τ-calculus, an extended lambda-calculus based on the actor model, and illustrate its usage through a number of examples. The τ-calculus incorporates constructs which distributed processes can use to create globally-consistent checkpoints. We provide an operational semantics for the τ-calculus, and formalize the following safety and liveness properties: first, we show that globally-consistent checkpoints have equivalent execution traces without any node failures or application-level failures, and second, we show that it is possible to reach globally-consistent checkpoints provided that there is some bounded failure-free interval during which checkpointing can occur.",
"In order to cope with the ever-increasing data volume, distributed stream processing systems have been proposed. To ensure scalability most distributed systems partition the data and distribute the workload among multiple machines. This approach does, however, raise the question how the data and the workload should be partitioned and distributed. A uniform scheduling strategy--a uniform distribution of computation load among available machines--typically used by stream processing systems, disregards network-load as one of the major bottlenecks for throughput resulting in an immense load in terms of intermachine communication. In this paper we propose a graph-partitioning based approach for workload scheduling within stream processing systems. We implemented a distributed triple-stream processing engine on top of the Storm realtime computation framework and evaluate its communication behavior using two real-world datasets. We show that the application of graph partitioning algorithms can decrease inter-machine communication substantially (by 40 to 99 ) whilst maintaining an even workload distribution, even using very limited data statistics. We also find that processing RDF data as single triples at a time rather than graph fragments (containing multiple triples), may decrease throughput indicating the usefulness of semantics.",
"Given the scale of massively parallel systems, occurrence of faults is no longer an exception but a regular event. Periodic checkpointing is becoming increasingly important in these systems. However, huge memory footprints of parallel applications place severe limitations on scalability of normal checkpointing techniques. Incremental checkpointing is a well researched technique that addresses scalability concerns, but most of the implementations require paging support from hardware and the underlying operating system, which may not be always available. In this paper, we propose a software based adaptive incremental checkpoint technique which uses a secure hash function to uniquely identify changed blocks in memory. Our algorithm is the first self-optimizing algorithm that dynamically computes the optimal block boundaries, based on the history of changed blocks. This provides better opportunities for minimizing checkpoint file size. Since the hash is computed in software, we do not need any system support for this. We have implemented and tested this mechanism on the BlueGene L system. Our results on several well-known benchmarks are encouraging, both in terms of reduction in average checkpoint file size and adaptivity towards application's memory access patterns.",
"The ability to produce malleable parallel applications that can be stopped and reconfigured during the execution can offer attractive benefits for both the system and the applications. The reconfiguration can be in terms of varying the parallelism for the applications, changing the data distributions during the executions or dynamically changing the software components involved in the application execution. In distributed and Grid computing systems, migration and reconfiguration of such malleable applications across distributed heterogeneous sites which do not share common file systems provides flexibility for scheduling and resource management in such distributed environments. The present reconfiguration systems do not support migration of parallel applications to distributed locations. In this paper, we discuss a framework for developing malleable and migratable MPI message-passing parallel applications for distributed systems. The framework includes a user-level checkpointing library called SRS and a runtime support system that manages the checkpointed data for distribution to distributed locations. Our experiments and results indicate that the parallel applications, with instrumentation to SRS library, were able to achieve reconfigurability incurring about 15-35 overhead."
]
} |
cs0701001 | 2950643428 | Graph-based algorithms for point-to-point link scheduling in Spatial reuse Time Division Multiple Access (STDMA) wireless ad hoc networks often result in a significant number of transmissions having low Signal to Interference and Noise density Ratio (SINR) at intended receivers, leading to low throughput. To overcome this problem, we propose a new algorithm for STDMA link scheduling based on a graph model of the network as well as SINR computations. The performance of our algorithm is evaluated in terms of spatial reuse and computational complexity. Simulation results demonstrate that our algorithm achieves better performance than existing algorithms. | The concept of STDMA for multihop wireless ad hoc networks was formalized in @cite_7 . Centralized algorithms @cite_12 @cite_9 as well as distributed algorithms @cite_11 @cite_3 have been proposed for generating reuse schedules. The problem of determining an optimal minimum-length STDMA schedule for a general multihop ad hoc network is NP-complete for both link and broadcast scheduling @cite_14 . In fact, this is closely related to the problem of determining the minimum number of colors to color all the edges (or vertices) of a graph under certain adjacency constraints. However, most wireless ad hoc networks can be modeled by planar or close-to-planar graphs and thus near-optimal edge coloring algorithms can be developed for these restricted classes of graphs. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_12",
"@cite_11"
],
"mid": [
"2107421266",
"2142928312",
"2181222287",
"1976011215",
"1981996334",
"2115009661"
],
"abstract": [
"A comprehensive study of the problem of scheduling broadcast transmissions in a multihop, mobile packet radio network is provided that is based on throughput optimization subject to freedom from interference. It is shown that the problem is NP complete. A centralized algorithm that runs in polynomial time and results in efficient (maximal) schedules is proposed. A distributed algorithm that achieves the same schedules is then proposed. The algorithm results in a maximal broadcasting zone in every slot. >",
"We consider the problem of adjusting the transmit powers of nodes in a multihop wireless network (also called an ad hoc network) to create a desired topology. We formulate it as a constrained optimization problem with two constraints-connectivity and biconnectivity, and one optimization objective-maximum power used. We present two centralized algorithms for use in static networks, and prove their optimality. For mobile networks, we present two distributed heuristics that adaptively adjust node transmit powers in response to topological changes and attempt to maintain a connected topology using minimum power. We analyze the throughput, delay, and power consumption of our algorithms using a prototype software implementation, an emulation of a power-controllable radio, and a detailed channel model. Our results show that the performance of multihop wireless networks in practice can be substantially increased with topology control.",
"In multi-hop radio networks, such as wireless ad-hoc networks and wireless sensor networks, nodes employ a MAC (Medium Access Control) protocol such as TDMA to coordinate accesses to the shared medium and to avoid interference of close-by transmissions. These protocols can be implemented using standard node coloring. The ( Δ + 1 ) -coloring problem is to color all nodes in as few timeslots as possible using at most Δ + 1 colors such that any two nodes within distance R are assigned different colors, where R is a given parameter and Δ is the maximum degree of the modeled unit disk graph using R as a scaling factor. Being one of the most fundamental problems in distributed computing, this problem is well studied and there is a long chain of algorithms prescribed for it. However, all previous works are based on abstract models, such as message passing models and graph based interference models, which limit the utility of these algorithms in practice. In this paper, for the first time, we consider the distributed ( Δ + 1 ) -coloring problem under the more practical SINR interference model. In particular, without requiring any knowledge about the neighborhood, we propose a novel randomized ( Δ + 1 ) -coloring algorithm with time complexity O ( Δ log ? n + log 2 ? n ) . For the case where nodes cannot adjust their transmission power, we give an O ( Δ log 2 ? n ) randomized algorithm, which only incurs a logarithmic multiplicative factor overhead.",
"During and immediately after their deployment, ad hoc and sensor networks lack an efficient communication scheme rendering even the most basic network coordination problems difficult. Before any reasonable communication can take place, nodes must come up with an initial structure that can serve as a foundation for more sophisticated algorithms. In this paper, we consider the problem of obtaining a vertex coloring as such an initial structure. We propose an algorithm that works in the unstructured radio network model. This model captures the characteristics of newly deployed ad hoc and sensor networks, i.e. asynchronous wake-up, no collision-detection, and scarce knowledge about the network topology. When modeling the network as a graph with bounded independence, our algorithm produces a correct coloring with O(Δ) colors in time O(Δ log n) with high probability, where n and Δ are the number of nodes in the network and the maximum degree, respectively. Also, the number of locally used colors depends only on the local node density. Graphs with bounded independence generalize unit disk graphs as well as many other well-known models for wireless multi-hop networks. They allow us to capture aspects such as obstacles, fading, or irregular signal-propagation.",
"Wireless mesh networks are expected to be widely used to provide Internet access in the near future. In order to fulfill the expectations, these networks should provide high throughput simultaneously to many users. Recent research has indicated that, due to its conservative CSMA CA channel access scheme and RTS CTS mechanism, 802.11 is not suitable to achieve this goal.In this paper, we investigate throughput improvements achievable by replacing CSMA CA with an STDMA scheme where transmissions are scheduled according to the physical interference model. To this end, we present a computationally efficient heuristic for computing a feasible schedule under the physical interference model and we prove, under uniform random node distribution, an approximation factor for the length of this schedule relative to the shortest schedule possible with physical interference. This represents the first known polynomial-time algorithm for this problem with a proven approximation factor.We also evaluate the throughput and execution time of this algorithm on representative wireless mesh network scenarios through packet-level simulations. The results show that throughput with STDMA and physical-interference-based scheduling can be up to three times higher than 802.11 for the parameter values simulated. The results also show that our scheduling algorithm can schedule networks with 2000 nodes in about 2.5 minutes.",
"Several multihop applications developed for vehicular ad hoc networks use broadcast as a means to either discover nearby neighbors or propagate useful traffic information to other vehicles located within a certain geographical area. However, the conventional broadcast mechanism may lead to the so-called broadcast storm problem, a scenario in which there is a high level of contention and collisions at the link layer due to an excessive number of broadcast packets. While this is a well-known problem in mobile ad hoc wireless networks, only a few studies have addressed this issue in the VANET context, where mobile hosts move along the roads in a certain limited set of directions as opposed to randomly moving in arbitrary directions within a bounded area. Unlike other existing works, we quantify the impact of broadcast storms in VANETs in terms of message delay and packet loss rate in addition to conventional metrics such as message reachability and overhead. Given that VANET applications are currently confined to using the DSRC protocol at the data link layer, we propose three probabilistic and timer-based broadcast suppression techniques: weighted p-persistence, slotted 1-persistence, and slotted p-persistence schemes, to be used at the network layer. Our simulation results show that the proposed schemes can significantly reduce contention at the MAC layer by achieving up to 70 percent reduction in packet loss rate while keeping end-to-end delay at acceptable levels for most VANET applications."
]
} |
cs0701001 | 2950643428 | Graph-based algorithms for point-to-point link scheduling in Spatial reuse Time Division Multiple Access (STDMA) wireless ad hoc networks often result in a significant number of transmissions having low Signal to Interference and Noise density Ratio (SINR) at intended receivers, leading to low throughput. To overcome this problem, we propose a new algorithm for STDMA link scheduling based on a graph model of the network as well as SINR computations. The performance of our algorithm is evaluated in terms of spatial reuse and computational complexity. Simulation results demonstrate that our algorithm achieves better performance than existing algorithms. | A significant work in STDMA link scheduling is reported in @cite_14 , in which the authors show that tree networks can be scheduled optimally, oriented graphs can be scheduled near-optimally, and arbitrary networks can be scheduled such that the schedule is bounded by a length proportional to the graph thickness (the thickness of a graph is the minimum number of planar graphs into which the given graph can be partitioned) times the optimum number of colors. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2294674552"
],
"abstract": [
"This paper introduces the concept of low-congestion shortcuts for (near-)planar networks, and demonstrates their power by using them to obtain near-optimal distributed algorithms for problems such as Minimum Spanning Tree (MST) or Minimum Cut, in planar networks. Consider a graph G = (V, E) and a partitioning of V into subsets of nodes S1, . . ., SN, each inducing a connected subgraph G[Si]. We define an α-congestion shortcut with dilation β to be a set of subgraphs H1, . . ., HN ⊆ G, one for each subset Si, such that 1. For each i ∈ [1, N], the diameter of the subgraph G[Si] + Hi is at most β. 2. For each edge e ∈ E, the number of subgraphs G[Si] + Hi containing e is at most α. We prove that any partition of a D-diameter planar graph into individually-connected parts admits an O(D log D)-congestion shortcut with dilation O(D log D), and we also present a distributed construction of it in O(D) rounds. We moreover prove these parameters to be near-optimal; i.e., there are instances in which, unavoidably, max α, β = Ω(D[EQUATION]). Finally, we use low-congestion shortcuts, and their efficient distributed construction, to derive O(D)-round distributed algorithms for MST and Min-Cut, in planar networks. This complexity nearly matches the trivial lower bound of Ω(D). We remark that this is the first result bypassing the well-known Ω(D + [EQUATION]) existential lower bound of general graphs (see Peleg and Rubinovich [FOCS'99]; Elkin [STOC'04]; and Das [STOC'11]) in a family of graphs of interest."
]
} |
cs0701001 | 2950643428 | Graph-based algorithms for point-to-point link scheduling in Spatial reuse Time Division Multiple Access (STDMA) wireless ad hoc networks often result in a significant number of transmissions having low Signal to Interference and Noise density Ratio (SINR) at intended receivers, leading to low throughput. To overcome this problem, we propose a new algorithm for STDMA link scheduling based on a graph model of the network as well as SINR computations. The performance of our algorithm is evaluated in terms of spatial reuse and computational complexity. Simulation results demonstrate that our algorithm achieves better performance than existing algorithms. | A probabilistic analysis of the throughput performance of graph-based scheduling algorithms under the physical interference model is derived in @cite_0 . The authors determine the optimal number of simultaneous transmissions by maximizing a lower bound on the physical throughput and subsequently propose a truncated graph-based scheduling algorithm that provides probabilistic guarantees for network throughput. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2154570806"
],
"abstract": [
"Many published algorithms used for scheduling transmissions in packet radio networks are based on finding maximal independent sets in an underlying graph. Such algorithms are developed under the assumptions of variations of the protocol interference model, which does not take the aggregated effect of interference into consideration. We provide a probabilistic analysis for the throughput performance of such graph based scheduling algorithms under the physical interference model. We show that in many scenarios a significant portion of transmissions scheduled based on the protocol interference model result in unacceptable signal-to-interference and noise ratio (SINR) at intended receivers. Our analytical as well as simulation results indicate that, counter intuitively, maximization of the cardinality of independent sets does not necessarily increase the throughput of a network. We introduce the truncated graph based scheduling algorithm (TGSA) that provides probabilistic guarantees for the throughput performance of the network."
]
} |
cs0701001 | 2950643428 | Graph-based algorithms for point-to-point link scheduling in Spatial reuse Time Division Multiple Access (STDMA) wireless ad hoc networks often result in a significant number of transmissions having low Signal to Interference and Noise density Ratio (SINR) at intended receivers, leading to low throughput. To overcome this problem, we propose a new algorithm for STDMA link scheduling based on a graph model of the network as well as SINR computations. The performance of our algorithm is evaluated in terms of spatial reuse and computational complexity. Simulation results demonstrate that our algorithm achieves better performance than existing algorithms. | In @cite_20 , the authors consider wireless mesh networks with half duplex and full duplex orthogonal channels, wherein each node can transmit to at most one node and/or receive from at most @math nodes ( @math ) during any time slot. They investigate the joint problem of routing flows and scheduling link transmissions to analyze the achievability of a given rate vector between multiple source-destination pairs. The scheduling problem is solved as an edge-coloring problem on a multi-graph and the necessary conditions from the scheduling problem lead to constraints on the routing problem, which is then formulated as a linear optimization problem. Correspondingly, the authors present a greedy coloring algorithm to obtain a 2-approximate solution to the chromatic index problem and describe a polynomial time approximation algorithm to obtain an @math -optimal solution of the routing problem using the primal-dual approach. Finally, they evaluate the performance of their algorithms via simulations. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2106117595"
],
"abstract": [
"This paper considers the problem of determining the achievable rates in multi-hop wireless mesh networks with orthogonal channels. We classify wireless networks with orthogonal channels into two types, half duplex and full duplex, and consider the problem of jointly routing the flows and scheduling transmissions to achieve a given rate vector. We develop tight necessary and sufficient conditions for the achievability of the rate vector. We develop efficient and easy to implement Fully Polynomial Time Approximation Schemes for solving the routing problem. The scheduling problem is a solved as a graph edge-coloring problem. We show that this approach guarantees that the solution obtained is within 50 of the optimal solution in the worst case (within 67 of the optimal solution in a common special case) and, in practice, is close to 90 of the optimal solution on the average. The approach that we use is quite flexible and can be extended to handle more sophisticated interference conditions, and routing with diversity requirements."
]
} |
cs0701082 | 2949509156 | In this paper we introduce a class of constraint logic programs such that their termination can be proved by using affine level mappings. We show that membership to this class is decidable in polynomial time. | Recently, decidability of classes of imperative programs has been studied in @cite_19 @cite_10 @cite_1 . Tiwari considers real-valued programs with no nested loops and no branching inside a loop @cite_1 . Such programs correspond to one-binary-rule CLP( @math ). The author provides decidability results for subclasses of these programs. Our approach does not restrict nesting of loops and it allows internal branching . While in general termination of such programs is undecidable @cite_1 , we identified a subclass of programs with decidable termination property. Termination of the following CLP( @math ) program and its imperative equivalent can be shown by our method but not by the one proposed in @cite_1 . ] [ ] | {
"cite_N": [
"@cite_1",
"@cite_19",
"@cite_10"
],
"mid": [
"1561261246",
"2110820023",
"637774"
],
"abstract": [
"We show that termination of a simple class of linear loops over the integers is decidable. Namely we show that termination of deterministic linear loops is decidable over the integers in the homogeneous case, and over the rationals in the general case. This is done by analyzing the powers of a matrix symbolically using its eigenvalues. Our results generalize the work of Tiwari [Tiw04], where similar results were derived for termination over the reals. We also gain some insights into termination of non-homogeneous integer programs, that are very common in practice.",
"Analysis of termination and other liveness properties of an imperative program can be reduced to termination proof synthesis for simple loops, i.e., loops with only variable updates in the loop body. Among simple loops, the subset of Linear Simple Loops (LSLs) is particular interesting because it is common in practice and expressive in theory. Existing techniques can successfully synthesize a linear ranking function for an LSL if there exists one. However, when a terminating LSL does not have a linear ranking function, these techniques fail. In this paper we describe an automatic method that generates proofs of universal termination for LSLs based on the synthesis of disjunctive ranking relations. The method repeatedly finds linear ranking functions on parts of the state space and checks whether the transitive closure of the transition relation is included in the union of the ranking relations. Our method extends the work of Podelski and Rybalchenko [27]. We have implemented a prototype of the method and have shown experimental evidence of the effectiveness of our method.",
"Enriching answer set programming with function symbols makes modeling easier, increases the expressive power, and allows us to deal with infinite domains. However, this comes at a cost: common inference tasks become undecidable. To cope with this issue, recent research has focused on finding trade-offs between expressivity and decidability by identifying classes of logic programs that impose limitations on the use of function symbols but guarantee decidability of common inference tasks. Despite the significant body of work in this area, current approaches do not include many simple practical programs whose evaluation terminates. In this paper, we present the novel class of rule-bounded programs. While current techniques perform a limited analysis of how terms are propagated from an individual argument to another, our technique is able to perform a more global analysis, thereby overcoming several limitations of current approaches. We also present a further class of cycle-bounded programs where groups of rules are analyzed together. We show different results on the correctness and the expressivity of the proposed techniques."
]
} |
cs0701082 | 2949509156 | In this paper we introduce a class of constraint logic programs such that their termination can be proved by using affine level mappings. We show that membership to this class is decidable in polynomial time. | Similarly to @cite_1 , Podelski and Rybalchenko have considered programs with no nested loops and no branching inside a loop. However, they focused on integer programs and provided a polynomial time decidability technique for a subclass of such programs. In the case of general programs, their technique can be applied to provide a sufficient condition for liveness. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1889773849"
],
"abstract": [
"According to a folk theorem, every program can be transformed into a program that produces the same output and only has one loop. We generalize this to a form where the resulting program has one loop and no other branches than the one associated with the loop control. For this branch, branch prediction is easy even for a static branch predictor. If the original program is of length κ, measured in the number of assembly-language instructions, and runs in t(n) time for an input of size n, the transformed program is of length O(κ) and runs in O(κt(n)) time. Normally sorting programs are short, but still κ may be too large for practical purposes. Therefore, we provide more efficient hand-tailored heapsort and mergesort programs. Our programs retain most features of the original programs--e.g. they perform the same number of element comparisons--and they induce O(1) branch mispredictions. On computers where branch mispredictions were expensive, some of our programs were, for integer data and small instances, faster than the counterparts in the GNU implementation of the C++ standard library."
]
} |
cs0701094 | 2949115009 | It is now commonly accepted that the unit disk graph used to model the physical layer in wireless networks does not reflect real radio transmissions, and that the lognormal shadowing model better suits to experimental simulations. Previous work on realistic scenarios focused on unicast, while broadcast requirements are fundamentally different and cannot be derived from unicast case. Therefore, broadcast protocols must be adapted in order to still be efficient under realistic assumptions. In this paper, we study the well-known multipoint relay protocol (MPR). In the latter, each node has to choose a set of neighbors to act as relays in order to cover the whole 2-hop neighborhood. We give experimental results showing that the original method provided to select the set of relays does not give good results with the realistic model. We also provide three new heuristics in replacement and their performances which demonstrate that they better suit to the considered model. The first one maximizes the probability of correct reception between the node and the considered relays multiplied by their coverage in the 2-hop neighborhood. The second one replaces the coverage by the average of the probabilities of correct reception between the considered neighbor and the 2-hop neighbors it covers. Finally, the third heuristic keeps the same concept as the second one, but tries to maximize the coverage level of the 2-hop neighborhood: 2-hop neighbors are still being considered as uncovered while their coverage level is not higher than a given coverage threshold, many neighbors may thus be selected to cover the same 2-hop neighbors. | Among all these solutions, we have chosen to focus on the multipoint relay protocol (MPR) described in @cite_4 for several reasons: | {
"cite_N": [
"@cite_4"
],
"mid": [
"1543191305"
],
"abstract": [
"Multipoint relays offer an optimized way of flooding packets in a radio network. However, this technique requires the last hop knowledge: to decide wether or not a flooding packet is retransmitted, a node needs to know from which node the packet was received. When considering broadcasting at IP level, this information may be difficult to obtain. We thus propose a scheme for computing an optimized connected dominating set from multipoint relays. This set allows to efficiently broadcast packets without the last hop information with performances close to multipoint relay flooding."
]
} |
cs0701094 | 2949115009 | It is now commonly accepted that the unit disk graph used to model the physical layer in wireless networks does not reflect real radio transmissions, and that the lognormal shadowing model better suits to experimental simulations. Previous work on realistic scenarios focused on unicast, while broadcast requirements are fundamentally different and cannot be derived from unicast case. Therefore, broadcast protocols must be adapted in order to still be efficient under realistic assumptions. In this paper, we study the well-known multipoint relay protocol (MPR). In the latter, each node has to choose a set of neighbors to act as relays in order to cover the whole 2-hop neighborhood. We give experimental results showing that the original method provided to select the set of relays does not give good results with the realistic model. We also provide three new heuristics in replacement and their performances which demonstrate that they better suit to the considered model. The first one maximizes the probability of correct reception between the node and the considered relays multiplied by their coverage in the 2-hop neighborhood. The second one replaces the coverage by the average of the probabilities of correct reception between the considered neighbor and the 2-hop neighbors it covers. Finally, the third heuristic keeps the same concept as the second one, but tries to maximize the coverage level of the 2-hop neighborhood: 2-hop neighbors are still being considered as uncovered while their coverage level is not higher than a given coverage threshold, many neighbors may thus be selected to cover the same 2-hop neighbors. | It is efficient using the unit disk graph model. It is used in the well-known standardized routing protocol @cite_10 . It can be used for other miscellaneous purposes (e.g., computing connected dominating sets @cite_6 ). | {
"cite_N": [
"@cite_10",
"@cite_6"
],
"mid": [
"2065248207",
"2109977785"
],
"abstract": [
"In this paper we study a model for ad-hoc networks close enough to reality as to represent existing networks, being at the same time concise enough to promote strong theoretical results. The Quasi Unit Disk Graph model contains all edges shorter than a parameter d between 0 and 1 and no edges longer than 1.We show that .in comparison to the cost known on Unit Disk Graphs .the complexity results in this model contain the additional factor 1 d2. We prove that in Quasi Unit Disk Graphs flooding is an asymptotically message-optimal routing technique, provide a geometric routing algorithm being more efficient above all in dense networks, and show that classic geometric routing is possible with the same performance guarantees as for Unit Disk Graphs if d = 1 v2.",
"In this paper, we presented a fully distributed algorithm to compute a planar subgraph of the underlying wireless connectivity graph. We considered the idealized unit disk graph model in which nodes are assumed to be connected if and only if nodes are within their transmission range. The main contribution of this work is a fully distributed algorithm to extract the connected, planar graph for routing in the wireless networks. The communication cost of the proposed algorithm is O(d log d) bits, where d is the degree of anode. In addition, this paper also presented a geometric routing algorithm. The algorithm is fully distributed and nodes know only the position of other nodes and can communicate with neighboring nodes in their transmission range"
]
} |
cs0701094 | 2949115009 | It is now commonly accepted that the unit disk graph used to model the physical layer in wireless networks does not reflect real radio transmissions, and that the lognormal shadowing model better suits to experimental simulations. Previous work on realistic scenarios focused on unicast, while broadcast requirements are fundamentally different and cannot be derived from unicast case. Therefore, broadcast protocols must be adapted in order to still be efficient under realistic assumptions. In this paper, we study the well-known multipoint relay protocol (MPR). In the latter, each node has to choose a set of neighbors to act as relays in order to cover the whole 2-hop neighborhood. We give experimental results showing that the original method provided to select the set of relays does not give good results with the realistic model. We also provide three new heuristics in replacement and their performances which demonstrate that they better suit to the considered model. The first one maximizes the probability of correct reception between the node and the considered relays multiplied by their coverage in the 2-hop neighborhood. The second one replaces the coverage by the average of the probabilities of correct reception between the considered neighbor and the 2-hop neighbors it covers. Finally, the third heuristic keeps the same concept as the second one, but tries to maximize the coverage level of the 2-hop neighborhood: 2-hop neighbors are still being considered as uncovered while their coverage level is not higher than a given coverage threshold, many neighbors may thus be selected to cover the same 2-hop neighbors. | Obviously, the tricky part of this protocol lies in the selection of the set of relays @math within the @math -hop neighbors of a node @math : the smaller this set is, the smaller the number of retransmissions is and the more efficient the broadcast is. Unfortunately, finding such a set so that it is the smallest possible one is an NP-complete problem, so a greedy heuristic is proposed, which can be found in @cite_12 . Considering a node @math , it can be described as follows: | {
"cite_N": [
"@cite_12"
],
"mid": [
"2122027153"
],
"abstract": [
"Our work is motivated by impromptu (or ''as-you-go'') deployment of wireless relay nodes along a path, a need that arises in many situations. In this paper, the path is modeled as starting at the origin (where there is the data sink, e.g., the control center), and evolving randomly over a lattice in the positive quadrant. A person walks along the path deploying relay nodes as he goes. At each step, the path can, randomly, either continue in the same direction or take a turn, or come to an end, at which point a data source (e.g., a sensor) has to be placed, that will send packets to the data sink. A decision has to be made at each step whether or not to place a wireless relay node. Assuming that the packet generation rate by the source is very low, and simple link-by-link scheduling, we consider the problem of sequential relay placement so as to minimize the expectation of an end-to-end cost metric (a linear combination of the sum of convex hop costs and the number of relays placed). This impromptu relay placement problem is formulated as a total cost Markov decision process. First, we derive the optimal policy in terms of an optimal placement set and show that this set is characterized by a boundary (with respect to the position of the last placed relay) beyond which it is optimal to place the next relay. Next, based on a simpler one-step-look-ahead characterization of the optimal policy, we propose an algorithm which is proved to converge to the optimal placement set in a finite number of steps and which is faster than value iteration. We show by simulations that the distance threshold based heuristic, usually assumed in the literature, is close to the optimal, provided that the threshold distance is carefully chosen."
]
} |
cs0701094 | 2949115009 | It is now commonly accepted that the unit disk graph used to model the physical layer in wireless networks does not reflect real radio transmissions, and that the lognormal shadowing model better suits to experimental simulations. Previous work on realistic scenarios focused on unicast, while broadcast requirements are fundamentally different and cannot be derived from unicast case. Therefore, broadcast protocols must be adapted in order to still be efficient under realistic assumptions. In this paper, we study the well-known multipoint relay protocol (MPR). In the latter, each node has to choose a set of neighbors to act as relays in order to cover the whole 2-hop neighborhood. We give experimental results showing that the original method provided to select the set of relays does not give good results with the realistic model. We also provide three new heuristics in replacement and their performances which demonstrate that they better suit to the considered model. The first one maximizes the probability of correct reception between the node and the considered relays multiplied by their coverage in the 2-hop neighborhood. The second one replaces the coverage by the average of the probabilities of correct reception between the considered neighbor and the 2-hop neighbors it covers. Finally, the third heuristic keeps the same concept as the second one, but tries to maximize the coverage level of the 2-hop neighborhood: 2-hop neighbors are still being considered as uncovered while their coverage level is not higher than a given coverage threshold, many neighbors may thus be selected to cover the same 2-hop neighbors. | Being the broadcast protocol used in OLSR, MPR has been the subject of miscellaneous studies since its publication. For example, in @cite_3 , the authors analyze how relays are selected and conclude that almost $75 | {
"cite_N": [
"@cite_3"
],
"mid": [
"2161605375"
],
"abstract": [
"In order to achieve cooperative driving in vehicular ad hoc networks (VANET), broadcast transmission is usually used for disseminating safety-related information among vehicles. Nevertheless, broadcast over multihop wireless networks poses many challenges due to link unreliability, hidden terminal, message redundancy, and broadcast storm, etc., which greatly degrade the network performance. In this paper, we propose a cross layer broadcast protocol (CLBP) for multihop emergency message dissemination in inter-vehicle communication systems. We first design a novel composite relaying metric for relaying node selection, by jointly considering the geographical locations, physical layer channel conditions, moving velocities of vehicles. Based on the designed metric, we then propose a distributed relay selection scheme to guarantee that a unique relay is selected to reliably forward the emergency message in the desired propagation direction.We further apply IEEE802.11e EDCA to guarantee QoS performance of safety related services. Finally, simulation results are given to demonstrate that CLBP can not only minimize the broadcast message redundancy, but also quickly and reliably disseminate emergency messages in a VANET."
]
} |
cs0701190 | 1614193546 | The distribution of files using decentralized, peer-to-peer (P2P) systems has significant advantages over centralized approaches. It is however more difficult to settle on the best approach for file sharing. Most file sharing systems are based on query string searches, leading to a relatively simple but inefficient broadcast or to an efficient but relatively complicated index in a structured environment. In this paper we use a browsable peer-to-peer file index consisting of files which serve as directory nodes, interconnecting to form a directory network. We implemented the system based on BitTorrent and Kademlia. The directory network inherits all of the advantages of decentralization and provides browsable, efficient searching. To avoid conflict between users in the P2P system while also imposing no additional restrictions, we allow multiple versions of each directory node to simultaneously exist -- using popularity as the basis for default browsing behavior. Users can freely add files and directory nodes to the network. We show, using a simulation of user behavior and file quality, that the popularity based system consistently leads users to a high quality directory network; above the average quality of user updates. | @cite_8 is a P2P file sharing system that provides a global namespace and automatic availability management. It allows any user to modify any portion of the namespace by modifying, adding, and deleting files and directories. Wayfinder's global namespace is constructed by the system automatically merging the local namespaces of individual nodes. @cite_24 is a serverless distributed file system. Farsite logically functions as a centralized file server but its physical realization is dispersed among a network of untrusted workstations. @cite_4 is a global persistent data store designed to scale to billions of users. It provides a consistent, highly-available, and durable storage utility atop an infrastructure comprised of untrusted servers. @cite_27 is a global distributed Internet file system that also focuses on scalability. @cite_6 is a distributed file system that focuses on allowing multiple concurrent writers to files. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_6",
"@cite_24",
"@cite_27"
],
"mid": [
"1488732870",
"2157240622",
"1558940048",
"1751683294",
"1979734411"
],
"abstract": [
"Social networks offering unprecedented content sharing are rapidly developing over the Internet. Unfortunately, it is often difficult to both locate and manage content in these networks, particularly when they are implemented on current peer-to-peer technologies. In this paper, we describe Wayfinder, a peer-to-peer file system that targets the needs of medium-sized content sharing communities. Wayfinder seeks to advance the state-of-the-art by providing three synergistic abstractions: a global namespace that is uniformly accessible across connected and disconnected operation, content-based queries that can be persistently embedded into the global namespace, and automatic availability management. Interestingly, Wayfinder achieves much of its functionality through the use of a peer-to-peer indexed data storage system called PlanetP: essentially, Wayfinder constructs the global namespace, locates specific files, and performs content searches by posing appropriate queries to PlanetP. We describe this query-based design and present preliminary performance measurements of a prototype implementation.",
"The Farsite distributed file system provides availability by replicating each file onto multiple desktop computers. Since this replication consumes significant storage space, it is important to reclaim used space where possible. Measurement of over 500 desktop file systems shows that nearly half of all consumed space is occupied by duplicate files. We present a mechanism to reclaim space from this incidental duplication to make it available for controlled file replication. Our mechanism includes: (1) convergent encryption, which enables duplicate files to be coalesced into the space of a single file, even if the files are encrypted with different users' keys; and (2) SALAD, a Self-Arranging Lossy Associative Database for aggregating file content and location information in a decentralized, scalable, fault-tolerant manner. Large-scale simulation experiments show that the duplicate-file coalescing system is scalable, highly effective, and fault-tolerant.",
"In this paper, we address the problem of designing a scalable, accurate query processor for peer-to-peer filesharing and similar distributed keyword search systems. Using a globally-distributed monitoring infrastructure, we perform an extensive study of the Gnutella filesharing network, characterizing its topology, data and query workloads. We observe that Gnutella's query processing approach performs well for popular content, but quite poorly for rare items with few replicas. We then consider an alternate approach based on Distributed Hash Tables (DHTs). We describe our implementation of PIERSearch, a DHT-based system, and propose a hybrid system where Gnutella is used to locate popular items, and PIERSearch for handling rare items. We develop an analytical model of the two approaches, and use it in concert with our Gnutella traces to study the trade-off between query recall and system overhead of the hybrid system. We evaluate a variety of localized schemes for identifying items that are rare and worth handling via the DHT. Lastly, we show in a live deployment on fifty nodes on two continents that it nicely complements Gnutella in its ability to handle rare items.",
"Existing file systems, even the most scalable systems that store hundreds of petabytes (or more) of data across thousands of machines, store file metadata on a single server or via a shared-disk architecture in order to ensure consistency and validity of the metadata. This paper describes a completely different approach for the design of replicated, scalable file systems, which leverages a high-throughput distributed database system for metadata management. This results in improved scalability of the metadata layer of the file system, as file metadata can be partitioned (and replicated) across a (shared-nothing) cluster of independent servers, and operations on file metadata transformed into distributed transactions. In addition, our file system is able to support standard file system semantics--including fully linearizable random writes by concurrent users to arbitrary byte offsets within the same file--across wide geographic areas. Such high performance, fully consistent, geographically distributed files systems do not exist today. We demonstrate that our approach to file system design can scale to billions of files and handle hundreds of thousands of updates and millions of reads per second-- while maintaining consistently low read latencies. Furthermore, such a deployment can survive entire datacenter outages with only small performance hiccups and no loss of availability.",
"Metadata for the World Wide Web is important, but metadata for Peer-to-Peer (P2P) networks is absolutely crucial. In this paper we discuss the open source project Edutella which builds upon metadata standards defined for the WWW and aims to provide an RDF-based metadata infrastructure for P2P applications, building on the recently announced JXTA Framework. We describe the goals and main services this infrastructure will provide and the architecture to connect Edutella Peers based on exchange of RDF metadata. As the query service is one of the core services of Edutella, upon which other services are built, we specify in detail the Edutella Common Data Model (ECDM) as basis for the Edutella query exchange language (RDF-QEL-i) and format implementing distributed queries over the Edutella network. Finally, we shortly discuss registration and mediation services, and introduce the prototype and application scenario for our current Edutella aware peers."
]
} |
cs0612043 | 2950828832 | We analyze the ability of peer to peer networks to deliver a complete file among the peers. Early on we motivate a broad generalization of network behavior organizing it into one of two successive phases. According to this view the network has two main states: first centralized - few sources (roots) hold the complete file, and next distributed - peers hold some parts (chunks) of the file such that the entire network has the whole file, but no individual has it. In the distributed state we study two scenarios, first, when the peers are "patient", i.e., do not leave the system until they obtain the complete file; second, peers are "impatient" and almost always leave the network before obtaining the complete file. | A lot of work has been devoted to the area of file sharing in P2P networks. Many experimental papers provide practical strategies and preliminary results concerning the behavior of these kinds of networks. In @cite_10 , for instance, the authors essentially describe properties like liveness and downloading rate by means of extended experiments and simulations under several assumptions. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2005936383"
],
"abstract": [
"Peer-to-Peer (P2P) file sharing is one of key technologies for achieving attractive P2P multimedia social networking. In P2P file-sharing systems, file availability is improved by cooperative users who cache and share files. Note that file caching carries costs such as storage consumption and processing load. In addition, users have different degrees of cooperativity in file caching and they are in different surrounding environments arising from the topological structure of P2P networks. With evolutionary game theory, this paper evaluates the performance of P2P file sharing systems in such heterogeneous environments. Using micro-macro dynamics, we analyze the impact of the heterogeneity of user selfishness on the file availability and system stability. Further, through simulation experiments with agent-based dynamics, we reveal how other aspects, for example, synchronization among nodes and topological structure, affect the system performance. Both analytical and simulation results show that the environmental heterogeneity contributes to the file availability and system stability."
]
} |
cs0612072 | 2949346966 | Internet search companies sell advertisement slots based on users' search queries via an auction. Advertisers have to determine how to place bids on the keywords of their interest in order to maximize their return for a given budget: this is the budget optimization problem. The solution depends on the distribution of future queries. In this paper, we formulate stochastic versions of the budget optimization problem based on natural probabilistic models of distribution over future queries, and address two questions that arise. [Evaluation] Given a solution, can we evaluate the expected value of the objective function? [Optimization] Can we find a solution that maximizes the objective function in expectation? Our main results are approximation and complexity results for these two problems in our three stochastic models. In particular, our algorithmic results show that simple prefix strategies that bid on all cheap keywords up to some level are either optimal or good approximations for many cases; we show other cases to be NP-hard. | Together, our results represent a new theoretical study of stochastic versions of budget optimization problems in search-related advertising. The budget optimization problem was studied recently @cite_4 in the fixed model, when @math 's are known. On one hand, our study is more general, with the emphasis on the uncertainty in modeling @math 's and the stochastic models we have formulated. We do not know of prior work in this area that formulates and uses our stochastic models. On the other hand, our study is less general as it does not consider the interaction between keywords that occurs when a user's search query matches two or more keywords, which is studied in @cite_4 . | {
"cite_N": [
"@cite_4"
],
"mid": [
"2742049221"
],
"abstract": [
"The multi-armed bandit (MAB) problem features the classical tradeoff between exploration and exploitation. The input specifies several stochastic arms which evolve with each pull, and the goal is to maximize the expected reward after a fixed budget of pulls. The celebrated work of , surveyed in [8], presumes a condition on the arms called the martingale assumption. In [9], A. obtained an LP-based 1 48-approximation for the problem with the martingale assumption removed. We improve the algorithm to a 4 27-approximation, with simpler analysis. Our algorithm also generalizes to the case of MAB superprocesses with (stochastic) multi-period actions. This generalization captures the framework introduced by Guha and Munagala in [11], and yields new results for their budgeted learning problems. Also, we obtain a (1 2 -- e)-approximation for the variant of MAB where preemption (playing an arm, switching to another arm, then coming back to the first arm) is not allowed. This contains the stochastic knapsack problem of Dean, Goemans, and Vondrak in [6] with correlated rewards, where we are given a knapsack of fixed size, and a set of jobs each with a joint distribution for its size and reward. The actual size and reward of a job can only be discovered in real-time as it is being scheduled, and the objective is to maximize expected reward before the knapsack size is exhausted. Our (1 2 -- e)-approximation improves the 1 16 and 1 8 approximations in [9] for correlated stochastic knapsack with cancellation and no cancellation, respectively, providing to our knowledge the first tight algorithm for these problems that matches the integrality gap of 2. We sample probabilities from an exponential-sized dynamic programming solution, whose existence is guaranteed by an LP projection argument. We hope this technique can also be applied to other dynamic programming problems which can be projected down onto a small LP."
]
} |
cs0612072 | 2949346966 | Internet search companies sell advertisement slots based on users' search queries via an auction. Advertisers have to determine how to place bids on the keywords of their interest in order to maximize their return for a given budget: this is the budget optimization problem. The solution depends on the distribution of future queries. In this paper, we formulate stochastic versions of the budget optimization problem based on natural probabilistic models of distribution over future queries, and address two questions that arise. [Evaluation] Given a solution, can we evaluate the expected value of the objective function? [Optimization] Can we find a solution that maximizes the objective function in expectation? Our main results are approximation and complexity results for these two problems in our three stochastic models. In particular, our algorithmic results show that simple prefix strategies that bid on all cheap keywords up to some level are either optimal or good approximations for many cases; we show other cases to be NP-hard. | Recently, @cite_9 considered an online knapsack problem with the assumption of small element sizes, and @cite_2 considered an online knapsack problem with a random order of element arrival, both motivated by bidding in advertising auctions. The difference with our work is that these authors consider the problem in the online algorithms framework, and analyze the competitive ratios of the obtained algorithms. In contrast, our algorithms make decisions offline, and we analyze the obtained approximation ratios for the expected value of the objective. Also, our algorithms base their decisions on the probability distributions of the clicks, whereas the authors of @cite_2 and @cite_9 do not assume any advance knowledge of these distributions. The two approaches are in some sense complementary: online algorithms have the disadvantage that in practice it may not be possible to make new decisions about bidding every time that a query arrives, and stochastic optimization has the disadvantage of requiring the knowledge of the probability distributions. | {
"cite_N": [
"@cite_9",
"@cite_2"
],
"mid": [
"2164792208",
"2742049221"
],
"abstract": [
"We consider situations in which a decision-maker with a fixed budget faces a sequence of options, each with a cost and a value, and must select a subset of them online so as to maximize the total value. Such situations arise in many contexts, e.g., hiring workers, scheduling jobs, and bidding in sponsored search auctions. This problem, often called the online knapsack problem, is known to be inapproximable. Therefore, we make the enabling assumption that elements arrive in a randomorder. Hence our problem can be thought of as a weighted version of the classical secretary problem, which we call the knapsack secretary problem. Using the random-order assumption, we design a constant-competitive algorithm for arbitrary weights and values, as well as a e-competitive algorithm for the special case when all weights are equal (i.e., the multiple-choice secretary problem). In contrast to previous work on online knapsack problems, we do not assume any knowledge regarding the distribution of weights and values beyond the fact that the order is random.",
"The multi-armed bandit (MAB) problem features the classical tradeoff between exploration and exploitation. The input specifies several stochastic arms which evolve with each pull, and the goal is to maximize the expected reward after a fixed budget of pulls. The celebrated work of , surveyed in [8], presumes a condition on the arms called the martingale assumption. In [9], A. obtained an LP-based 1 48-approximation for the problem with the martingale assumption removed. We improve the algorithm to a 4 27-approximation, with simpler analysis. Our algorithm also generalizes to the case of MAB superprocesses with (stochastic) multi-period actions. This generalization captures the framework introduced by Guha and Munagala in [11], and yields new results for their budgeted learning problems. Also, we obtain a (1 2 -- e)-approximation for the variant of MAB where preemption (playing an arm, switching to another arm, then coming back to the first arm) is not allowed. This contains the stochastic knapsack problem of Dean, Goemans, and Vondrak in [6] with correlated rewards, where we are given a knapsack of fixed size, and a set of jobs each with a joint distribution for its size and reward. The actual size and reward of a job can only be discovered in real-time as it is being scheduled, and the objective is to maximize expected reward before the knapsack size is exhausted. Our (1 2 -- e)-approximation improves the 1 16 and 1 8 approximations in [9] for correlated stochastic knapsack with cancellation and no cancellation, respectively, providing to our knowledge the first tight algorithm for these problems that matches the integrality gap of 2. We sample probabilities from an exponential-sized dynamic programming solution, whose existence is guaranteed by an LP projection argument. We hope this technique can also be applied to other dynamic programming problems which can be projected down onto a small LP."
]
} |
cs0612072 | 2949346966 | Internet search companies sell advertisement slots based on users' search queries via an auction. Advertisers have to determine how to place bids on the keywords of their interest in order to maximize their return for a given budget: this is the budget optimization problem. The solution depends on the distribution of future queries. In this paper, we formulate stochastic versions of the budget optimization problem based on natural probabilistic models of distribution over future queries, and address two questions that arise. [Evaluation] Given a solution, can we evaluate the expected value of the objective function? [Optimization] Can we find a solution that maximizes the objective function in expectation? Our main results are approximation and complexity results for these two problems in our three stochastic models. In particular, our algorithmic results show that simple prefix strategies that bid on all cheap keywords up to some level are either optimal or good approximations for many cases; we show other cases to be NP-hard. | There has been a lot of other work on search-related auctions in the presence of budgets, but it has primarily focused on the game-theoretic aspects @cite_7 @cite_11 , strategy-proof mechanisms @cite_16 @cite_0 , and revenue maximization @cite_3 @cite_13 . | {
"cite_N": [
"@cite_7",
"@cite_3",
"@cite_0",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"1484450013",
"2952244728",
"2964216027",
"2021734699",
"2119914577",
"1680626568"
],
"abstract": [
"Motivated by sponsored search auctions with hard budget constraints given by the advertisers, we study multi-unit auctions of a single item. An important example is a sponsored result slot for a keyword, with many units representing its inventory in a month, say. In this single-item multi-unit auction, each bidder has a private value for each unit, and a private budget which is the total amount of money she can spend in the auction. A recent impossibility result [, FOCS’08] precludes the existence of a truthful mechanism with Paretooptimal allocations in this important setting. We propose Sort-Cut, a mechanism which does the next best thing from the auctioneer’s point of view, that we term semi-truthful. While we are unable to give a complete characterization of equilibria for our mechanism, we prove that some equilibrium of the proposed mechanism optimizes the revenue over all Pareto-optimal mechanisms, and that this equilibrium is the unique one resulting from a natural rational bidding strategy (where every losing bidder bids at least her true value). Perhaps even more significantly, we show that the revenue of every equilibrium of our mechanism differs by at most the budget of one bidder from the optimum revenue (under some mild assumptions).",
"We consider prior-free auctions for revenue and welfare maximization when agents have a common budget. The abstract environments we consider are ones where there is a downward-closed and symmetric feasibility constraint on the probabilities of service of the agents. These environments include position auctions where slots with decreasing click-through rates are auctioned to advertisers. We generalize and characterize the envy-free benchmark from Hartline and Yan (2011) to settings with budgets and characterize the optimal envy-free outcomes for both welfare and revenue. We give prior-free mechanisms that approximate these benchmarks. A building block in our mechanism is a clinching auction for position auction environments. This auction is a generalization of the multi-unit clinching auction of (2008) and a special case of the polyhedral clinching auction of (2012). For welfare maximization, we show that this clinching auction is a good approximation to the envy-free optimal welfare for position auction environments. For profit maximization, we generalize the random sampling profit extraction auction from (2002) for digital goods to give a 10.0-approximation to the envy-free optimal revenue in symmetric, downward-closed environments. The profit maximization question is of interest even without budgets and our mechanism is a 7.5-approximation which improving on the 30.4 bound of Ha and Hartline (2012).",
"In this paper, we consider the problem of designing incentive compatible auctions for multiple (homogeneous) units of a good, when bidders have private valuations and private budget constraints. When only the valuations are private and the budgets are public, [8] show that the adaptive clinching auction is the unique incentive-compatible auction achieving Pareto-optimality. They further show that this auction is not truthful with private budgets, so that there is no deterministic Pareto-optimal auction with private budgets. Our main contribution is to show the following Budget Monotonicity property of this auction: When there is only one infinitely divisible good, a bidder cannot improve her utility by reporting a budget smaller than the truth. This implies that the adaptive clinching auction is incentive compatible when over-reporting the budget is not possible (for instance, when funds must be shown upfront). We can also make reporting larger budgets suboptimal with a small randomized modification to the auction. In either case, this makes the modified auction Pareto-optimal with private budgets. We also show that the Budget Monotonicity property does not hold for auctioning indivisible units of the good, showing a sharp contrast between the divisible and indivisible cases. The Budget Monotonicity property also implies other improved results in this context. For revenue maximization, the same auction improves the best-known competitive ratio due to Abrams [1] by a factor of 4, and asymptotically approaches the performance of the optimal single-price auction. Finally, we consider the problem of revenue maximization (or social welfare) in a Bayesian setting. We allow the bidders have public size constraints (on the amount of good they are willing to buy) in addition to private budget constraints. We show a simple poly-time computable 5.83-approximation to the optimal Bayesian incentive compatible mechanism, that is implementable in dominant strategies. Our technique again crucially needs the ability to prevent bidders from over-reporting budgets via randomization. We show the approximation result via designing a rounding scheme for an LP relaxation of the problem, which may be of independent interest.",
"We consider the problem of designing a revenue-maximizing auction for a single item, when the values of the bidders are drawn from a correlated distribution. We observe that there exists an algorithm that finds the optimal randomized mechanism that runs in time polynomial in the size of the support. We leverage this result to show that in the oracle model introduced by Ronen and Saberi [FOCS'02], there exists a polynomial time truthful in expectation mechanism that provides a (1.5+e)-approximation to the revenue achievable by an optimal truthful-in-expectation mechanism, and a polynomial time deterministic truthful mechanism that guarantees 5 3 approximation to the revenue achievable by an optimal deterministic truthful mechanism. We show that the 5 3-approximation mechanism provides the same approximation ratio also with respect to the optimal truthful-in-expectation mechanism. This shows that the performance gap between truthful-in-expectation and deterministic mechanisms is relatively small. En route, we solve an open question of Mehta and Vazirani [EC'04]. Finally, we extend some of our results to the multi-item case, and show how to compute the optimal truthful-in-expectation mechanisms for bidders with more complex valuations.",
"Internet search companies sell advertisement slots based on users' search queries via an auction. While there has been previous work onthe auction process and its game-theoretic aspects, most of it focuses on the Internet company. In this work, we focus on the advertisers, who must solve a complex optimization problem to decide how to place bids on keywords to maximize their return (the number of user clicks on their ads) for a given budget. We model the entire process and study this budget optimization problem. While most variants are NP-hard, we show, perhaps surprisingly, that simply randomizing between two uniform strategies that bid equally on all the keywordsworks well. More precisely, this strategy gets at least a 1-1 e fraction of the maximum clicks possible. As our preliminary experiments show, such uniform strategies are likely to be practical. We also present inapproximability results, and optimal algorithms for variants of the budget optimization problem.",
"Sponsored search is an important monetization channel for search engines, in which an auction mechanism is used to select the ads shown to users and determine the prices charged from advertisers. There have been several pieces of work in the literature that investigate how to design an auction mechanism in order to optimize the revenue of the search engine. However, due to some unrealistic assumptions used, the practical values of these studies are not very clear. In this paper, we propose a novel game-theoretic machine learning approach, which naturally combines machine learning and game theory, and learns the auction mechanism using a bilevel optimization framework. In particular, we first learn a Markov model from historical data to describe how advertisers change their bids in response to an auction mechanism, and then for any given auction mechanism, we use the learnt model to predict its corresponding future bid sequences. Next we learn the auction mechanism through empirical revenue maximization on the predicted bid sequences. We show that the empirical revenue will converge when the prediction period approaches infinity, and a Genetic Programming algorithm can effectively optimize this empirical revenue. Our experiments indicate that the proposed approach is able to produce a much more effective auction mechanism than several baselines."
]
} |
cs0612086 | 1645234836 | We study large-scale distributed cooperative systems that use optimistic replication. We represent a system as a graph of actions (operations) connected by edges that reify semantic constraints between actions. Constraint types include conflict, execution order, dependence, and atomicity. The local state is some schedule that conforms to the constraints; because of conflicts, client state is only tentative. For consistency, site schedules should converge; we designed a decentralised, asynchronous commitment protocol. Each client makes a proposal, reflecting its tentative and or preferred schedules. Our protocol distributes the proposals, which it decomposes into semantically-meaningful units called candidates, and runs an election between comparable candidates. A candidate wins when it receives a majority or a plurality. The protocol is fully asynchronous: each site executes its tentative schedule independently, and determines locally when a candidate has won an election. The committed schedule is as close as possible to the preferences expressed by clients. | The only semantics supported by Deno or VVWV is to enforce Lamport's happens-before relation @cite_11 ; all actions are assumed to be mutually non-commuting. Happens-before captures potential causality; however, an event may happen-before another even if they are not truly dependent. This paper further generalizes VVWV by considering semantic constraints. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2605805765"
],
"abstract": [
"The role of semantics in zero-shot learning is considered. The effectiveness of previous approaches is analyzed according to the form of supervision provided. While some learn semantics independently, others only supervise the semantic subspace explained by training classes. Thus, the former is able to constrain the whole space but lacks the ability to model semantic correlations. The latter addresses this issue but leaves part of the semantic space unsupervised. This complementarity is exploited in a new convolutional neural network (CNN) framework, which proposes the use of semantics as constraints for recognition. Although a CNN trained for classification has no transfer ability, this can be encouraged by learning an hidden semantic layer together with a semantic code for classification. Two forms of semantic constraints are then introduced. The first is a loss-based regularizer that introduces a generalization constraint on each semantic predictor. The second is a codeword regularizer that favors semantic-to-class mappings consistent with prior semantic knowledge while allowing these to be learned from data. Significant improvements over the state-of-the-art are achieved on several datasets."
]
} |
cs0612086 | 1645234836 | We study large-scale distributed cooperative systems that use optimistic replication. We represent a system as a graph of actions (operations) connected by edges that reify semantic constraints between actions. Constraint types include conflict, execution order, dependence, and atomicity. The local state is some schedule that conforms to the constraints; because of conflicts, client state is only tentative. For consistency, site schedules should converge; we designed a decentralised, asynchronous commitment protocol. Each client makes a proposal, reflecting its tentative and or preferred schedules. Our protocol distributes the proposals, which it decomposes into semantically-meaningful units called candidates, and runs an election between comparable candidates. A candidate wins when it receives a majority or a plurality. The protocol is fully asynchronous: each site executes its tentative schedule independently, and determines locally when a candidate has won an election. The committed schedule is as close as possible to the preferences expressed by clients. | Bayou @cite_7 supports arbitrary application semantics. User-supplied code controls whether an action is committed or aborted. However the system imposes an arbitrary total execution order. Bayou centralises decision at a single primary replica. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2117260615"
],
"abstract": [
"Bayou is a replicated, weakly consistent storage system designed for a mobile computing environment that includes portable machines with less than ideal network connectivity. To maximize availability, users can read and write any accessible replica. Bayou’s design has focused on supporting application-specific mechanisms to detect and resolve the update conflicts that naturally arise in such a system, ensuring that replicas move towards eventual consistency, and defining a protocol by which the resolution of update conflicts stabilizes. It includes novel methods for conflict detection, called dependency checks, and per -write conflict resolution based on client-provid ed mer ge procedures. To guarantee eventual consistency, Bayou servers must be able to rollback the effects of previously executed writes and redo them according to a global serialization order . Furthermore, Bayou permits clients to observe the results of all writes received by a server , including tentative writes whose conflicts have not been ultimately resolved. This paper presents the motivation for and design of these mechanisms and describes the experiences gained with an initial implementation of the system."
]
} |
cs0612086 | 1645234836 | We study large-scale distributed cooperative systems that use optimistic replication. We represent a system as a graph of actions (operations) connected by edges that reify semantic constraints between actions. Constraint types include conflict, execution order, dependence, and atomicity. The local state is some schedule that conforms to the constraints; because of conflicts, client state is only tentative. For consistency, site schedules should converge; we designed a decentralised, asynchronous commitment protocol. Each client makes a proposal, reflecting its tentative and or preferred schedules. Our protocol distributes the proposals, which it decomposes into semantically-meaningful units called candidates, and runs an election between comparable candidates. A candidate wins when it receives a majority or a plurality. The protocol is fully asynchronous: each site executes its tentative schedule independently, and determines locally when a candidate has won an election. The committed schedule is as close as possible to the preferences expressed by clients. | IceCube @cite_14 introduced the idea of reifying semantics with constraints. The IceCube algorithm computes optimal proposals, minimizing the number of dead actions. Like Bayou, commitment in IceCube is centralised at a primary. Compared to this article, IceCube supports a richer constraint vocabulary, which is useful for applications, but harder to reason about formally. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2154193287"
],
"abstract": [
"We describe a novel approach to log-based reconciliation called IceCube. It is general and is parameterised by application and object semantics. IceCube considers more flexible orderings and is designed to ease the burden of reconciliation on the application programmers. IceCube captures the static and dynamic reconciliation constraints between all pairs of actions, proposes schedules that satisfy the static constraints, and validates them against the dynamic constraints. Preliminary experience indicates that strong static constraints successfully contain the potential combinatorial explosion of the simulation stage. With weaker static constraints, the system still finds good solutions in a reasonable time."
]
} |
cs0612086 | 1645234836 | We study large-scale distributed cooperative systems that use optimistic replication. We represent a system as a graph of actions (operations) connected by edges that reify semantic constraints between actions. Constraint types include conflict, execution order, dependence, and atomicity. The local state is some schedule that conforms to the constraints; because of conflicts, client state is only tentative. For consistency, site schedules should converge; we designed a decentralised, asynchronous commitment protocol. Each client makes a proposal, reflecting its tentative and or preferred schedules. Our protocol distributes the proposals, which it decomposes into semantically-meaningful units called candidates, and runs an election between comparable candidates. A candidate wins when it receives a majority or a plurality. The protocol is fully asynchronous: each site executes its tentative schedule independently, and determines locally when a candidate has won an election. The committed schedule is as close as possible to the preferences expressed by clients. | Generalized Paxos @cite_10 and Generic Broadcast @cite_4 take commutativity relations into account and compute a partial order. They do not consider any other semantic relations. Both Generalized Paxos @cite_10 and our algorithm make progress when a majority is not reached, although through different means. Generalized Paxos starts a new election instance, whereas our algorithm waits for a plurality decision. | {
"cite_N": [
"@cite_10",
"@cite_4"
],
"mid": [
"2067740651",
"196507789"
],
"abstract": [
"This paper describes the design and implementation of Egalitarian Paxos (EPaxos), a new distributed consensus algorithm based on Paxos. EPaxos achieves three goals: (1) optimal commit latency in the wide-area when tolerating one and two failures, under realistic conditions; (2) uniform load balancing across all replicas (thus achieving high throughput); and (3) graceful performance degradation when replicas are slow or crash. Egalitarian Paxos is to our knowledge the first protocol to achieve the previously stated goals efficiently---that is, requiring only a simple majority of replicas to be non-faulty, using a number of messages linear in the number of replicas to choose a command, and committing commands after just one communication round (one round trip) in the common case or after at most two rounds in any case. We prove Egalitarian Paxos's properties theoretically and demonstrate its advantages empirically through an implementation running on Amazon EC2.",
"In computational social choice, one important problem is to take the votes of a subelectorate (subset of the voters), and summarize them using a small number of bits. This needs to be done in such a way that, if all that we know is the summary, as well as the votes of voters outside the subelectorate, we can conclude which of the m alternatives wins. This corresponds to the notion of compilation complexity, the minimum number of bits required to summarize the votes for a particular rule, which was introduced by [IJCAI-09]. We study three different types of compilation complexity. The first, studied by , depends on the size of the subelectorate but not on the size of the complement (the voters outside the subelectorate). The second depends on the size of the complement but not on the size of the subelectorate. The third depends on both. We first investigate the relations among the three types of compilation complexity. Then, we give upper and lower bounds on all three types of compilation complexity for the most prominent voting rules. We show that for l-approval (when l ≤ m 2), Borda, and Bucklin, the bounds for all three types are asymptotically tight, up to a multiplicative constant; for l-approval (when l > m 2), plurality with runoff, all Condorcet consistent rules that are based on unweighted majority graphs (including Copeland and voting trees), and all Condorcet consistent rules that are based on the order of pairwise elections (including ranked pairs and maximin), the bounds for all three types are asymptotically tight up to a multiplicative constant when the sizes of the subelectorate and its complement are both larger than m1+∊ for some ∊ > 0."
]
} |
cs0611102 | 1647784286 | We present a method to secure the complete path between a server and the local human user at a network node. This is useful for scenarios like internet banking, electronic signatures, or online voting. Protection of input authenticity and output integrity and authenticity is accomplished by a combination of traditional and novel technologies, e.g., SSL, ActiveX, and DirectX. Our approach does not require administrative privileges to deploy and is hence suitable for consumer applications. Results are based on the implementation of a proof-of-concept application for the Windows platform. | A proposal for a user interface that prevents Trojan horses from tampering with application output is given in @cite_17 . Kernelizing the graphics server and delegating window manager tasks to the application level is a prototypical solution in @cite_27 . However, it is not compatible with the Windows platform used on the vast majority of existing client computers. | {
"cite_N": [
"@cite_27",
"@cite_17"
],
"mid": [
"2107252100",
"2168075985"
],
"abstract": [
"Malware such as Trojan horses and spyware remain to be persistent security threats that exploit the overly complex graphical user interfaces of today's commodity operating systems. In this paper, we present the design and implementation of Nitpicker - an extremely minimized secure graphical user interface that addresses these problems while retaining compatibility to legacy operating systems. We describe our approach of kernelizing the window server and present the deployed security mechanisms and protocols. Our implementation comprises only 1,500 lines of code while supporting commodity software such as X11 applications alongside protected graphical security applications. We discuss key techniques such as client-side window handling, a new floating-labels mechanism, drag-and-drop, and denial-of-service-preventing resource management. Furthermore, we present an application scenario to evaluate the feasibility, performance, and usability of our approach",
"To programmatically discover and interact with services in ubiquitous computing environments, an application needs to solve two problems: (1) is it semantically meaningful to interact with a service? If the task is \"printing a file\", a printer service would be appropriate, but a screen rendering service or CD player service would not. (2) If yes, what are the mechanics of interacting with the service - remote invocation mechanics, names of methods, numbers and types of arguments, etc.? Existing service frameworks such as Jini and UPnP conflate these problems - two services are \"semantically compatible\" if and only if their interface signatures match. As a result, interoperability is severely restricted unless there is a single, globally agreed-upon, unique interface for each service type. By separating the two subproblems and delegating different parts of the problem to the user and the system, we show how applications can interoperate with services even when globally unique interfaces do not exist for certain services."
]
} |
cs0611102 | 1647784286 | We present a method to secure the complete path between a server and the local human user at a network node. This is useful for scenarios like internet banking, electronic signatures, or online voting. Protection of input authenticity and output integrity and authenticity is accomplished by a combination of traditional and novel technologies, e.g., SSL, ActiveX, and DirectX. Our approach does not require administrative privileges to deploy and is hence suitable for consumer applications. Results are based on the implementation of a proof-of-concept application for the Windows platform. | In the Microsoft Windows operating system, applications typically receive information about user actions via messages. Since these can be sent by malicious programs as well, they are a convenient attack vector. It is a vulnerability by design -- Windows treats all processes that run on the same desktop equally. If one needs an undisturbed interface, a separate desktop attached to the interactive window station should be assigned. That approach is pursued by @cite_5 . However, managing separate desktops can be cumbersome for software developers. So most of today's software that interacts with a local user runs in a single desktop shared by benign and malign programs. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2107252100"
],
"abstract": [
"Malware such as Trojan horses and spyware remain to be persistent security threats that exploit the overly complex graphical user interfaces of today's commodity operating systems. In this paper, we present the design and implementation of Nitpicker - an extremely minimized secure graphical user interface that addresses these problems while retaining compatibility to legacy operating systems. We describe our approach of kernelizing the window server and present the deployed security mechanisms and protocols. Our implementation comprises only 1,500 lines of code while supporting commodity software such as X11 applications alongside protected graphical security applications. We discuss key techniques such as client-side window handling, a new floating-labels mechanism, drag-and-drop, and denial-of-service-preventing resource management. Furthermore, we present an application scenario to evaluate the feasibility, performance, and usability of our approach"
]
} |
cs0611102 | 1647784286 | We present a method to secure the complete path between a server and the local human user at a network node. This is useful for scenarios like internet banking, electronic signatures, or online voting. Protection of input authenticity and output integrity and authenticity is accomplished by a combination of traditional and novel technologies, e.g., SSL, ActiveX, and DirectX. Our approach does not require administrative privileges to deploy and is hence suitable for consumer applications. Results are based on the implementation of a proof-of-concept application for the Windows platform. | This problem is encountered by local security applications such as electronic signature software @cite_10 , virus scanners, personal firewalls, etc. In @cite_28 , a dilemma in notifying users about security events is pointed out: users are notified about the presence of a possibly malicious program, yet that program could immediately hide the very notification. Some improvements to dialog-based security are shown in @cite_8 . Application output should be defended against hiding. Actions should be delayed so that users can intervene when a program is controlled by simulated input or scripting. DirectX can be used to achieve undisturbed output instead of the co-operative Windows GDI. @cite_20 , @cite_21 , @cite_7 Modifying the web browser to convey meta-information to the user about which window can be trusted is advocated by @cite_6 . | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_21",
"@cite_6",
"@cite_10",
"@cite_20"
],
"mid": [
"2095519872",
"1997775274",
"2162003954",
"2126329998",
"2101587116",
"2107252100",
"2904010205"
],
"abstract": [
"The ability to introspect into the behavior of software at runtime is crucial for many security-related tasks, such as virtual machine-based intrusion detection and low-artifact malware analysis. Although some progress has been made in this task by automatically creating programs that can passively retrieve kernel-level information, two key challenges remain. First, it is currently difficult to extract useful information from user-level applications, such as web browsers. Second, discovering points within the OS and applications to hook for active monitoring is still an entirely manual process. In this paper we propose a set of techniques to mine the memory accesses made by an operating system and its applications to locate useful places to deploy active monitoring, which we call tap points. We demonstrate the efficacy of our techniques by finding tap points for useful introspection tasks such as finding SSL keys and monitoring web browser activity on five different operating systems (Windows 7, Linux, FreeBSD, Minix and Haiku) and two processor architectures (ARM and x86).",
"One aspect of security in mobile code is privacy: private (or secret) data should not be leaked to unauthorised agents. Most of the work on secure information flow has until recently only been concerned with detecting direct and indirect flows. Secret information can however be leaked to the attacker also through covert channels. It is very reasonable to assume that the attacker, even as an external observer, can monitor the timing (including termination) behaviour of the program. Thus to claim a program secure, the security analysis must take also these into account. In this work we present a surprisingly simple solution to the problem of detecting timing leakages to external observers. Our system consists of a type system in which well-typed programs do not leak secret information directly, indirectly or through timing, and a transformation for removing timing leakages. For any program that is well typed according to Volpano and Smith [VS97a], our transformation generates a program that is also free of timing leaks.",
"Eavesdropping on electronic communication is usually prevented by using cryptography-based mechanisms. However, these mechanisms do not prevent one from obtaining private information through side channels, such as the electromagnetic emissions of monitors or the sound produced by keyboards. While extracting the same information by watching somebody typing on a keyboard might seem to be an easy task, it becomes extremely challenging if it has to be automated. However, an automated tool is needed in the case of long-lasting surveillance procedures or long user activity, as a human being is able to reconstruct only a few characters per minute. This paper presents a novel approach to automatically recovering the text being typed on a keyboard, based solely on a video of the user typing. As part of the approach, we developed a number of novel techniques for motion tracking, sentence reconstruction, and error correction. The approach has been implemented in a tool, called ClearShot, which has been tested in a number of realistic settings where it was able to reconstruct a substantial part of the typed information.",
"It is an interesting problem how a human can prove its identity to a trustworthy (local or remote) computer with untrustworthy input devices and via an insecure channel controlled by adversaries. Any input devices and auxiliary devices are untrustworthy under the following assumptions: the adversaries can record humans’ operations on the devices, and can access the devices to replay the recorded operations. Strictly, only the common brain intelligence is available for the human. In this paper, such an identication system is called SecHCI as the abbreviation Secure Human-Computer Identication (or Interface). In the real world, SecHCI means the peeping attacks to widely-used xed passwords: an adversary can observe your password via his own eyes or some hidden device (such as min-camera) when your input them on your keyboard or with your mouse. Compared with human-computer identications with the aid of trustworthy hardware devices, only a few contributions have devoted to the design and analysis of SecHCI. The most systematic works are made by N. J. Hopper & M. Blum recently: some formal denitions are given and the feasibility is shown by several SecHCI protocols with acceptable security (but usability is not very good because of their inherent limitations). In this paper, we give comprehensive investigations on SecHCI, from both theoretical and practical viewpoint, and with both system-oriented and usercentered methods. A user study is made to show problems of xed passwords, the signicance of peeping attack and some design principles of human-computer identications. All currently known SecHCI protocols and some related works (such as visual graphical passwords and CAPTCHAs) are surveyed in detail. In addition, we also give our opinions on future research and suggest a new prototype protocol as a possible solution to this problem.",
"Corruption or disclosure of sensitive user documents can be among the most lasting and costly effects of malicious software attacks. Many malicious programs specifically target files that are likely to contain important user data. Researchers have approached this problem by developing techniques for restricting access to resources on an application-by-application basis. These so-called \"sandbox environments,\" though effective, are cumbersome and difficult to use. In this paper, we present a prototype Windows NT 2000 tool that addresses malicious software threats to user data by extending the existing set of file-access permissions. Management and configuration options make the tool unobtrusive and easy to use. We have conducted preliminary experiments to assess the usability of the tool and to evaluate the effects of improvements we have made. Our work has produced an intuitive data-centric method of protecting valuable documents that provides an additional layer of defense beyond existing antivirus solutions.",
"Malware such as Trojan horses and spyware remain to be persistent security threats that exploit the overly complex graphical user interfaces of today's commodity operating systems. In this paper, we present the design and implementation of Nitpicker - an extremely minimized secure graphical user interface that addresses these problems while retaining compatibility to legacy operating systems. We describe our approach of kernelizing the window server and present the deployed security mechanisms and protocols. Our implementation comprises only 1,500 lines of code while supporting commodity software such as X11 applications alongside protected graphical security applications. We discuss key techniques such as client-side window handling, a new floating-labels mechanism, drag-and-drop, and denial-of-service-preventing resource management. Furthermore, we present an application scenario to evaluate the feasibility, performance, and usability of our approach",
"With the proliferation of communication networks and mobile devices, the privacy and security concerns on their information flow are raised. Given a critical system that may leak confidential information, the problem consists of verifying and also enforcing opacity by designing supervisors, to conceal confidential information from unauthorized persons. To find out what the intruder sees, it is required to construct an observer of the system. In this paper, we consider incremental observer generation of modular systems, for verification and enforcement of current state opacity. The synchronization of the subsystems generate a large state space. Moreover, the observer generation with exponential complexity adds even larger state space. To tackle the complexity problem, we prove that observer generation can be done locally before synchronizing the subsystems. The incremental local observer generation along with an abstraction method lead to a significant state space reduction compared to traditional monolithic methods. The existence of shared unobservable events is also considered in the incremental approach. Moreover, we present an illustrative example, where the results of verification and enforcement of current state opacity are shown on a modular multiple floor elevator building with an intruder. Furthermore, we extend the current state opacity, current state anonymity, and language based opacity formulations for verification of modular systems."
]
} |
math-ph0611049 | 1611194078 | Geophysical research has focused on flows, such as ocean currents, as two dimensional. Two dimensional point or blob vortex models have the advantage of having a Hamiltonian, whereas 3D vortex filament or tube systems do not necessarily have one, although they do have action functionals. On the other hand, certain classes of 3D vortex models called nearly parallel vortex filament models do have a Hamiltonian and are more accurate descriptions of geophysical and atmospheric flows than purely 2D models, especially at smaller scales. In these quasi-2D'' models we replace 2D point vortices with vortex filaments that are very straight and nearly parallel but have Brownian variations along their lengths due to local self-induction. When very straight, quasi-2D filaments are expected to have virtually the same planar density distributions as 2D models. An open problem is when quasi-2D model statistics behave differently than those of the related 2D system and how this difference is manifested. In this paper we study the nearly parallel vortex filament model of Klein, Majda, Damodaran in statistical equilibrium. We are able to obtain a free-energy functional for the system in a non-extensive thermodynamic limit that is a function of the mean square vortex position @math and solve for @math . Such an explicit formula has never been obtained for a non-2D model. We compare the results of our formula to a 2-D formula of Lim:2005 and show qualitatively different behavior even when we disallow vortex braiding. We further confirm our results using Path Integral Monte Carlo (Ceperley (1995)) permutations and that the Klein, Majda, Damodaran model's asymptotic assumptions for parameters where these deviations occur. | As mentioned in the previous section, simulations of flux lines in type-II superconductors using the PIMC method have been done, generating the Abrikosov lattice ( @cite_13 , @cite_0 ). However, the superconductor model has periodic boundary conditions in the xy-plane, is a different problem altogether, and is not applicable to trapped fluids. No Monte Carlo studies of the model of @cite_18 have been done to date and dynamical simulations have been confined to a handful of vortices. @cite_5 added a white noise term to the KMD Hamiltonian, Equation , to study vortex reconnection in comparison to direct Navier-Stokes, but he confined his simulations to two vortices. Direct Navier-Stokes simulations of a large number of vortices are beyond our computational capacities. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_18",
"@cite_13"
],
"mid": [
"2010371781",
"1893146140",
"2129152507",
"1992225844"
],
"abstract": [
"We show that the stochastic differential equation (SDE) model for the merger of two identical two-dimensional vortices proposed by Agullo and Verga [“Exact two vortices solution of Navier–Stokes equation,” Phys. Rev. Lett. 78, 2361 (1997)] is a special case of a more general class of SDE models for N interacting vortex filaments. These toy models include vorticity diffusion via a white noise forcing of the inviscid equations, and thus extend inviscid models to include core dynamics and topology change (e.g., merger in two dimensions and vortex reconnection in three dimensions). We demonstrate that although the N=2 two-dimensional model is qualitatively and quantitatively incorrect, it can be dramatically improved by accounting for self-advection. We then extend the two-dimensional SDE model to three dimensions using the semi-inviscid asymptotic approximation of [“Simplified equations for the interactions of nearly parallel vortex filaments,” J. Fluid Mech. 288, 201 (1995)] for nearly parallel...",
"This is the second in a series of papers in which we derive a @math -expansion for the two-dimensional non-local Ginzburg-Landau energy with Coulomb repulsion known as the Ohta-Kawasaki model in connection with diblock copolymer systems. In this model, two phases appear, which interact via a nonlocal Coulomb type energy. Here we focus on the sharp interface version of this energy in the regime where one of the phases has very small volume fraction, thus creating small \"droplets\" of the minority phase in a \"sea\" of the majority phase. In our previous paper, we computed the @math -limit of the leading order energy, which yields the averaged behavior for almost minimizers, namely that the density of droplets should be uniform. Here we go to the next order and derive a next order @math -limit energy, which is exactly the Coulombian renormalized energy obtained by Sandier and Serfaty as a limiting interaction energy for vortices in the magnetic Ginzburg-Landau model. The derivation is based on the abstract scheme of Sandier-Serfaty that serves to obtain lower bounds for 2-scale energies and express them through some probabilities on patterns via the multiparameter ergodic theorem. Without thus appealing to the Euler-Lagrange equation, we establish for all configurations which have \"almost minimal energy\" the asymptotic roundness and radius of the droplets, and the fact that they asymptotically shrink to points whose arrangement minimizes the renormalized energy in some averaged sense. Via a kind of @math -equivalence, the obtained results also yield an expansion of the minimal energy for the original Ohta-Kawasaki energy. This leads to expecting to see triangular lattices of droplets as energy minimizers.",
"Abstract This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop s on gpu hardware using single precision. The simulations use a vortex particle method to solve the Navier–Stokes equations, with a highly parallel fast multipole method ( fmm ) as numerical engine, and match the current record in mesh size for this application, a cube of 4096 3 computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the fft algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the fmm -based vortex method achieving 74 parallel efficiency on 4096 processes (one gpu per mpi process, 3 gpu s per node of the tsubame -2.0 system). The fft -based spectral method is able to achieve just 14 parallel efficiency on the same number of mpi processes (using only cpu cores), due to the all-to-all communication pattern of the fft algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.",
"This paper aims at building a variational approach to the dynamics of discrete topological singularities in two dimensions, based on Γ-convergence. We consider discrete systems, described by scalar functions defined on a square lattice and governed by periodic interaction potentials. Our main motivation comes from XY spin systems, described by the phase parameter, and screw dislocations, described by the displacement function. For these systems, we introduce a discrete notion of vorticity. As the lattice spacing tends to zero we derive the first order Γ-limit of the free energy which is referred to as renormalized energy and describes the interaction of vortices. As a byproduct of this analysis, we show that such systems exhibit increasingly many metastable configurations of singularities. Therefore, we propose a variational approach to the depinning and dynamics of discrete vortices, based on minimizing movements. We show that, letting first the lattice spacing and then the time step of the minimizing movements tend to zero, the vortices move according with the gradient flow of the renormalized energy, as in the continuous Ginzburg–Landau framework."
]
} |
math-ph0611049 | 1611194078 | Geophysical research has focused on flows, such as ocean currents, as two dimensional. Two dimensional point or blob vortex models have the advantage of having a Hamiltonian, whereas 3D vortex filament or tube systems do not necessarily have one, although they do have action functionals. On the other hand, certain classes of 3D vortex models called nearly parallel vortex filament models do have a Hamiltonian and are more accurate descriptions of geophysical and atmospheric flows than purely 2D models, especially at smaller scales. In these quasi-2D'' models we replace 2D point vortices with vortex filaments that are very straight and nearly parallel but have Brownian variations along their lengths due to local self-induction. When very straight, quasi-2D filaments are expected to have virtually the same planar density distributions as 2D models. An open problem is when quasi-2D model statistics behave differently than those of the related 2D system and how this difference is manifested. In this paper we study the nearly parallel vortex filament model of Klein, Majda, Damodaran in statistical equilibrium. We are able to obtain a free-energy functional for the system in a non-extensive thermodynamic limit that is a function of the mean square vortex position @math and solve for @math . Such an explicit formula has never been obtained for a non-2D model. We compare the results of our formula to a 2-D formula of Lim:2005 and show qualitatively different behavior even when we disallow vortex braiding. We further confirm our results using Path Integral Monte Carlo (Ceperley (1995)) permutations and that the Klein, Majda, Damodaran model's asymptotic assumptions for parameters where these deviations occur. | @cite_12 has done some excellent simulations of vortex tangles in He-4 with rotation, boundary walls, and vortex reconnections to study disorder in rotating superfluid turbulence. Because vortex tangles are extremely curved, they applied the full Biot-Savart law to calculate the motion of the filaments in time. Their study did not include any sort of comparison to 2-D models because for most of the simulation vortices were far too tangled. The inclusion of rigid boundary walls, although correct for the study of He-4, also makes the results only tangentially applicable to the KMD system we use. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2586908737"
],
"abstract": [
"Turbulent vortices in smoke flows are crucial for a visually interesting appearance. Unfortunately, it is challenging to efficiently simulate these appealing effects in the framework of vortex filament methods. The vortex filaments in grids scheme allows to efficiently generate turbulent smoke with macroscopic vortical structures, but suffers from the projection-related dissipation, and thus the small-scale vortical structures under grid resolution are hard to capture. In addition, this scheme cannot be applied in wall-bounded turbulent smoke simulation, which requires efficiently handling smoke-obstacle interaction and creating vorticity at the obstacle boundary. To tackle above issues, we propose an effective filament-mesh particle-particle (FMPP) method for fast wall-bounded turbulent smoke simulation with ample details. The Filament-Mesh component approximates the smooth long-range interactions by splatting vortex filaments on grid, solving the Poisson problem with a fast solver, and then interpolating back to smoke particles. The Particle-Particle component introduces smoothed particle hydrodynamics (SPH) turbulence model for particles in the same grid, where interactions between particles cannot be properly captured under grid resolution. Then, we sample the surface of obstacles with boundary particles, allowing the interaction between smoke and obstacle being treated as pressure forces in SPH. Besides, the vortex formation region is defined at the back of obstacles, providing smoke particles flowing by the separation particles with a vorticity force to simulate the subsequent vortex shedding phenomenon. The proposed approach can synthesize the lost small-scale vortical structures and also achieve the smoke-obstacle interaction with vortex shedding at obstacle boundaries in a lightweight manner. The experimental results demonstrate that our FMPP method can achieve more appealing visual effects than vortex filaments in grids scheme by efficiently simulating more vivid thin turbulent features."
]
} |
math-ph0611049 | 1611194078 | Geophysical research has focused on flows, such as ocean currents, as two dimensional. Two dimensional point or blob vortex models have the advantage of having a Hamiltonian, whereas 3D vortex filament or tube systems do not necessarily have one, although they do have action functionals. On the other hand, certain classes of 3D vortex models called nearly parallel vortex filament models do have a Hamiltonian and are more accurate descriptions of geophysical and atmospheric flows than purely 2D models, especially at smaller scales. In these quasi-2D'' models we replace 2D point vortices with vortex filaments that are very straight and nearly parallel but have Brownian variations along their lengths due to local self-induction. When very straight, quasi-2D filaments are expected to have virtually the same planar density distributions as 2D models. An open problem is when quasi-2D model statistics behave differently than those of the related 2D system and how this difference is manifested. In this paper we study the nearly parallel vortex filament model of Klein, Majda, Damodaran in statistical equilibrium. We are able to obtain a free-energy functional for the system in a non-extensive thermodynamic limit that is a function of the mean square vortex position @math and solve for @math . Such an explicit formula has never been obtained for a non-2D model. We compare the results of our formula to a 2-D formula of Lim:2005 and show qualitatively different behavior even when we disallow vortex braiding. We further confirm our results using Path Integral Monte Carlo (Ceperley (1995)) permutations and that the Klein, Majda, Damodaran model's asymptotic assumptions for parameters where these deviations occur. | Other related work on the statistical mechanics of turbulence in 3-D vortex lines can be found in @cite_4 and @cite_3 in addition to @cite_17 . | {
"cite_N": [
"@cite_4",
"@cite_3",
"@cite_17"
],
"mid": [
"2010371781",
"2129152507",
"1966911335"
],
"abstract": [
"We show that the stochastic differential equation (SDE) model for the merger of two identical two-dimensional vortices proposed by Agullo and Verga [“Exact two vortices solution of Navier–Stokes equation,” Phys. Rev. Lett. 78, 2361 (1997)] is a special case of a more general class of SDE models for N interacting vortex filaments. These toy models include vorticity diffusion via a white noise forcing of the inviscid equations, and thus extend inviscid models to include core dynamics and topology change (e.g., merger in two dimensions and vortex reconnection in three dimensions). We demonstrate that although the N=2 two-dimensional model is qualitatively and quantitatively incorrect, it can be dramatically improved by accounting for self-advection. We then extend the two-dimensional SDE model to three dimensions using the semi-inviscid asymptotic approximation of [“Simplified equations for the interactions of nearly parallel vortex filaments,” J. Fluid Mech. 288, 201 (1995)] for nearly parallel...",
"Abstract This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop s on gpu hardware using single precision. The simulations use a vortex particle method to solve the Navier–Stokes equations, with a highly parallel fast multipole method ( fmm ) as numerical engine, and match the current record in mesh size for this application, a cube of 4096 3 computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the fft algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the fmm -based vortex method achieving 74 parallel efficiency on 4096 processes (one gpu per mpi process, 3 gpu s per node of the tsubame -2.0 system). The fft -based spectral method is able to achieve just 14 parallel efficiency on the same number of mpi processes (using only cpu cores), due to the all-to-all communication pattern of the fft algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.",
"New analytical solutions are presented for steadily translating von Karman vortex streets made up of two infinite rows of hollow vortices. First, the solution for a single row of hollow vortices due to [\"Structure of a linear array of hollow vortices of finite cross-section,\" J. Fluid Mech. 74, 469 (1976)] is rederived, in a modified form, and using a new mathematical approach. This approach is then generalized to find relative equilibria for both unstaggered and staggered double hollow vortex streets. The method employs a combination of free streamline theory and conformal mapping ideas. The staggered hollow vortex streets are compared with analogous numerical solutions for double streets of vortex patches due to Saffman and Schatzman [“Properties of a vortex street of finite vortices,” SIAM (Soc. Ind. Appl. Math.) J. Sci. Stat. Comput. 2, 285 (1981)] and several common features are found. In particular, within the two different inviscid vortex models, the same street aspect ratio of approxi..."
]
} |
cs0611003 | 2141738320 | Time synchronization is an important aspect of sensor network operation. However, it is well known that synchronization error accumulates over multiple hops. This presents a challenge for large-scale, multi-hop sensor networks with a large number of nodes distributed over wide areas. In this work, we present a protocol that uses spatial averaging to reduce error accumulation in large-scale networks. We provide an analysis to quantify the synchronization improvement achieved using spatial averaging and find that in a basic cooperative network, the skew and offset variance decrease approximately as 1/N̄ where N̄ is the number of cooperating nodes. For general networks, simulation results and a comparison to basic cooperative network results are used to illustrate the improvement in synchronization performance. | The traditional synchronization techniques described in @cite_11 @cite_13 @cite_1 @cite_4 @cite_10 all operate fundamentally on the idea of communicating timing information from one set of nodes to the next. One other approach to synchronization that has recently received much attention is to apply mathematical models of natural phenomena to engineered networks. A model for the emergence of synchrony in pulse-coupled oscillators was developed in @cite_14 for a fully-connected group of identical oscillators. In @cite_5 , this convergence to synchrony result was extended to networks that were not fully connected. | {
"cite_N": [
"@cite_13",
"@cite_14",
"@cite_4",
"@cite_1",
"@cite_5",
"@cite_10",
"@cite_11"
],
"mid": [
"2058455332",
"1116550701",
"2234682834",
"1520594164",
"1548544878",
"2090768129",
"2098897082"
],
"abstract": [
"Solutions for time synchronization based on coupled oscillators operate in a self-organizing and adaptive manner and can be applied to various types of dynamic networks. The basic idea was inspired by swarms of fireflies, whose flashing dynamics shows an emergent behavior. This article introduces such a synchronization technique whose main components are “inhibitory coupling” and “self-adjustment.” Based on this new technique, a number of contributions are made. First, we prove that inhibitory coupling can lead to perfect synchrony independent of initial conditions for delay-free environments and homogeneous oscillators. Second, relaxing the assumptions to systems with delays and different phase rates, we prove that such systems synchronize up to a certain precision bound. We derive this bound assuming inhomogeneous delays and show by simulations that it gives a good estimate in strongly-coupled systems. Third, we show that inhibitory coupling with self-adjustment quickly leads to synchrony with a precision comparable to that of excitatory coupling. Fourth, we analyze the robustness against faulty members performing incorrect coupling. While the specific precision-loss encountered by such disturbances depends on system parameters, the system always regains synchrony for the investigated scenarios.",
"People have always tried to understand natural phenomena. In computer science natural phenomena are mostly used as a source of inspiration for solving various problems in distributed systems such as optimization, clustering, and data processing. In this paper we will give an overview of research in field of computer science where fireflies in nature are used as role models for time synchronization. We will compare two models of oscillators that explain firefly synchronization along with other phenomena of synchrony in nature (e.g., synchronization of pacemaker cells of the heart and synchronization of neuron networks of the circadian pacemaker). Afterwards, we will present Mirollo and Strogatz's pulse coupled oscillator model together with its limitations. As discussed by the authors of the model, this model lacks of explanation what happens when oscillators are nonidentical. It also does not support mobile and faulty oscillators. Finally, it does not take into consideration that in communication among oscillators there are communication delays. Since these limitations prevent Mirollo and Strogatz's model to be used in real-world environments (such as Machine-to-Machine systems), we will sum up related work in which scholars investigated how to modify the model in order for it to be applicable in distributed systems. However, one has to bear in mind that there are usually large differences between mathematical models in theory and their implementation in practice. Therefore, we give an overview of both mathematical models and mechanisms in distributed systems that were designed after them.",
"The convergence and precision of synchronization algorithms based on the theory of pulse-coupled oscillators is evaluated on programmable radios. Measurements in different wireless topologies show that such algorithms reach precisions in the low microsecond range. Based on the observation that phase rate deviation among radios is a limiting factor for the achievable precision, we propose a distributed algorithm for automatic phase rate equalization and show by experiments that an improved precision below one microsecond is possible in the given setups. It is also experimentally demonstrated that the stochastic nature of coupling is a key ingredient for convergence to synchrony. The proposed scheme can be applied in wireless systems for distributed synchronization of transmission slots, or sleep cycles.",
"Consider a distributed network of n nodes that is connected to a global source of \"beats\". All nodes receive the \"beats\" simultaneously, and operate in lock-step. A scheme that produces a \"pulse\" every Cycle beats is shown. That is, the nodes agree on \"special beats\", which are spaced Cycle beats apart. Given such a scheme, a clock synchronization algorithm is built. The \"pulsing\" scheme is self-stabilized despite any transient faults and the continuous presence of up to f < n 3 Byzantine nodes. Therefore, the clock synchronization built on top of the \"pulse\" is highly fault tolerant. In addition, a highly fault tolerant general stabilizer algorithm is constructed on top of the \"pulse\" mechanism. Previous clock synchronization solutions, operating in the exact same model as this one, either support f < n 4 and converge in linear time, or support f < n 3 and have exponential convergence time that also depends on the value of max-clock (the clock wrap around value). The proposed scheme combines the best of both worlds: it converges in linear time that is independent of max-clock and is tolerant to up to f < n 3 Byzantine nodes. Moreover, considering problems in a self-stabilizing, Byzantine tolerant environment that require nodes to know the global state (clock synchronization, token circulation, agreement, etc.), the work presented here is the first protocol to operate in a network that is not fully connected.",
"\"Pulse Synchronization\" intends to invoke a recurring distributed event at the different nodes, of a distributed system as simultaneously as possible and with a frequency that matches a predetermined regularity. This paper shows how to achieve that goal when the system is facing both transient and permanent (Byzantine) failures. Byzantine nodes might incessantly try to de-synchronize the correct nodes. Transient failures might throw the system into an arbitrary state in which correct nodes have no common notion what-so-ever, such as time or round numbers, and thus cannot use any aspect of their own local states to infer anything about the states of other correct nodes. The algorithm we present here guarantees that eventually all correct nodes will invoke their pulses within a very short time interval of each other and will do so regularly. The problem of pulse synchronization was recently solved in a system in which there exists an outside beat system that synchronously signals all nodes at once. In this paper we present a solution for a bounded-delay system. When the system in a steady state, a message sent by a correct node arrives and is processed by all correct nodes within a bounded time, say d time units, where at steady state the number of Byzantine nodes, f, should obey the n > 3f inequality, for a network of n nodes.",
"Given a set of sensor nodes V where each node wants to broadcast a message to all its neighbors that are within a certain broadcasting range, the local broadcasting problem is to schedule all these requests in as few timeslots as possible. In this paper, assuming the more realistic physical interference model and no knowledge of the topology, we present three distributed local broadcasting algorithms where the first one is for the asynchronized model and the other two are for the synchronized model. Under the asynchronized model, nodes may join the execution of the protocol at any time and do not have access to a global clock, for which we give a distributed randomized algorithm with approximation ratio O(log2 n). This improves the state-of-the-art result given in [14] by a logarithmic factor. For the synchronized model where communications among nodes are synchronous and nodes can perform physical carrier sensing, we propose two distributed deterministic local broadcasting algorithms for synchronous and asynchronous node wakeups, respectively. Both algorithms have approximation ratio O(log n).",
"Synchronization is considered a particularly difficult task in wireless sensor networks due to its decentralized structure. Interestingly, synchrony has often been observed in networks of biological agents (e.g., synchronously flashing fireflies, or spiking of neurons). In this paper, we propose a bio-inspired network synchronization protocol for large scale sensor networks that emulates the simple strategies adopted by the biological agents. The strategy synchronizes pulsing devices that are led to emit their pulses periodically and simultaneously. The convergence to synchrony of our strategy follows from the theory of Mirollo and Strogatz, 1990, while the scalability is evident from the many examples existing in the natural world. When the nodes are within a single broadcast range, our key observation is that the dependence of the synchronization time on the number of nodes N is subject to a phase transition: for values of N beyond a specific threshold, the synchronization is nearly immediate; while for smaller N, the synchronization time decreases smoothly with respect to N. Interestingly, a tradeoff is observed between the total energy consumption and the time necessary to reach synchrony. We obtain an optimum operating point at the local minimum of the energy consumption curve that is associated to the phase transition phenomenon mentioned before. The proposed synchronization protocol is directly applied to the cooperative reach-back communications problem. The main advantages of the proposed method are its scalability and low complexity."
]
} |
cs0611003 | 2141738320 | Time synchronization is an important aspect of sensor network operation. However, it is well known that synchronization error accumulates over multiple hops. This presents a challenge for large-scale, multi-hop sensor networks with a large number of nodes distributed over wide areas. In this work, we present a protocol that uses spatial averaging to reduce error accumulation in large-scale networks. We provide an analysis to quantify the synchronization improvement achieved using spatial averaging and find that in a basic cooperative network, the skew and offset variance decrease approximately as 1/N̄ where N̄ is the number of cooperating nodes. For general networks, simulation results and a comparison to basic cooperative network results are used to illustrate the improvement in synchronization performance. | The convergence result is clearly desirable for synchronization in networks, and in @cite_0 theoretical and simulation results suggested that such a technique could be adapted to communication and sensor networks. Experimental validation for the ideas of @cite_14 was obtained in @cite_18 where the authors implemented the Reachback Firefly Algorithm (RFA) on TinyOS-based motes. | {
"cite_N": [
"@cite_0",
"@cite_18",
"@cite_14"
],
"mid": [
"2030381790",
"2234682834",
"2098897082"
],
"abstract": [
"Synchronicity is a useful abstraction in many sensor network applications. Communication scheduling, coordinated duty cycling, and time synchronization can make use of a synchronicity primitive that achieves a tight alignment of individual nodes' firing phases. In this paper we present the Reachback Firefly Algorithm (RFA), a decentralized synchronicity algorithm implemented on TinyOS-based motes. Our algorithm is based on a mathematical model that describes how fireflies and neurons spontaneously synchronize. Previous work has assumed idealized nodes and not considered realistic effects of sensor network communication, such as message delays and loss. Our algorithm accounts for these effects by allowing nodes to use delayed information from the past to adjust the future firing phase. We present an evaluation of RFA that proceeds on three fronts. First, we prove the convergence of our algorithm in simple cases and predict the effect of parameter choices. Second, we leverage the TinyOS simulator to investigate the effects of varying parameter choice and network topology. Finally, we present results obtained on an indoor sensor network testbed demonstrating that our algorithm can synchronize sensor network devices to within 100 μsec on a real multi-hop topology with links of varying quality.",
"The convergence and precision of synchronization algorithms based on the theory of pulse-coupled oscillators is evaluated on programmable radios. Measurements in different wireless topologies show that such algorithms reach precisions in the low microsecond range. Based on the observation that phase rate deviation among radios is a limiting factor for the achievable precision, we propose a distributed algorithm for automatic phase rate equalization and show by experiments that an improved precision below one microsecond is possible in the given setups. It is also experimentally demonstrated that the stochastic nature of coupling is a key ingredient for convergence to synchrony. The proposed scheme can be applied in wireless systems for distributed synchronization of transmission slots, or sleep cycles.",
"Synchronization is considered a particularly difficult task in wireless sensor networks due to its decentralized structure. Interestingly, synchrony has often been observed in networks of biological agents (e.g., synchronously flashing fireflies, or spiking of neurons). In this paper, we propose a bio-inspired network synchronization protocol for large scale sensor networks that emulates the simple strategies adopted by the biological agents. The strategy synchronizes pulsing devices that are led to emit their pulses periodically and simultaneously. The convergence to synchrony of our strategy follows from the theory of Mirollo and Strogatz, 1990, while the scalability is evident from the many examples existing in the natural world. When the nodes are within a single broadcast range, our key observation is that the dependence of the synchronization time on the number of nodes N is subject to a phase transition: for values of N beyond a specific threshold, the synchronization is nearly immediate; while for smaller N, the synchronization time decreases smoothly with respect to N. Interestingly, a tradeoff is observed between the total energy consumption and the time necessary to reach synchrony. We obtain an optimum operating point at the local minimum of the energy consumption curve that is associated to the phase transition phenomenon mentioned before. The proposed synchronization protocol is directly applied to the cooperative reach-back communications problem. The main advantages of the proposed method are its scalability and low complexity."
]
} |
cs0611003 | 2141738320 | Time synchronization is an important aspect of sensor network operation. However, it is well known that synchronization error accumulates over multiple hops. This presents a challenge for large-scale, multi-hop sensor networks with a large number of nodes distributed over wide areas. In this work, we present a protocol that uses spatial averaging to reduce error accumulation in large-scale networks. We provide an analysis to quantify the synchronization improvement achieved using spatial averaging and find that in a basic cooperative network, the skew and offset variance decrease approximately as 1/N̄ where N̄ is the number of cooperating nodes. For general networks, simulation results and a comparison to basic cooperative network results are used to illustrate the improvement in synchronization performance. | The problem with these emergent synchronization results is that the fundamental theory assumes all nodes have nearly the same firing period. Results from @cite_0 and @cite_18 show that the convergence results may hold when nodes have approximately the same firing period, but the authors of @cite_18 explain that clock skew will degrade synchronization performance. Since we are not aware of any results that provide an extension to deal with networks of nodes with arbitrary firing periods, our work focuses on synchronization algorithms that explicitly estimate clock skew. | {
"cite_N": [
"@cite_0",
"@cite_18"
],
"mid": [
"2143450555",
"1995261392"
],
"abstract": [
"We introduce the distributed gradient clock synchronization problem. As in traditional distributed clock synchronization, we consider a network of nodes equipped with hardware clocks with bounded drift. Nodes compute logical clock values based on their hardware clocks and message exchanges, and the goal is to synchronize the nodes' logical clocks as closely as possible, while satisfying certain validity conditions. The new feature of gradient clock synchronization (GCS for short) is to require that the skew between any two nodes' logical clocks be bounded by a nondecreasing function of the uncertainty in message delay (call this the distance) between the two nodes. That is, we require nearby nodes to be closely synchronized, and allow faraway nodes to be more loosely synchronized. We contrast GCS with traditional clock synchronization, and discuss several practical motivations for GCS, mostly arising in sensor and ad hoc networks. Our main result is that the worst case clock skew between two nodes at distance d from each other is Ω(d + log D log log D), where D is the diameter1 of the network. This means that clock synchronization is not a local property, in the sense that the clock skew between two nodes depends not only on the distance between the nodes, but also on the size of the network. Our lower bound implies, for example, that the TDMA protocol with a fixed slot granularity will fail as the network grows, even if the maximum degree of each node stays constant.",
"We present a distributed clock synchronization algorithm that guarantees an exponentially improved bound of O(log D) on the clock skew between neighboring nodes in any graph G of diameter D. In light of the lower bound of Omega(log D log log D), this result is almost tight. Moreover, the global clock skew between any two nodes, particularly nodes that are not directly connected, is bounded by O(D), which is optimal up to a constant factor. Our algorithm further ensures that the clock values are always within a linear envelope of real time. A better bound on the accuracy with respect to real time cannot be achieved in the absence of an external timer. These results all hold in a general model where both the clock drifts and the message delays may vary arbitrarily within pre-specified bounds."
]
} |
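The 1/N̄ variance scaling claimed in the abstract above is easy to reproduce in the simplest possible setting: if each of N cooperating nodes contributes an independent, equally noisy observation of a common offset, the variance of the averaged estimate drops as 1/N. The sketch below is only this idealized sanity check — the Gaussian noise model, noise level, trial count, and true offset are assumptions, not the paper's network model.

```python
import random
import statistics

def averaged_offset_variance(num_nodes, trials=20000, noise_std=1.0, true_offset=3.0, seed=7):
    """Empirical variance of an offset estimate formed by averaging the noisy
    observations of num_nodes cooperating nodes (i.i.d. Gaussian noise assumed)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        observations = [true_offset + rng.gauss(0.0, noise_std) for _ in range(num_nodes)]
        estimates.append(sum(observations) / num_nodes)
    return statistics.pvariance(estimates)

if __name__ == "__main__":
    for n in (1, 4, 16, 64):
        print(f"N = {n:2d}   empirical variance = {averaged_offset_variance(n):.4f}   1/N = {1 / n:.4f}")
```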
cs0611052 | 2953215000 | For a large number of random constraint satisfaction problems, such as random k-SAT and random graph and hypergraph coloring, there are very good estimates of the largest constraint density for which solutions exist. Yet, all known polynomial-time algorithms for these problems fail to find solutions even at much lower densities. To understand the origin of this gap we study how the structure of the space of solutions evolves in such problems as constraints are added. In particular, we prove that much before solutions disappear, they organize into an exponential number of clusters, each of which is relatively small and far apart from all other clusters. Moreover, inside each cluster most variables are frozen, i.e., take only one value. The existence of such frozen variables gives a satisfying intuitive explanation for the failure of the polynomial-time algorithms analyzed so far. At the same time, our results establish rigorously one of the two main hypotheses underlying Survey Propagation, a heuristic introduced by physicists in recent years that appears to perform extraordinarily well on random constraint satisfaction problems. | Finally, the authors prove that the property has a pair of satisfying assignments at distance @math " has a sharp threshold, thus boosting their constant probability result for having a pair of satisfying assignments at a given distance to a high probability one. To the best of our understanding, these three are the only results established in @cite_0 . Combined, they imply that for every @math , there is @math and constants @math , such that in @math : W.h.p. every pair of satisfying assignments has distance either less than @math or more than @math . For every @math , there is a pair of truth assignments that have distance @math . We note that even if the maximizer in the second moment computation was determined rigorously and coincided with the heuristic guess of @cite_0 , the strongest statement that can be inferred from the above two assertions in terms of establishing clustering" is: for every @math , there is @math , such that @math has at least two clusters. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2950981509"
],
"abstract": [
"Let @math be a tree space (or tree network) represented by a weighted tree with @math vertices, and @math be a set of @math stochastic points in @math , each of which has a fixed location with an independent existence probability. We investigate two fundamental problems under such a stochastic setting, the closest-pair problem and the nearest-neighbor search. For the former, we study the computation of the @math -threshold probability and the expectation of the closest-pair distance of a realization of @math . We propose the first algorithm to compute the @math -threshold probability in @math time for any given threshold @math , which immediately results in an @math -time algorithm for computing the expected closest-pair distance. Based on this, we further show that one can compute a @math -approximation for the expected closest-pair distance in @math time, by arguing that the expected closest-pair distance can be approximated via @math threshold probability queries. For the latter, we study the @math most-likely nearest-neighbor search ( @math -LNN) via a notion called @math most-likely Voronoi Diagram ( @math -LVD). We show that the size of the @math -LVD @math of @math on @math is bounded by @math if the existence probabilities of the points in @math are constant-far from 0. Furthermore, we establish an @math average-case upper bound for the size of @math , by regarding the existence probabilities as i.i.d. random variables drawn from some fixed distribution. Our results imply the existence of an LVD data structure which answers @math -LNN queries in @math time using average-case @math space, and worst-case @math space if the existence probabilities are constant-far from 0. Finally, we also give an @math -time algorithm to construct the LVD data structure."
]
} |
cs0611135 | 2952342381 | Support Vector Machines (SVMs) are well-established Machine Learning (ML) algorithms. They rely on the fact that i) linear learning can be formalized as a well-posed optimization problem; ii) non-linear learning can be brought into linear learning thanks to the kernel trick and the mapping of the initial search space onto a high dimensional feature space. The kernel is designed by the ML expert and it governs the efficiency of the SVM approach. In this paper, a new approach for the automatic design of kernels by Genetic Programming, called the Evolutionary Kernel Machine (EKM), is presented. EKM combines a well-founded fitness function inspired from the margin criterion, and a co-evolution framework ensuring the computational scalability of the approach. Empirical validation on standard ML benchmarks demonstrates that EKM is competitive with state-of-the-art SVMs with tuned hyper-parameters. | The most relevant work to EKM is the Genetic Kernel Support Vector Machine (GK-SVM) @cite_1 . GK-SVM similarly uses GP within an SVM-based approach, with two main differences compared to EKM. On one hand, GK-SVM focuses on feature construction, using GP to optimize mapping @math (instead of the kernel). On the other hand, the fitness function used in GK-SVM suffers from a quadratic complexity in the number of training examples. Accordingly, all datasets but one considered in the experimentations are small (less than 200 examples). On a larger dataset, the authors acknowledge that their approach does not improve on a standard SVM with well chosen parameters. Another related work similarly uses GP for feature construction, in order to classify time series @cite_16 . The set of features (GP trees) is further evolved using a GA, where the fitness function is based on the accuracy of an SVM classifier. Most other works related to evolutionary optimization within SVMs (see @cite_15 ) actually focus on parametric optimization, e.g. achieving feature selection or tuning some parameters. | {
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_1"
],
"mid": [
"1978917315",
"1986490585",
"2134380836"
],
"abstract": [
"The Support Vector Machine (SVM) has emerged in recent years as a popular approach to the classification of data. One problem that faces the user of an SVM is how to choose a kernel and the specific parameters for that kernel. Applications of an SVM therefore require a search for the optimum settings for a particular problem. This paper proposes a classification technique, which we call the Genetic Kernel SVM (GK SVM), that uses Genetic Programming to evolve a kernel for a SVM classifier. Results of initial experiments with the proposed technique are presented. These results are compared with those of a standard SVM classifier using the Polynomial, RBF and Sigmoid kernel with various parameter settings",
"The problem of model selection for support vector machines (SVMs) is considered. We propose an evolutionary approach to determine multiple SVM hyperparameters: The covariance matrix adaptation evolution strategy (CMA-ES) is used to determine the kernel from a parameterized kernel space and to control the regularization. Our method is applicable to optimize non-differentiable kernel functions and arbitrary model selection criteria. We demonstrate on benchmark datasets that the CMA-ES improves the results achieved by grid search already when applied to few hyperparameters. Further, we show that the CMA-ES is able to handle much more kernel parameters compared to grid-search and that tuning of the scaling and the rotation of Gaussian kernels can lead to better results in comparison to standard Gaussian kernels with a single bandwidth parameter. In particular, more flexibility of the kernel can reduce the number of support vectors.",
"Straightforward classification using kernelized SVMs requires evaluating the kernel for a test vector and each of the support vectors. For a class of kernels we show that one can do this much more efficiently. In particular we show that one can build histogram intersection kernel SVMs (IKSVMs) with runtime complexity of the classifier logarithmic in the number of support vectors as opposed to linear for the standard approach. We further show that by precomputing auxiliary tables we can construct an approximate classifier with constant runtime and space requirements, independent of the number of support vectors, with negligible loss in classification accuracy on various tasks. This approximation also applies to 1 - chi2 and other kernels of similar form. We also introduce novel features based on a multi-level histograms of oriented edge energy and present experiments on various detection datasets. On the INRIA pedestrian dataset an approximate IKSVM classifier based on these features has the current best performance, with a miss rate 13 lower at 10-6 False Positive Per Window than the linear SVM detector of Dalal & Triggs. On the Daimler Chrysler pedestrian dataset IKSVM gives comparable accuracy to the best results (based on quadratic SVM), while being 15times faster. In these experiments our approximate IKSVM is up to 2000times faster than a standard implementation and requires 200times less memory. Finally we show that a 50times speedup is possible using approximate IKSVM based on spatial pyramid features on the Caltech 101 dataset with negligible loss of accuracy."
]
} |
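Since EKM builds kernels as expression trees over elementary kernels, it may help to recall why such compositions are legitimate: sums and products of positive semidefinite kernels remain positive semidefinite. The sketch below checks this numerically for one arbitrary, hand-written composition — the particular tree, the hyperparameter values, and the random data are illustrative assumptions, not a kernel evolved by EKM.

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

def poly(x, y, degree=2, c=1.0):
    return float((np.dot(x, y) + c) ** degree)

def composed_kernel(x, y):
    """A small GP-style expression tree over elementary kernels:
    k(x, y) = rbf(x, y) + rbf_wide(x, y) * poly(x, y)."""
    return rbf(x, y) + rbf(x, y, gamma=0.1) * poly(x, y)

def gram_matrix(kernel, X):
    n = len(X)
    return np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 3))
    K = gram_matrix(composed_kernel, X)
    print("symmetric:", np.allclose(K, K.T))
    print("smallest eigenvalue (should not be significantly negative):",
          float(np.linalg.eigvalsh(K).min()))
```

A Gram matrix like K can then be handed to any kernel machine that accepts precomputed kernels, which is one common way to plug a composed or evolved kernel into an off-the-shelf SVM.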
hep-th0610185 | 1993186195 | We investigate the nonperturbative quantization of phantom and ghost degrees of freedom by relating their representations in definite and indefinite inner product spaces. For a large class of potentials, we argue that the same physical information can be extracted from either representation. We provide a definition of the path integral for these theories, even in cases where the integrand may be exponentially unbounded, thereby removing some previous obstacles to their nonperturbative study. We apply our results to the study of ghost fields of Pauli–Villars and Lee–Wick type, and we show in the context of a toy model how to derive, from an exact nonperturbative path integral calculation, previously ad hoc prescriptions for Feynman diagram contour integrals in the presence of complex energies. We point out that the pole prescriptions obtained in ghost theories are opposite to what would have been expected if one had added conventional i∊ convergence factors in the path integral. | In @cite_21 @cite_30 Erdem, and in @cite_25 't Hooft and Nobbenhuis discuss a novel kind of symmetry transformation consisting of a rotation of real positional coordinates to the imaginary axis, with the aim of ruling out a cosmological constant. The rotated representation that these authors use for their non-relativistic particle toy modle is identical to the one we discuss in section for the indefinite inner product theory. These authors are particularly interested in the relationship between the real and imaginary coordinate representations. Since we study this relationship in detail in the present article, it is conceivable that our mathematical framework may have further applications in this direction. | {
"cite_N": [
"@cite_30",
"@cite_21",
"@cite_25"
],
"mid": [
"2084925837",
"2127952943",
"2162227770"
],
"abstract": [
"This paper introduces an algorithm for the registration of rotated and translated volumes using the three-dimensional (3-D) pseudopolar Fourier transform, which accurately computes the Fourier transform of the registered volumes on a near-spherical 3-D domain without using interpolation. We propose a three-step procedure. The first step estimates the rotation axis. The second step computes the planar rotation relative to the rotation axis. The third step recovers the translational displacement. The rotation estimation is based on Euler's theorem, which allows one to represent a 3-D rotation as a planar rotation around a 3-D rotation axis. This axis is accurately recovered by the 3-D pseudopolar Fourier transform using radial integrations. The residual planar rotation is computed by an extension of the angular difference function to cylindrical motion. Experimental results show that the algorithm is accurate and robust to noise",
"We present a spectral approach for detecting and analyzing rotational and reflectional symmetries in n-dimensions. Our main contribution is the derivation of a symmetry detection and analysis scheme for sets of points IRn and its extension to image analysis by way of local features. Each object is represented by a set of points S ∈ IRn, where the symmetry is manifested by the multiple self-alignments of S . The alignment problem is formulated as a quadratic binary optimization problem, with an efficient solution via spectral relaxation. For symmetric objects, this results in a multiplicity of eigenvalues whose corresponding eigenvectors allow the detection and analysis of both types of symmetry. We improve the scheme's robustness by incorporating geometrical constraints into the spectral analysis. Our approach is experimentally verified by applying it to 2D and 3D synthetic objects as well as real images.",
"It is shown that for small, spherically symmetric perturbations of asymptotically flat two-ended Reissner–Nordstrom data for the Einstein–Maxwell-real scalar field system, the boundary of the dynamic spacetime which evolves is globally represented by a bifurcate null hypersurface across which the metric extends continuously. Under additional assumptions, it is shown that the Hawking mass blows up identically along this bifurcate null hypersurface, and thus the metric cannot be extended twice differentiably; in fact, it cannot be extended in a weaker sense characterized at the level of the Christoffel symbols. The proof combines estimates obtained in previous work with an elementary Cauchy stability argument. There are no restrictions on the size of the support of the scalar field, and the result applies to both the future and past boundary of spacetime. In particular, it follows that for an open set in the moduli space of solutions around Reissner–Nordstrom, there is no spacelike component of either the future or the past singularity."
]
} |
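The related work above mentions rotating real coordinates to the imaginary axis. A one-dimensional toy version of that manipulation is the rotation of a Gaussian integration contour: for rotation angles below π/4, the integral of exp(-z²) along the rotated ray still equals √π. The numerical check below illustrates only this generic textbook fact, not the paper's construction; the angles, cutoff, and step count are arbitrary choices.

```python
import cmath
import math

def rotated_gaussian_integral(theta, t_max=30.0, steps=200_000):
    """Trapezoidal integration of exp(-z^2) along the ray z = t*exp(i*theta),
    t in [-t_max, t_max].  For |theta| < pi/4 the integrand still decays and
    contour rotation predicts the value sqrt(pi)."""
    phase = cmath.exp(1j * theta)
    dt = 2.0 * t_max / steps
    total = 0j
    for k in range(steps + 1):
        t = -t_max + k * dt
        weight = 0.5 if k in (0, steps) else 1.0
        total += weight * cmath.exp(-(t * phase) ** 2)
    return total * phase * dt

if __name__ == "__main__":
    print("sqrt(pi) =", math.sqrt(math.pi))
    for theta in (0.0, 0.3, 0.6):               # all below pi/4 ~ 0.785
        value = rotated_gaussian_integral(theta)
        print(f"theta = {theta:.1f}   real part = {value.real:.6f}   imag part = {value.imag:.1e}")
```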
hep-th0610185 | 1993186195 | We investigate the nonperturbative quantization of phantom and ghost degrees of freedom by relating their representations in definite and indefinite inner product spaces. For a large class of potentials, we argue that the same physical information can be extracted from either representation. We provide a definition of the path integral for these theories, even in cases where the integrand may be exponentially unbounded, thereby removing some previous obstacles to their nonperturbative study. We apply our results to the study of ghost fields of Pauli–Villars and Lee–Wick type, and we show in the context of a toy model how to derive, from an exact nonperturbative path integral calculation, previously ad hoc prescriptions for Feynman diagram contour integrals in the presence of complex energies. We point out that the pole prescriptions obtained in ghost theories are opposite to what would have been expected if one had added conventional i∊ convergence factors in the path integral. | Also related to the cosmological constant problem is the paper @cite_11 , which introduces phantom fields to cancel the ordinary matter contribution to the vacuum energy. Again, our non-perturbative approach to phantom fields may be useful in the study of these models. | {
"cite_N": [
"@cite_11"
],
"mid": [
"1983032885"
],
"abstract": [
"We study a symmetry, schematically Energy → — Energy, which suppresses matter contributions to the cosmological constant. The requisite negative energy fluctuations are identified with a ghost'' copy of the Standard Model. Gravity explicitly, but weakly, violates the symmetry, and naturalness requires General Relativity to break down at short distances with testable consequences. If this breakdown is accompanied by gravitational Lorentz-violation, the decay of flat spacetime by ghost production is acceptably slow. We show that inflation works in our scenario and can lead to the initial conditions required for standard Big Bang cosmology."
]
} |
hep-th0610185 | 1993186195 | We investigate the nonperturbative quantization of phantom and ghost degrees of freedom by relating their representations in definite and indefinite inner product spaces. For a large class of potentials, we argue that the same physical information can be extracted from either representation. We provide a definition of the path integral for these theories, even in cases where the integrand may be exponentially unbounded, thereby removing some previous obstacles to their nonperturbative study. We apply our results to the study of ghost fields of Pauli–Villars and Lee–Wick type, and we show in the context of a toy model how to derive, from an exact nonperturbative path integral calculation, previously ad hoc prescriptions for Feynman diagram contour integrals in the presence of complex energies. We point out that the pole prescriptions obtained in ghost theories are opposite to what would have been expected if one had added conventional i∊ convergence factors in the path integral. | In scattering theory, so-called Siegert or Gamow states may be used to represent resonances. These are states with complex momentum @cite_5 , which may be given precise mathematical meaning in the framework of section of the current article, where such states are defined as distributions on test function spaces of Gel'fand-Shilov type. Since we are mainly interested in field theory applications where interactions are polynomial, we do not treat sufficiently general potentials for our results to be directly applicable to many traditional non-relativistic scattering problems, but perhaps the mathematical machinery can be generalized. | {
"cite_N": [
"@cite_5"
],
"mid": [
"1599439450"
],
"abstract": [
"We study individual eigenstates of quantized area-preserving maps on the 2-torus which are classically chaotic. In order to analyze their semiclassical behavior, we use the Bargmann–Husimi representations for quantum states as well as their stellar parametrization, which encodes states through a minimal set of points in phase space (the constellation of zeros of the Husimi density). We rigorously prove that a semiclassical uniform distribution of Husimi densities on the torus entails a similar equidistribution for the corresponding constellations. We deduce from this property a universal behavior for the phase patterns of chaotic Bargmann eigenfunctions which is reminiscent of the WKB approximation for eigenstates of integrable systems (though in a weaker sense). In order to obtain more precise information on “chaotic eigenconstellations,” we then model their properties by ensembles of random states, generalizing former results on the 2-sphere to the torus geometry. This approach yields statistical predictions for the constellations which fit quite well the chaotic data. We finally observe that specific dynamical information, e.g., the presence of high peaks (like scars) in Husimi densities, can be recovered from the knowledge of a few long-wavelength Fourier coefficients, which therefore appear as valuable order parameters at the level of individual chaotic eigenfunctions."
]
} |
hep-th0610185 | 1993186195 | We investigate the nonperturbative quantization of phantom and ghost degrees of freedom by relating their representations in definite and indefinite inner product spaces. For a large class of potentials, we argue that the same physical information can be extracted from either representation. We provide a definition of the path integral for these theories, even in cases where the integrand may be exponentially unbounded, thereby removing some previous obstacles to their nonperturbative study. We apply our results to the study of ghost fields of Pauli–Villars and Lee–Wick type, and we show in the context of a toy model how to derive, from an exact nonperturbative path integral calculation, previously ad hoc prescriptions for Feynman diagram contour integrals in the presence of complex energies. We point out that the pole prescriptions obtained in ghost theories are opposite to what would have been expected if one had added conventional i∊ convergence factors in the path integral. | The author of @cite_12 describes a treatment of these states in the context of rigged Hilbert spaces. His construction should be very closely related to the one of the current article. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2949432135"
],
"abstract": [
"In the present paper I show how it is possible to derive the Hilbert space formulation of Quantum Mechanics from a comprehensive definition of \"physical experiment\" and assuming \"experimental accessibility and simplicity\" as specified by five simple Postulates. This accomplishes the program presented in form of conjectures in the previous paper quant-ph 0506034. Pivotal roles are played by the \"local observability principle\", which reconciles the holism of nonlocality with the reductionism of local observation, and by the postulated existence of \"informationally complete observables\" and of a \"symmetric faithful state\". This last notion allows one to introduce an operational definition for the real version of the \"adjoint\"--i. e. the transposition--from which one can derive a real Hilbert-space structure via either the Mackey-Kakutani or the Gelfand-Naimark-Segal constructions. Here I analyze in detail only the Gelfand-Naimark-Segal construction, which leads to a real Hilbert space structure analogous to that of (classes of generally unbounded) selfadjoint operators in Quantum Mechanics. For finite dimensions, general dimensionality theorems that can be derived from a local observability principle, allow us to represent the elements of the real Hilbert space as operators over an underlying complex Hilbert space (see, however, a still open problem at the end of the paper). The route for the present operational axiomatization was suggested by novel ideas originated from Quantum Tomography."
]
} |
math0610051 | 2950957448 | We introduce a general purpose algorithm for rapidly computing certain types of oscillatory integrals which frequently arise in problems connected to wave propagation and general hyperbolic equations. The problem is to evaluate numerically a so-called Fourier integral operator (FIO) of the form @math at points given on a Cartesian grid. Here, @math is a frequency variable, @math is the Fourier transform of the input @math , @math is an amplitude and @math is a phase function, which is typically as large as @math ; hence the integral is highly oscillatory at high frequencies. Because an FIO is a dense matrix, a naive matrix vector product with an input given on a Cartesian grid of size @math by @math would require @math operations. This paper develops a new numerical algorithm which requires @math operations, and as low as @math in storage space. It operates by localizing the integral over polar wedges with small angular aperture in the frequency plane. On each wedge, the algorithm factorizes the kernel @math into two components: 1) a diffeomorphism which is handled by means of a nonuniform FFT and 2) a residual factor which is handled by numerical separation of the spatial and frequency variables. The key to the complexity and accuracy estimates is that the separation rank of the residual kernel is provably independent of the problem size. Several numerical examples demonstrate the efficiency and accuracy of the proposed methodology. We also discuss the potential of our ideas for various applications such as reflection seismology. | In the case where @math , the operator is said to be pseudodifferential. In this simpler setting, it is known that separated variables expansions of the symbol @math are good strategies for reducing complexity. For instance, Bao and Symes @cite_8 propose a numerical method based on a Fourier series expansion of the symbol in the angular variable arg @math , and a polyhomogeneous expansion in @math , which is a particularly effective example of separation of variables. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2962804622"
],
"abstract": [
"We consider the perturbation of elliptic pseudodifferential operators @math with more than square integrable Green's functions by random, rapidly varying, sufficiently mixing, potentials of the form @math . We analyze the source and spectral problems associated with such operators and show that the rescaled difference between the perturbed and unperturbed solutions may be written asymptotically as @math as explicit Gaussian processes. Such results may be seen as central limit corrections to homogenization (law of large numbers). Similar results are derived for more general elliptic equations with random coefficients in one dimension of space. The results are based on the availability of a rapidly converging integral formulation for the perturbed solutions and on the use of classical central limit results for random processes with appropriate mixing conditions."
]
} |
math0610051 | 2950957448 | We introduce a general purpose algorithm for rapidly computing certain types of oscillatory integrals which frequently arise in problems connected to wave propagation and general hyperbolic equations. The problem is to evaluate numerically a so-called Fourier integral operator (FIO) of the form @math at points given on a Cartesian grid. Here, @math is a frequency variable, @math is the Fourier transform of the input @math , @math is an amplitude and @math is a phase function, which is typically as large as @math ; hence the integral is highly oscillatory at high frequencies. Because an FIO is a dense matrix, a naive matrix vector product with an input given on a Cartesian grid of size @math by @math would require @math operations. This paper develops a new numerical algorithm which requires @math operations, and as low as @math in storage space. It operates by localizing the integral over polar wedges with small angular aperture in the frequency plane. On each wedge, the algorithm factorizes the kernel @math into two components: 1) a diffeomorphism which is handled by means of a nonuniform FFT and 2) a residual factor which is handled by numerical separation of the spatial and frequency variables. The key to the complexity and accuracy estimates is that the separation rank of the residual kernel is provably independent of the problem size. Several numerical examples demonstrate the efficiency and accuracy of the proposed methodology. We also discuss the potential of our ideas for various applications such as reflection seismology. | We would also like to acknowledge the line of research related to Filon-type quadratures for oscillatory integrals @cite_14 . When the integrand is of the form @math with @math smooth and @math large, it is not always necessary to sample the integrand at the Nyquist rate. For instance, integration of a polynomial interpolant of @math (Filon quadrature) provides an accurate approximation to @math using fewer and fewer evaluations of the function @math as @math . While these ideas are important, they are not directly applicable in the case of FIOs. The reasons are threefold. First, we make no notable assumption on the support of the function to which the operator is applied, meaning that the oscillations of @math may be on the same scale as those of the exponential @math . Second, the phase does not in general have a simple formula that would lend itself to precomputations. And third, Filon-type quadratures do not address the problem of simplifying computations of such oscillatory integrals at once (i.e. computing a family of integrals indexed by @math in the case of FIOs). | {
"cite_N": [
"@cite_14"
],
"mid": [
"2063399239"
],
"abstract": [
"We introduce a general purpose algorithm for rapidly computing certain types of oscillatory integrals which frequently arise in problems connected to wave propagation, general hyperbolic equations, and curvilinear tomography. The problem is to numerically evaluate a so-called Fourier integral operator (FIO) of the form @math at points given on a Cartesian grid. Here, @math is a frequency variable, @math is the Fourier transform of the input @math , @math is an amplitude, and @math is a phase function, which is typically as large as @math ; hence the integral is highly oscillatory. Because a FIO is a dense matrix, a naive matrix vector product with an input given on a Cartesian grid of size @math by @math would require @math operations. This paper develops a new numerical algorithm which requires @math operations and as low as @math in storage space (the constants in front of these estimates are small). It operates by localizing the integral over polar wedges with small angular aperture in the frequency plane. On each wedge, the algorithm factorizes the kernel @math into two components: (1) a diffeomorphism which is handled by means of a nonuniform FFT and (2) a residual factor which is handled by numerical separation of the spatial and frequency variables. The key to the complexity and accuracy estimates is the fact that the separation rank of the residual kernel is provably independent of the problem size. Several numerical examples demonstrate the numerical accuracy and low computational complexity of the proposed methodology. We also discuss the potential of our ideas for various applications such as reflection seismology."
]
} |
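The complexity argument above rests on the residual kernel having a small separation rank. The generic phenomenon behind such statements can be seen on the elementary oscillatory kernel exp(2πi x k): when the product of the spatial extent and the frequency extent stays bounded, the sampled kernel matrix has rapidly decaying singular values. The sketch below only illustrates this generic fact — the extents, grid sizes, and tolerance are arbitrary choices, and this is not the paper's actual residual kernel.

```python
import numpy as np

def eps_rank(x_extent, k_extent, n=200, eps=1e-7):
    """Numerical (epsilon-) rank of exp(2*pi*i*x*k) sampled on an n-by-n grid
    where x spans [0, x_extent] and k spans [0, k_extent]."""
    x = np.linspace(0.0, x_extent, n)
    k = np.linspace(0.0, k_extent, n)
    K = np.exp(2j * np.pi * np.outer(x, k))
    s = np.linalg.svd(K, compute_uv=False)
    return int(np.sum(s > eps * s[0]))

if __name__ == "__main__":
    # Keep the product x_extent * k_extent fixed at 1: the rank stays small
    # even as the frequency band grows.
    for k_extent in (16.0, 64.0, 256.0):
        r = eps_rank(1.0 / k_extent, k_extent)
        print(f"x extent {1.0 / k_extent:.4f}, k extent {k_extent:6.0f}  ->  eps-rank {r}")
    # A block where the product is 16 instead of 1 has a much larger rank.
    print("x extent 0.2500, k extent     64  ->  eps-rank", eps_rank(0.25, 64.0))
```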
cs0610046 | 2951738093 | The running maximum-minimum (max-min) filter computes the maxima and minima over running windows of size w. This filter has numerous applications in signal processing and time series analysis. We present an easy-to-implement online algorithm requiring no more than 3 comparisons per element, in the worst case. Comparatively, no algorithm is known to compute the running maximum (or minimum) filter in 1.5 comparisons per element, in the worst case. Our algorithm has reduced latency and memory usage. | @cite_7 presented the filter algorithm requiring @math comparisons per element in the worst case and an average-case performance over independent and identically distributed (i.i.d.) noise data of slightly more than 3 comparisons per element. @cite_6 presented a better alternative: the filter algorithm was shown to average 3 comparisons per element for i.i.d. input signals and @cite_4 presented an asynchronous implementation. | {
"cite_N": [
"@cite_4",
"@cite_6",
"@cite_7"
],
"mid": [
"2039851473",
"2172028873",
"2132162087"
],
"abstract": [
"This correspondence presents an average case denoising performance analysis for SP, CoSaMP, and IHT algorithms. This analysis considers the recovery of a noisy signal, with the assumptions that it is corrupted by an additive random zero-mean white Gaussian noise and has a K-sparse representation with respect to a known dictionary D . The proposed analysis is based on the RIP, establishing a near-oracle performance guarantee for each of these algorithms. Beyond bounds for the reconstruction error that hold with high probability, in this work we also provide a bound for the average error.",
"We analyze a sublinear RA@?SFA (randomized algorithm for Sparse Fourier analysis) that finds a near-optimal B-term Sparse representation R for a given discrete signal S of length N, in time and space poly(B,log(N)), following the approach given in [A.C. Gilbert, S. Guha, P. Indyk, S. Muthukrishnan, M. Strauss, Near-Optimal Sparse Fourier Representations via Sampling, STOC, 2002]. Its time cost poly(log(N)) should be compared with the superlinear @W(NlogN) time requirement of the Fast Fourier Transform (FFT). A straightforward implementation of the RA@?SFA, as presented in the theoretical paper [A.C. Gilbert, S. Guha, P. Indyk, S. Muthukrishnan, M. Strauss, Near-Optimal Sparse Fourier Representations via Sampling, STOC, 2002], turns out to be very slow in practice. Our main result is a greatly improved and practical RA@?SFA. We introduce several new ideas and techniques that speed up the algorithm. Both rigorous and heuristic arguments for parameter choices are presented. Our RA@?SFA constructs, with probability at least 1-@d, a near-optimal B-term representation R in time poly(B)log(N)log(1 @d) @e^2log(M) such that @?S-R@?\"2^2=<(1+@e)@?S-R\"o\"p\"t@?\"2^2. Furthermore, this RA@?SFA implementation already beats the FFTW for not unreasonably large N. We extend the algorithm to higher dimensional cases both theoretically and numerically. The crossover point lies at N 70,000 in one dimension, and at N 900 for data on a NxN grid in two dimensions for small B signals where there is noise.",
"We introduce efficient margin-based algorithms for selective sampling and filtering in binary classification tasks. Experiments on real-world textual data reveal that our algorithms perform significantly better than popular and similarly efficient competitors. Using the so-called Mammen-Tsybakov low noise condition to parametrize the instance distribution, and assuming linear label noise, we show bounds on the convergence rate to the Bayes risk of a weaker adaptive variant of our selective sampler. Our analysis reveals that, excluding logarithmic factors, the average risk of this adaptive sampler converges to the Bayes risk at rate N ?(1+?)(2+?) 2(3+?) where N denotes the number of queried labels, and ?>0 is the exponent in the low noise condition. For all @math this convergence rate is asymptotically faster than the rate N ?(1+?) (2+?) achieved by the fully supervised version of the base selective sampler, which queries all labels. Moreover, for ??? (hard margin condition) the gap between the semi- and fully-supervised rates becomes exponential."
]
} |
cs0610046 | 2951738093 | The running maximum-minimum (max-min) filter computes the maxima and minima over running windows of size w. This filter has numerous applications in signal processing and time series analysis. We present an easy-to-implement online algorithm requiring no more than 3 comparisons per element, in the worst case. Comparatively, no algorithm is known to compute the running maximum (or minimum) filter in 1.5 comparisons per element, in the worst case. Our algorithm has reduced latency and memory usage. | @cite_2 proposed a fast algorithm based on anchors. They do not improve on the number of comparisons per element. For window sizes ranging from 10 to 30 and data values ranging from 0 to 255, their implementation has a running time lower than their implementation of an earlier algorithm by as much as 30%; it outperforms our implementation by as much as 15% for window sizes larger than 15, but is outperformed similarly for smaller window sizes, and both are comparable for a window size equal to 15. The Droogenbroeck-Buckley filter pseudocode alone requires a full page compared to a few lines for our algorithm. Their experiments did not consider window sizes beyond @math nor arbitrary floating point data values. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2113622448"
],
"abstract": [
"We present the design of a sample sort algorithm for manycore GPUs. Despite being one of the most efficient comparison-based sorting algorithms for distributed memory architectures its performance on GPUs was previously unknown. For uniformly distributed keys our sample sort is at least 25 and on average 68 faster than the best comparison-based sorting algorithm, GPU Thrust merge sort, and on average more than 2 times faster than GPU quicksort. Moreover, for 64-bit integer keys it is at least 63 and on average 2 times faster than the highly optimized GPU Thrust radix sort that directly manipulates the binary representation of keys. Our implementation is robust to different distributions and entropy levels of keys and scales almost linearly with the input size. These results indicate that multi-way techniques in general and sample sort in particular achieve substantially better performance than two-way merge sort and quicksort."
]
} |
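For readers who want to experiment with running max–min filters, the sketch below shows one standard way to obtain amortized O(1) work per element: a pair of monotonic deques holding candidate indices. This is a generic implementation for illustration only — it is not claimed to be the algorithm of the paper above, and it makes no attempt to match its worst-case comparison bound.

```python
from collections import deque

def running_max_min(values, w):
    """Sliding-window maxima and minima over windows of size w, using monotonic
    deques of indices; each element is pushed and popped at most once per deque."""
    max_dq, min_dq = deque(), deque()
    out = []
    for i, v in enumerate(values):
        while max_dq and values[max_dq[-1]] <= v:   # drop dominated maximum candidates
            max_dq.pop()
        max_dq.append(i)
        while min_dq and values[min_dq[-1]] >= v:   # drop dominated minimum candidates
            min_dq.pop()
        min_dq.append(i)
        if max_dq[0] <= i - w:                      # evict indices that left the window
            max_dq.popleft()
        if min_dq[0] <= i - w:
            min_dq.popleft()
        if i >= w - 1:                              # window [i-w+1, i] is complete
            out.append((values[max_dq[0]], values[min_dq[0]]))
    return out

if __name__ == "__main__":
    data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
    fast = running_max_min(data, w=3)
    brute = [(max(data[i - 2:i + 1]), min(data[i - 2:i + 1])) for i in range(2, len(data))]
    print(fast)
    print("matches brute force:", fast == brute)
```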
cs0610105 | 1616788770 | We present a new class of statistical de-anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information. | Unlike statistical databases @cite_22 @cite_4 @cite_10 @cite_17 @cite_23 , micro-data datasets contain actual records of individuals even after anonymization. A popular approach to micro-data privacy is @math -anonymity @cite_18 @cite_8 @cite_20 . The data publisher must determine in advance which of the attributes are available to the adversary (these are called quasi-identifiers''), and which are the sensitive attributes'' to be protected. @math -anonymization ensures that each quasi-identifier'' tuple occurs in at least @math records in the anonymized database. It is well-known that @math -anonymity does not guarantee privacy, because the values of sensitive attributes associated with a given quasi-identifier may not be sufficiently diverse @cite_19 @cite_5 or because the adversary has access to background knowledge @cite_19 . Mere knowledge of the @math -anonymization algorithm may be sufficient to break privacy @cite_2 . Furthermore, @math -anonymization completely fails on high-dimensional datasets @cite_6 , such as the Netflix Prize dataset and most real-world datasets of individual recommendations and purchases. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_6",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"2120263102",
"1982183556",
"2135930857",
"2164649498",
"2136114025",
"1518033500",
"2163882872",
"1992286709",
"1502916507",
"1966406552",
"2119047901",
"1606251440"
],
"abstract": [
"Publishing data for analysis from a table containing personal records, while maintaining individual privacy, is a problem of increasing importance today. The traditional approach of de-identifying records is to remove identifying fields such as social security number, name etc. However, recent research has shown that a large fraction of the US population can be identified using non-key attributes (called quasi-identifiers) such as date of birth, gender, and zip code [15]. Sweeney [16] proposed the k-anonymity model for privacy where non-key attributes that leak information are suppressed or generalized so that, for every record in the modified table, there are at least k−1 other records having exactly the same values for quasi-identifiers. We propose a new method for anonymizing data records, where quasi-identifiers of data records are first clustered and then cluster centers are published. To ensure privacy of the data records, we impose the constraint that each cluster must contain no fewer than a pre-specified number of data records. This technique is more general since we have a much larger choice for cluster centers than k-Anonymity. In many cases, it lets us release a lot more information without compromising privacy. We also provide constant-factor approximation algorithms to come up with such a clustering. This is the first set of algorithms for the anonymization problem where the performance is independent of the anonymity parameter k. We further observe that a few outlier points can significantly increase the cost of anonymization. Hence, we extend our algorithms to allow an e fraction of points to remain unclustered, i.e., deleted from the anonymized publication. Thus, by not releasing a small fraction of the database records, we can ensure that the data published for analysis has less distortion and hence is more useful. Our approximation algorithms for new clustering objectives are of independent interest and could be applicable in other clustering scenarios as well.",
"Re-identification is a major privacy threat to public datasets containing individual records. Many privacy protection algorithms rely on generalization and suppression of \"quasi-identifier\" attributes such as ZIP code and birthdate. Their objective is usually syntactic sanitization: for example, k-anonymity requires that each \"quasi-identifier\" tuple appear in at least k records, while l-diversity requires that the distribution of sensitive attributes for each quasi-identifier have high entropy. The utility of sanitized data is also measured syntactically, by the number of generalization steps applied or the number of records with the same quasi-identifier. In this paper, we ask whether generalization and suppression of quasi-identifiers offer any benefits over trivial sanitization which simply separates quasi-identifiers from sensitive attributes. Previous work showed that k-anonymous databases can be useful for data mining, but k-anonymization does not guarantee any privacy. By contrast, we measure the tradeoff between privacy (how much can the adversary learn from the sanitized records?) and utility, measured as accuracy of data-mining algorithms executed on the same sanitized records. For our experimental evaluation, we use the same datasets from the UCI machine learning repository as were used in previous research on generalization and suppression. Our results demonstrate that even modest privacy gains require almost complete destruction of the data-mining utility. In most cases, trivial sanitization provides equivalent utility and better privacy than k-anonymity, l-diversity, and similar methods based on generalization and suppression.",
"We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.",
"The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain “identifying” attributes) contains at least k records. Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure. The notion of l-diversity has been proposed to address this; l-diversity requires that each equivalence class has at least l well-represented (in Section 2) values for each sensitive attribute. In this paper, we show that l-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. Motivated by these limitations, we propose a new notion of privacy called “closeness.” We first present the base model t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). We then propose a more flexible privacy model called (n,t)-closeness that offers higher utility. We describe our desiderata for designing a distance measure between two probability distributions and present two distance measures. We discuss the rationale for using closeness as a privacy measure and illustrate its advantages through examples and experiments.",
"The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain \"identifying\" attributes) contains at least k records. Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure. The notion of l-diversity has been proposed to address this; l-diversity requires that each equivalence class has at least l well-represented values for each sensitive attribute. In this paper we show that l-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. We propose a novel privacy notion called t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). We choose to use the earth mover distance measure for our t-closeness requirement. We discuss the rationale for t-closeness and illustrate its advantages through examples and experiments.",
"We study the distributed privacy preserving data collection problem: an untrusted data collector (e.g., a medical research institute) wishes to collect data (e.g., medical records) from a group of respondents (e.g., patients). Each respondent owns a multi-attributed record which contains both non-sensitive (e.g., quasi-identifiers) and sensitive information (e.g., a particular disease), and submits it to the data collector. Assuming T is the table formed by all the respondent data records, we say that the data collection process is privacy preserving if it allows the data collector to obtain a k-anonymized or l-diversified version of T without revealing the original records to the adversary. We propose a distributed data collection protocol that outputs an anonymized table by generalization of quasi-identifier attributes. The protocol employs cryptographic techniques such as homomorphic encryption, private information retrieval and secure multiparty computation to ensure the privacy goal in the process of data collection. Meanwhile, the protocol is designed to leak limited but noncritical information to achieve practicability and efficiency. Experiments show that the utility of the anonymized table derived by our protocol is in par with the utility achieved by traditional anonymization techniques.",
"We identify proximity breach as a privacy threat specific to numerical sensitive attributes in anonymized data publication. Such breach occurs when an adversary concludes with high confidence that the sensitive value of a victim individual must fall in a short interval --- even though the adversary may have low confidence about the victim's actual value. None of the existing anonymization principles (e.g., k-anonymity, l-diversity, etc.) can effectively prevent proximity breach. We remedy the problem by introducing a novel principle called (e, m)-anonymity. Intuitively, the principle demands that, given a QI-group G, for every sensitive value x in G, at most 1 m of the tuples in G can have sensitive values \"similar\" to x, where the similarity is controlled by e. We provide a careful analytical study of the theoretical characteristics of (e, m)-anonymity, and the corresponding generalization algorithm. Our findings are verified by experiments with real data.",
"It is not uncommon in the data anonymization literature to oppose the \"old\" @math k -anonymity model to the \"new\" differential privacy model, which offers more robust privacy guarantees. Yet, it is often disregarded that the utility of the anonymized results provided by differential privacy is quite limited, due to the amount of noise that needs to be added to the output, or because utility can only be guaranteed for a restricted type of queries. This is in contrast with @math k -anonymity mechanisms, which make no assumptions on the uses of anonymized data while focusing on preserving data utility from a general perspective. In this paper, we show that a synergy between differential privacy and @math k -anonymity can be found: @math k -anonymity can help improving the utility of differentially private responses to arbitrary queries. We devote special attention to the utility improvement of differentially private published data sets. Specifically, we show that the amount of noise required to fulfill @math ? -differential privacy can be reduced if noise is added to a @math k -anonymous version of the data set, where @math k -anonymity is reached through a specially designed microaggregation of all attributes. As a result of noise reduction, the general analytical utility of the anonymized output is increased. The theoretical benefits of our proposal are illustrated in a practical setting with an empirical evaluation on three data sets.",
"The nearestor near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the of dimensionality.\" That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should su ce for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points Supported by NAVY N00014-96-1-1221 grant and NSF Grant IIS-9811904. Supported by Stanford Graduate Fellowship and NSF NYI Award CCR-9357849. Supported by ARO MURI Grant DAAH04-96-1-0007, NSF Grant IIS-9811904, and NSF Young Investigator Award CCR9357849, with matching funds from IBM, Mitsubishi, Schlumberger Foundation, Shell Foundation, and Xerox Corporation. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999. from the database so as to ensure that the probability of collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives signi cant improvement in running time over other methods for searching in highdimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50).",
"Micro-data protection is a hot topic in the field of Statistical Disclosure Control (SDC), that has gained special interest after the disclosure of 658000 queries by the AOL search engine in August 2006. Many algorithms, methods and properties have been proposed to deal with micro-data disclosure, p-Sensitive k-anonymity has been recently defined as a sophistication of k-anonymity. This new property requires that there be at least p different values for each confidential attribute within the records sharing a combination of key attributes. Like k-anonymity, the algorithm originally proposed to achieve this property was based on generalisations and suppressions; when data sets are numerical this has several data utility problems, namely turning numerical key attributes into categorical, injecting new categories, injecting missing data, and so on. In this article, we recall the foundational concepts of micro-aggregation, k-anonymity and p-sensitive k-anonymity. We show that k-anonymity and p-sensitive k-anonymity can be achieved in numerical data sets by means of micro-aggregation heuristics properly adapted to deal with this task. In addition, we present and evaluate two heuristics for p-sensitive k-anonymity which, being based on micro-aggregation, overcome most of the drawbacks resulting from the generalisation and suppression method.",
"Often a data holder, such as a hospital or bank, needs to share person-specific records in such a way that the identities of the individuals who are the subjects of the data cannot be determined. One way to achieve this is to have the released records adhere to k- anonymity, which means each released record has at least (k-1) other records in the release whose values are indistinct over those fields that appear in external data. So, k- anonymity provides privacy protection by guaranteeing that each released record will relate to at least k individuals even if the records are directly linked to external information. This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity. Generalization involves replacing (or recoding) a value with a less specific but semantically consistent value. Suppression involves not releasing a value at all. The Preferred Minimal Generalization Algorithm (MinGen), which is a theoretical algorithm presented herein, combines these techniques to provide k-anonymity protection with minimal distortion. The real-world algorithms Datafly and µ-Argus are compared to MinGen. Both Datafly and µ-Argus use heuristics to make approximations, and so, they do not always yield optimal results. It is shown that Datafly can over distort data and µ-Argus can additionally fail to provide adequate protection.",
"In recent years, the wide availability of personal data has made the problem of privacy preserving data mining an important one. A number of methods have recently been proposed for privacy preserving data mining of multidimensional data records. One of the methods for privacy preserving data mining is that of anonymization, in which a record is released only if it is indistinguishable from k other entities in the data. We note that methods such as k-anonymity are highly dependent upon spatial locality in order to effectively implement the technique in a statistically robust way. In high dimensional space the data becomes sparse, and the concept of spatial locality is no longer easy to define from an application point of view. In this paper, we view the k-anonymization problem from the perspective of inference attacks over all possible combinations of attributes. We show that when the data contains a large number of attributes which may be considered quasi-identifiers, it becomes difficult to anonymize the data without an unacceptably high amount of information loss. This is because an exponential number of combinations of dimensions can be used to make precise inference attacks, even when individual attributes are partially specified within a range. We provide an analysis of the effect of dimensionality on k-anonymity methods. We conclude that when a data set contains a large number of attributes which are open to inference attacks, we are faced with a choice of either completely suppressing most of the data or losing the desired level of anonymity. Thus, this paper shows that the curse of high dimensionality also applies to the problem of privacy preserving data mining."
]
} |
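The record above reaches k-anonymity (and its p-sensitive refinement) in numerical data sets through micro-aggregation: records are grouped into clusters of at least k similar records and their key-attribute values are replaced by the cluster centroid, so every released combination of key attributes is shared by at least k records. The sketch below is only a minimal illustration of that idea; the fixed-size grouping over sorted records, the choice k=3 and the toy data are assumptions for demonstration, not one of the heuristics evaluated in the cited work.

```python
from statistics import mean

def microaggregate(records, key_idx, k=3):
    """Replace key-attribute values by group centroids so that each
    combination of key attributes is shared by at least k records."""
    order = sorted(range(len(records)), key=lambda i: [records[i][j] for j in key_idx])
    out = [list(r) for r in records]
    # Walk the sorted records in blocks of k; fold a short tail into the previous block.
    blocks = [order[i:i + k] for i in range(0, len(order), k)]
    if len(blocks) > 1 and len(blocks[-1]) < k:
        blocks[-2].extend(blocks.pop())
    for block in blocks:
        for j in key_idx:
            centroid = mean(records[i][j] for i in block)
            for i in block:
                out[i][j] = centroid
    return out

# Toy data: (age, zip-code-as-number, sensitive value left untouched)
data = [(34, 8001, "flu"), (36, 8002, "cold"), (35, 8003, "flu"),
        (52, 9001, "asthma"), (55, 9004, "flu"), (51, 9002, "cold")]
print(microaggregate(data, key_idx=[0, 1], k=3))
```

With the toy data this collapses the six records into two groups of three, so any key-attribute combination appearing in the output matches at least three individuals.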
cs0610105 | 1616788770 | We present a new class of statistical de-anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information. | Our main case study is the Netflix Prize dataset of movie ratings. We are aware of only one previous paper that considered privacy of movie ratings. In collaboration with the MovieLens recommendation service, Frankowski correlated public mentions of movies in the MovieLens discussion forum with the users' movie rating histories in the MovieLens dataset @cite_7 . The algorithm uses the entire public record as the background knowledge (29 ratings per user, on average), and is not robust if this knowledge is imprecise (e.g., if the user publicly mentioned movies which he did not rate). | {
"cite_N": [
"@cite_7"
],
"mid": [
"2135930857"
],
"abstract": [
"We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information."
]
} |
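The de-anonymization attack summarized in the record above reduces, at its core, to scoring every released record against the adversary's partial knowledge and accepting a candidate only when it clearly stands out from the rest. The snippet below is a toy version of that matching step under assumed parameters: the one-step rating tolerance, the 1.5x dominance test and the tiny `released` dictionary are illustrative choices, not the scoring function of the paper.

```python
def score(aux, record, rating_tol=1):
    """Count auxiliary (movie, rating) pairs that approximately match a record."""
    return sum(1 for movie, rating in aux.items()
               if movie in record and abs(record[movie] - rating) <= rating_tol)

def best_match(aux, dataset, gap=1.5):
    """Return the record whose score clearly dominates the runner-up, else None."""
    scored = sorted(((score(aux, rec), rid) for rid, rec in dataset.items()), reverse=True)
    (s1, rid1), (s2, _) = scored[0], scored[1]
    return rid1 if s1 >= gap * max(s2, 1) else None

released = {"u1": {"A": 5, "B": 1, "C": 4}, "u2": {"A": 2, "C": 4}, "u3": {"B": 1, "D": 3}}
print(best_match({"A": 5, "B": 2}, released))  # -> "u1"
```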
cs0610137 | 2950868392 | The Software Transactional Memory (STM) model is an original approach for controlling concurrent accesses to resources without the need for explicit lock-based synchronization mechanisms. A key feature of STM is to provide a way to group sequences of read and write actions inside atomic blocks, similar to database transactions, whose whole effect should occur atomically. In this paper, we investigate STM from a process algebra perspective and define an extension of asynchronous CCS with atomic blocks of actions. Our goal is not only to set a formal ground for reasoning on STM implementations but also to understand how this model fits with other concurrency control mechanisms. We also view this calculus as a test bed for extending process calculi with atomic transactions. This is an interesting direction for investigation since, for the most part, actual works that mix transactions with process calculi consider compensating transactions, a model that lacks all the well-known ACID properties. We show that the addition of atomic transactions results in a very expressive calculus, enough to easily encode other concurrent primitives such as guarded choice and multiset-synchronization (a la join-calculus). The correctness of our encodings is proved using a suitable notion of bisimulation equivalence. The equivalence is then applied to prove interesting laws of transactions and to obtain a simple normal form for transactions. | Linked to the upsurge of works on Web Services (and on long running Web transactions), a larger body of works is concerned with formalizing . In this context, each transactive block of actions is associated with a compensation (code) that has to be run if a failure is detected. The purpose of compensation is to undo most of the visible actions that have been performed and, in this case, atomicity, isolation and durability are obviously violated. We give a brief survey of works that formalize compensable processes using process calculi. These works are of two types: (1) @cite_19 @cite_4 @cite_2 , which are extensions of process calculi (like @math or join-calculus) for describing transactional choreographies where composition takes place dynamically and where each service describes its possible interactions and compensations; (2) @cite_22 @cite_17 @cite_11 @cite_20 , where ad hoc process algebras are designed from scratch to describe the possible flow of control among services. These calculi are oriented towards the orchestration of services and service failures. This second approach is also followed in @cite_21 @cite_12 where two frameworks for composing transactional services are presented. | {
"cite_N": [
"@cite_11",
"@cite_4",
"@cite_22",
"@cite_21",
"@cite_19",
"@cite_2",
"@cite_12",
"@cite_20",
"@cite_17"
],
"mid": [
"1974168649",
"2460103410",
"2971432649",
"1983914828",
"1989393769",
"1587877137",
"2160436229",
"2100515808",
"2126170153"
],
"abstract": [
"A key aspect when aggregating business processes and web services is to assure transactional properties of process executions. Since transactions in this context may require long periods of time to complete, traditional mechanisms for guaranteeing atomicity are not always appropriate. Generally the concept of long running transactions relies on a weaker notion of atomicity based on compensations. For this reason, programming languages for service composition cannot leave out two key aspects: compensations, i.e. ad hoc activities that can undo the effects of a process that fails to complete, and transactional boundaries to delimit the scope of a transactional flow. This paper presents a hierarchy of transactional calculi with increasing expressiveness. We start from a very small language in which activities can only be composed sequentially. Then, we progressively introduce parallel composition, nesting, programmable compensations and exception handling. A running example illustrates the main features of each calculus in the hierarchy.",
"A long-running transaction is an interactive component of a distributed system which must be executed as if it were a single atomic action. In principle, it should not be interrupted or fail in the middle, and it must not be interleaved with other atomic actions of other concurrently executing components of the system. In practice, the illusion of atomicity for a long-running transaction is achieved with the aid of compensation actions supplied by the original programmer: because the transaction is interactive, familiar automatic techniques of check-pointing and rollback are no longer adequate. This paper constructs a model of long-running transactions within the framework of the CSP process algebra, showing how the compensations are orchestrated to achieve the illusion of atomicity. It introduces a method for declaring that a process is a transaction, and for declaring a compensation for it in case it needs to be rolled back after it has committed. The familiar operator of sequential composition is redefined to ensure that all necessary compensations will be called in the right order if a later failure makes this necessary. The techniques are designed to work well in a highly concurrent and distributed setting. In addition we define an angelic choice operation, implemented by speculative execution of alternatives; its judicious use can improve responsiveness of a system in the face of the unpredictable latencies of remote communication. Many of the familiar properties of process algebra are preserved by these new definitions, on reasonable assumptions of the correctness and independence of the programmer-declared compensations.",
"Abstract In this paper, we present a unifying analysis for redundancy systems with cancel-on-start ( c . o . s . ) and cancel-on-complete ( c . o . c . ) with exponentially distributed service requirements. With c . o . s . ( c . o . c . ) all redundant copies are removed as soon as one of the copies starts (completes) service. As a consequence, c . o . s . does not waste any computing resources, as opposed to c . o . c . We show that the c . o . s . model is equivalent to a queueing system with multi-type jobs and servers, which was analyzed in , (2012), and show that c . o . c . (under the assumption of i.i.d. copies) can be analyzed by a generalization of , (2012) where state-dependent departure rates are permitted. This allows us to show that the stationary distribution for both the c . o . c . and c . o . s . models has a product form. We give a detailed first-time analysis for c . o . s and derive a closed form expression for important metrics like mean number of jobs in the system, and probability of waiting. We also note that the c . o . s . model is equivalent to Join-Shortest-Work queue with power of d (JSW( d )). In the latter, an incoming job is dispatched to the server with smallest workload among d randomly chosen ones. Thus, all our results apply mutatis-mutandis to JSW( d ). Comparing the performance of c . o . s . with that of c . o . c . with i.i.d. copies gives the unexpected conclusion (since c . o . s . does not waste any resources) that c . o . s . is worse in terms of mean number of jobs. As part of ancillary results, we illustrate that this is primarily due to the assumption of i.i.d. copies in case of c . o . c . (together with exponentially distributed requirements) and that such assumptions might lead to conclusions that are qualitatively different from that observed in practice.",
"In this paper we recast the classical Darondeau---Degano's causal semantics of concurrency in a coalgebraic setting, where we derive a compact model. Our construction is inspired by the one of Montanari and Pistore yielding causal automata, but we show that it is instance of an existing categorical framework for modeling the semantics of nominal calculi, whose relevance is further demonstrated. The key idea is to represent events as names, and the occurrence of a new event as name generation. We model causal semantics as a coalgebra over a presheaf, along the lines of the Fiore---Turi approach to the semantics of nominal calculi. More specifically, we take a suitable category of finite posets, representing causal relations over events, and we equip it with an endofunctor that allocates new events and relates them to their causes. Presheaves over this category express the relationship between processes and causal relations among the processes' events. We use the allocation operator to define a category of well-behaved coalgebras: it models the occurrence of a new event along each transition. Then we turn the causal transition relation into a coalgebra in this category, where labels only exhibit maximal events with respect to the source states' poset, and we show that its bisimilarity is essentially Darondeau---Degano's strong causal bisimilarity. This coalgebra is still infinite-state, but we exploit the equivalence between coalgebras over a class of presheaves and History Dependent automata to derive a compact representation, where states only retain the poset of the most recent events for each atomic subprocess, and are isomorphic up to order-preserving permutations. Remarkably, this reduction of states is automatically performed along the equivalence.",
"In this paper we compare the expressive power of elementary representation formats for vague, incomplete or conflicting information. These include Boolean valuation pairs introduced by Lawry and Gonzalez-Rodriguez, orthopairs of sets of variables, Boolean possibility and necessity measures, three-valued valuations, supervaluations. We make explicit their connections with strong Kleene logic and with Belnap logic of conflicting information. The formal similarities between 3-valued approaches to vagueness and formalisms that handle incomplete information often lead to a confusion between degrees of truth and degrees of uncertainty. Yet there are important differences that appear at the interpretive level: while truth-functional logics of vagueness are accepted by a part of the scientific community (even if questioned by supervaluationists), the truth-functionality assumption of three-valued calculi for handling incomplete information looks questionable, compared to the non-truth-functional approaches based on Boolean possibility-necessity pairs. This paper aims to clarify the similarities and differences between the two situations. We also study to what extent operations for comparing and merging information items in the form of orthopairs can be expressed by means of operations on valuation pairs, three-valued valuations and underlying possibility distributions. We explore the connections between several representations of imperfect information.In each case we compare the expressive power of these formalisms.In each case we study how to express aggregation operations.We demonstrate the formal similarities among these approaches.We point out the differences in interpretations between these approaches.",
"We study long-running transactions in open component-based distributed applications, such as Web Services platforms. Long-running transactions describe time-extensive activities that involve several distributed components. Henceforth, in case of failure, it is usually not possible to restore the initial state, and firing a compensation process is preferable. Despite the interest of such transactional mechanisms, a formal modeling of them is still lacking. In this paper we address this issue by designing an extension of the asynchronous π-calculus with long-running transactions (and sequences) – the πt -calculus. We study the practice of πt-calculus, by discussing few paradigmatic examples, and its theory, by defining a semantics and providing a correct encoding of πt-calculus into asynchronous π-calculus.",
"We consider a new, session-based workload for measuring web server performance. We define a session as a sequence of client's individual requests. Using a simulation model, we show that an overloaded web server can experience a severe loss of throughput measured as a number of completed sessions compared against the server throughput measured in requests per second. Moreover, statistical analysis of completed sessions reveals that the overloaded web server discriminates against longer sessions. For e-commerce retail sites, longer sessions are typically the ones that would result in purchases, so they are precisely the ones for which the companies want to guarantee completion. To improve Web QoS for commercial Web servers, we introduce a session-based admission control (SBAC) to prevent a web server from becoming overloaded and to ensure that longer sessions can be completed. We show that a Web server augmented with the admission control mechanism is able to provide a fair guarantee of completion, for any accepted session, independent of a session length. This provides a predictable and controllable platform for web applications and is a critical requirement for any e-business. Additionally, we propose two new adaptive admission control strategies, hybrid and predictive, aiming to optimize the performance of SBAC mechanism. These new adaptive strategies are based on a self-tunable admission control function, which adjusts itself accordingly to variations in traffic loads.",
"We present a novel reasoning calculus for the description logic SHOIQ+--a knowledge representation formalism with applications in areas such as the SemanticWeb. Unnecessary nondeterminism and the construction of large models are two primary sources of inefficiency in the tableau-based reasoning calculi used in state-of-the-art reasoners. In order to reduce nondeterminism, we base our calculus on hypertableau and hyperresolution calculi, which we extend with a blocking condition to ensure termination. In order to reduce the size of the constructed models, we introduce anywhere pairwise blocking. We also present an improved nominal introduction rule that ensures termination in the presence of nominals, inverse roles, and number restrictions--a combination of DL constructs that has proven notoriously difficult to handle. Our implementation shows significant performance improvements over state-of-the-art reasoners on several well-known ontologies.",
"The authors show how to design truthful (dominant strategy) mechanisms for several combinatorial problems where each agent's secret data is naturally expressed by a single positive real number. The goal of the mechanisms we consider is to allocate loads placed on the agents, and an agent's secret data is the cost she incurs per unit load. We give an exact characterization for the algorithms that can be used to design truthful mechanisms for such load balancing problems using appropriate side payments. We use our characterization to design polynomial time truthful mechanisms for several problems in combinatorial optimization to which the celebrated VCG mechanism does not apply. For scheduling related parallel machines (Q spl par C sub max ), we give a 3-approximation mechanism based on randomized rounding of the optimal fractional solution. This problem is NP-complete, and the standard approximation algorithms (greedy load-balancing or the PTAS) cannot be used in truthful mechanisms. We show our mechanism to be frugal, in that the total payment needed is only a logarithmic factor more than the actual costs incurred by the machines, unless one machine dominates the total processing power. We also give truthful mechanisms for maximum flow, Q spl par spl Sigma C sub j (scheduling related machines to minimize the sum of completion times), optimizing an affine function over a fixed set, and special cases of uncapacitated facility location. In addition, for Q spl par spl Sigma w sub j C sub j (minimizing the weighted sum of completion times), we prove a lower bound of 2 spl radic 3 for the best approximation ratio achievable by truthful mechanism."
]
} |
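The atomic blocks that the record above studies at the process-calculus level correspond, on the implementation side, to transactions that buffer their reads and writes and either commit as a whole or abort and rerun. The toy optimistic STM below is only meant to make that behaviour concrete; the single commit lock, per-cell version counters and retry loop are an assumed minimal design, not the calculus of the paper nor any particular STM library.

```python
import threading

class TVar:
    """A shared transactional cell with a version counter."""
    def __init__(self, value):
        self.value, self.version = value, 0

_commit_lock = threading.Lock()

def atomically(block):
    """Run `block(read, write)` optimistically; retry if a read was invalidated."""
    while True:
        reads, writes = {}, {}
        def read(tv):
            if tv in writes:
                return writes[tv]
            reads.setdefault(tv, tv.version)   # remember the version we read
            return tv.value
        def write(tv, v):
            writes[tv] = v                     # buffer the write until commit
        result = block(read, write)
        with _commit_lock:
            if all(tv.version == ver for tv, ver in reads.items()):
                for tv, v in writes.items():
                    tv.value, tv.version = v, tv.version + 1
                return result
        # Validation failed: another transaction committed in between; retry.

a, b = TVar(10), TVar(0)
atomically(lambda read, write: (write(a, read(a) - 3), write(b, read(b) + 3)))
print(a.value, b.value)  # 7 3
```

The whole read/write sequence inside the lambda takes effect only at commit time, which is the atomicity guarantee the atomic blocks of the calculus are meant to capture.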
math-ph0609072 | 2949598775 | We study nodal sets for typical eigenfunctions of the Laplacian on the standard torus in 2 or more dimensions. Making use of the multiplicities in the spectrum of the Laplacian, we put a Gaussian measure on the eigenspaces and use it to average over the eigenspace. We consider a sequence of eigenvalues with multiplicity N tending to infinity. The quantity that we study is the Leray, or microcanonical, measure of the nodal set. We show that the expected value of the Leray measure of an eigenfunction is constant. Our main result is that the variance of Leray measure is asymptotically 1/(4 pi N), as N tends to infinity, at least in dimensions 2 and at least 5. | The study of nodal lines of random waves goes back to Longuet-Higgins @cite_7 @cite_13 who computed various statistics of nodal lines for Gaussian random waves in connection with the analysis of ocean waves. Berry @cite_10 suggested to model highly excited quantum states for classically chaotic systems by using various random wave models, and also computed fluctuations of various quantities in these models (see e.g. @cite_2 ). See also Zelditch @cite_11 . The idea of averaging over a single eigenspace in the presence of multiplicities appears in Bérard @cite_17 who computed the expected surface measure of the nodal set for eigenfunctions of the Laplacian on spheres. Neuheisel @cite_9 also worked on the sphere and studied the statistics of Leray measure. He gave an upper bound for the variance, which we believe is not sharp. | {
"cite_N": [
"@cite_13",
"@cite_7",
"@cite_9",
"@cite_17",
"@cite_2",
"@cite_10",
"@cite_11"
],
"mid": [
"2094161636",
"2510611518",
"1599439450",
"2002700608",
"2522434885",
"2044172468",
"1545211435"
],
"abstract": [
"For real (time-reversal symmetric) quantum billiards, the mean length L of nodal line is calculated for the nth mode (n>>1), with wavenumber k, using a Gaussian random wave model adapted locally to satisfy Dirichlet or Neumann boundary conditions. The leading term is of order k (i.e. √n), and the first (perimeter) correction, dominated by an unanticipated long-range boundary effect, is of order log k (i.e. log n), with the same sign (negative) for both boundary conditions. The leading-order state-to-state fluctuations δL are of order √log k. For the curvature κ of nodal lines, |κ| and √κ2 are of order k, but |κ|3 and higher moments diverge. For complex (e.g. Aharonov-Bohm) billiards, the mean number N of nodal points (phase singularities) in the mode has a leading term of order k2 (i.e. n), the perimeter correction (again a long-range effect) is of order klog k (i.e. √nlog n) (and positive, notwithstanding nodal depletion near the boundary) and the fluctuations δN are of order k√log k. Generalizations of the results for mixed (Robin) boundary conditions are stated.",
"Complex arithmetic random waves are stationary Gaussian complex-valued solutions of the Helmholtz equation on the two-dimensional flat torus. We use Wiener-It ^o chaotic expansions in order to derive a complete characterization of the second order high-energy behaviour of the total number of phase singularities of these functions. Our main result is that, while such random quantities verify a universal law of large numbers, they also exhibit non-universal and non-central second order fluctuations that are dictated by the arithmetic nature of the underlying spectral measures. Such fluctuations are qualitatively consistent with the cancellation phenomena predicted by Berry (2002) in the case of complex random waves on compact planar domains. Our results extend to the complex setting recent pathbreaking findings by Rudnick and Wigman (2008), Krishnapur, Kurlberg and Wigman (2013) and Marinucci, Peccati, Rossi and Wigman (2016). The exact asymptotic characterization of the variance is based on a fine analysis of the Kac-Rice kernel around the origin, as well as on a novel use of combinatorial moment formulae for controlling long-range weak correlations.",
"We study individual eigenstates of quantized area-preserving maps on the 2-torus which are classically chaotic. In order to analyze their semiclassical behavior, we use the Bargmann–Husimi representations for quantum states as well as their stellar parametrization, which encodes states through a minimal set of points in phase space (the constellation of zeros of the Husimi density). We rigorously prove that a semiclassical uniform distribution of Husimi densities on the torus entails a similar equidistribution for the corresponding constellations. We deduce from this property a universal behavior for the phase patterns of chaotic Bargmann eigenfunctions which is reminiscent of the WKB approximation for eigenstates of integrable systems (though in a weaker sense). In order to obtain more precise information on “chaotic eigenconstellations,” we then model their properties by ensembles of random states, generalizing former results on the 2-sphere to the torus geometry. This approach yields statistical predictions for the constellations which fit quite well the chaotic data. We finally observe that specific dynamical information, e.g., the presence of high peaks (like scars) in Husimi densities, can be recovered from the knowledge of a few long-wavelength Fourier coefficients, which therefore appear as valuable order parameters at the level of individual chaotic eigenfunctions.",
"We study the asymptotic statistical behavior of the 2-dimensional periodic Lorentz gas with an infinite horizon. We consider a particle moving freely in the plane with elastic reflections from a periodic set of fixed convex scatterers. We assume that the initial position of the particle in the phase space is random with uniform distribution with respect to the Liouville measure of the periodic problem. We are interested in the asymptotic statistical behavior of the particle displacement in the plane as the timet goes to infinity. We assume that the particle horizon is infinite, which means that the length of free motion of the particle is unbounded. Then we show that under some natural assumptions on the free motion vector autocorrelation function, the limit distribution of the particle displacement in the plane is Gaussian, but the normalization factor is (t logt)1 2 and nott1 2 as in the classical case. We find the covariance matrix of the limit distribution.",
"A central problem of random matrix theory is to understand the eigenvalues of spiked random matrix models, in which a prominent eigenvector is planted into a random matrix. These distributions form natural statistical models for principal component analysis (PCA) problems throughout the sciences. Baik, Ben Arous and Peche showed that the spiked Wishart ensemble exhibits a sharp phase transition asymptotically: when the signal strength is above a critical threshold, it is possible to detect the presence of a spike based on the top eigenvalue, and below the threshold the top eigenvalue provides no information. Such results form the basis of our understanding of when PCA can detect a low-rank signal in the presence of noise. However, not all the information about the spike is necessarily contained in the spectrum. We study the fundamental limitations of statistical methods, including non-spectral ones. Our results include: I) For the Gaussian Wigner ensemble, we show that PCA achieves the optimal detection threshold for a variety of benign priors for the spike. We extend previous work on the spherically symmetric and i.i.d. Rademacher priors through an elementary, unified analysis. II) For any non-Gaussian Wigner ensemble, we show that PCA is always suboptimal for detection. However, a variant of PCA achieves the optimal threshold (for benign priors) by pre-transforming the matrix entries according to a carefully designed function. This approach has been stated before, and we give a rigorous and general analysis. III) For both the Gaussian Wishart ensemble and various synchronization problems over groups, we show that inefficient procedures can work below the threshold where PCA succeeds, whereas no known efficient algorithm achieves this. This conjectural gap between what is statistically possible and what can be done efficiently remains open.",
"A fundamental result of free probability theory due to Voiculescu and subsequently refined by many authors states that conjugation by independent Haar-distributed random unitary matrices delivers asymptotic freeness. In this paper we exhibit many other systems of random unitary matrices that, when used for conjugation, lead to freeness. We do so by first proving a general result asserting “asymptotic liberation” under quite mild conditions, and then we explain how to specialize these general results in a striking way by exploiting Hadamard matrices. In particular, we recover and generalize results of the second-named author and of Tulino, Caire, Shamai and Verdu.",
"* Recasts topics in random fields by following a completely new way of handling both geometry and probability * Significant exposition of the work of others in the field * Presentation is clear and pedagogical * Excellent reference work as well as excellent work for self study @PARASPLIT This monograph is devoted to a completely new approach to geometric problems arising in the study of random fields. The groundbreaking material in Part III, for which the background is carefully prepared in Parts I and II, is of both theoretical and practical importance, and striking in the way in which problems arising in geometry and probability are beautifully intertwined. @PARASPLIT The three parts to the monograph are quite distinct. Part I presents a user-friendly yet comprehensive background to the general theory of Gaussian random fields, treating classical topics such as continuity and boundedness, entropy and majorizing measures, Borell and Slepian inequalities. Part II gives a quick review of geometry, both integral and Riemannian, to provide the reader with the material needed for Part III, and to give some new results and new proofs of known results along the way. Topics such as Crofton formulae, curvature measures for stratified manifolds, critical point theory, and tube formulae are covered. In fact, this is the only concise, self-contained treatment of all of the above topics, which are necessary for the study of random fields. The new approach in Part III is devoted to the geometry of excursion sets of random fields and the related Euler characteristic approach to extremal probabilities. @PARASPLIT \"Random Fields and Geometry\" will be useful for probabilists and statisticians, and for theoretical and applied mathematicians who wish to learn about new relationships between geometry and probability. It will be helpful for graduate students in a classroom setting, or for self-study. Finally, this text will serve as a basic reference for all those interested in the companion volume of the applications of the theory. These applications, to appear in a forthcoming volume, will cover areas as widespread as brain imaging, physical oceanography, and astrophysics."
]
} |
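For the record above, the objects named in the abstract can be written out explicitly. The display below is a sketch under standard conventions: the eigenfunction is a Gaussian combination over the lattice points of norm-squared E (the 1/sqrt(N) normalization is an assumption for illustration), the Leray measure is the usual microcanonical limit, and the variance line simply restates the asymptotic quoted in the abstract.

```latex
% Gaussian random eigenfunction on the torus T^d = R^d/Z^d with multiplicity N:
f(x) \;=\; \frac{1}{\sqrt{N}} \sum_{\lambda \in \mathbb{Z}^d,\; |\lambda|^2 = E}
   \Bigl( a_\lambda \cos\bigl(2\pi\langle \lambda, x\rangle\bigr)
        + b_\lambda \sin\bigl(2\pi\langle \lambda, x\rangle\bigr) \Bigr),
\qquad a_\lambda, b_\lambda \sim \mathcal{N}(0,1) \text{ i.i.d.}

% Leray (microcanonical) measure of the nodal set f^{-1}(0), and the quoted asymptotic:
\mathcal{L}(f) \;=\; \lim_{\varepsilon \to 0} \frac{1}{2\varepsilon}
   \operatorname{vol}\bigl\{\, x \in \mathbb{T}^d : |f(x)| < \varepsilon \,\bigr\},
\qquad
\operatorname{Var}\bigl(\mathcal{L}(f)\bigr) \;\sim\; \frac{1}{4\pi N}
\quad (N \to \infty).
```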
cs0609026 | 2950493373 | The performance of peer-to-peer file replication comes from its piece and peer selection strategies. Two such strategies have been introduced by the BitTorrent protocol: the rarest first and choke algorithms. Whereas it is commonly admitted that BitTorrent performs well, recent studies have proposed the replacement of the rarest first and choke algorithms in order to improve efficiency and fairness. In this paper, we use results from real experiments to advocate that the replacement of the rarest first and choke algorithms cannot be justified in the context of peer-to-peer file replication in the Internet. We instrumented a BitTorrent client and ran experiments on real torrents with different characteristics. Our experimental evaluation is peer oriented, instead of tracker oriented, which allows us to get detailed information on all exchanged messages and protocol events. We go beyond the mere observation of the good efficiency of both algorithms. We show that the rarest first algorithm guarantees close to ideal diversity of the pieces among peers. In particular, on our experiments, replacing the rarest first algorithm with source or network coding solutions cannot be justified. We also show that the choke algorithm in its latest version fosters reciprocation and is robust to free riders. In particular, the choke algorithm is fair and its replacement with a bit level tit-for-tat solution is not appropriate. Finally, we identify new areas of improvements for efficient peer-to-peer file replication protocols. | @cite_3 study the file popularity, file availability, download performance, content lifetime and pollution level on a popular tracker site. This work is orthogonal to ours as they do not study the core algorithms of BitTorrent, but rather focus on the contents distributed using BitTorrent and on the users behavior. The work that is the most closely related to our study was done by @cite_29 . In this paper, the authors provide seminal insights into BitTorrent based on data collected from a log for a yet popular torrent, even if a sketch of a local vision from a local peer perspective is presented. Their results provide information on peers behavior, and show a correlation between uploaded and downloaded amount of data. Our work differs from @cite_29 in that we provide a thorough measurement-based analysis of the rarest first and choke algorithms. We also study a large variety of torrents, which allows us not to be biased toward a particular type of torrent. Moreover, without pretending to answer all possible questions that arise from a simple yet powerful protocol as BitTorrent, we provide new insights into the rarest first and choke algorithms. | {
"cite_N": [
"@cite_29",
"@cite_3"
],
"mid": [
"2145080738",
"2128094442"
],
"abstract": [
"In this paper, a simple mathematical model is presented for studying the performance of the BitTorrent (1) file sharing system. We are especially interested in the distribution of the peers with different states of the download job completedness. With the model we find that in the stable state the distribution of the download peers follows a U-shaped curve, and the parameters such as the departure rate of the seeds and the abort rate of the download peers will influence the peer distribution in different ways notably. We also analyze the file availability and the dying process of the BitTorrent file sharing system. We find that the system's stability deteriorates with the clustering of the peers, and BitTorrent's built-in \"tit-for-tat\" unchoking strategy could not help to preserve the integrity of the file among the download peers when the size of the community is small. An innovative peer selection strategy which enables more peers to finish the download job and prolongs the system's lifetime is proposed, in which the peers cooperate to improve the stability of the system by making a tradeoff between the current download rate and the future service availability. Finally, experimental results are presented to validate our analysis and findings.",
"The observed performance by individual peers in BitTorrent can be simply measured by their average download rate. While it is often stated that the observed peer-level performance by BitTorrent clients is high, it is difficult to accurately verify this claim due to the large scale, distributed and dynamic nature of this P2P system. To provide a \"representative\" characterization of peer-level performance in BitTorrent, the following two important questions should be addressed: (i) What is the distribution of observed performance among participating peers in a torrent? (ii) What are the primary peer-or group-level properties that determine observed performance by individual peers? In this paper, we conduct a measurement study to tackle these two questions. Toward this end, we derive observed performance for nearly all participating peers along with their main peer-and (peer-view of) group-level properties in three different torrents. Our results show that the probability of experiencing certain level of performance has a roughly uniform distribution across the entire range of observed values. Furthermore, while the performance of each peer has the highest correlation with its outgoing bandwidth, there is no dominant peer-and group-level property that primarily determines the observed performance by the majority of peers."
]
} |
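The rarest-first policy evaluated in the record above is easy to state: among the pieces the local peer still misses and the remote peer can provide, request one whose replica count over the current peer set is lowest. A minimal sketch of that selection rule follows; the random tie-breaking and the set-based bitfields are assumptions for illustration, and real clients layer further rules (strict priority, end game mode) on top that are not shown.

```python
import random
from collections import Counter

def rarest_first(my_pieces, remote_pieces, peer_bitfields):
    """Pick the rarest piece among those the remote peer has and we still miss."""
    candidates = remote_pieces - my_pieces
    if not candidates:
        return None
    copies = Counter()
    for bitfield in peer_bitfields:          # availability across the current peer set
        copies.update(bitfield & candidates)
    rarity = {piece: copies.get(piece, 0) for piece in candidates}
    rarest = min(rarity.values())
    return random.choice([p for p, c in rarity.items() if c == rarest])

peers = [{0, 1, 2, 3}, {1, 2}, {2, 3}, {1, 2, 3}]
print(rarest_first(my_pieces={2}, remote_pieces={0, 1, 3}, peer_bitfields=peers))  # -> 0
```

On the toy peer set, piece 0 has a single copy while pieces 1 and 3 have three each, so piece 0 is selected, which is how the policy keeps piece diversity high.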
cs0609166 | 1640198638 | We consider the problem of private computation of approximate Heavy Hitters. Alice and Bob each hold a vector and, in the vector sum, they want to find the B largest values along with their indices. While the exact problem requires linear communication, protocols in the literature solve this problem approximately using polynomial computation time, polylogarithmic communication, and constantly many rounds. We show how to solve the problem privately with comparable cost, in the sense that nothing is learned by Alice and Bob beyond what is implied by their input, the ideal top-B output, and goodness of approximation (equivalently, the Euclidean norm of the vector sum). We give lower bounds showing that the Euclidean norm must leak by any efficient algorithm. | Other work in private communication-efficient protocols for specific functions includes the Private Information Retrieval problem @cite_9 @cite_6 @cite_2 , building decision trees @cite_10 , set intersection and matching @cite_12 , and k'th-ranked element @cite_0 . | {
"cite_N": [
"@cite_9",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_10",
"@cite_12"
],
"mid": [
"2154654620",
"2143087446",
"2069717895",
"1963094505",
"2752936485",
"1597844714"
],
"abstract": [
"We establish the following, quite unexpected, result: replication of data for the computational private information retrieval problem is not necessary. More specifically, based on the quadratic residuosity assumption, we present a single database, computationally private information retrieval scheme with O(n sup spl epsiv ) communication complexity for any spl epsiv >0.",
"We consider the problem of computing the intersection of private datasets of two parties, where the datasets contain lists of elements taken from a large domain. This problem has many applications for online collaboration. We present protocols, based on the use of homomorphic encryption and balanced hashing, for both semi-honest and malicious environments. For lists of length k, we obtain O(k) communication overhead and O(k ln ln k) computation. The protocol for the semi-honest environment is secure in the standard model, while the protocol for the malicious environment is secure in the random oracle model. We also consider the problem of approximating the size of the intersection, show a linear lower-bound for the communication overhead of solving this problem, and provide a suitable secure protocol. Lastly, we investigate other variants of the matching problem, including extending the protocol to the multi-party setting as well as considering the problem of approximate matching.",
"A secure function evaluation protocol allows two parties to jointly compute a function f(x,y) of their inputs in a manner not leaking more information than necessary. A major result in this field is: “any function f that can be computed using polynomial resources can be computed securely using polynomial resources” (where “resources” refers to communication and computation). This result follows by a general transformation from any circuit for f to a secure protocol that evaluates f . Although the resources used by protocols resulting from this transformation are polynomial in the circuit size, they are much higher (in general) than those required for an insecure computation of f . We propose a new methodology for designing secure protocols, utilizing the communication complexity tree (or branching program) representation of f . We start with an efficient (insecure) protocol for f and transform it into a secure protocol. In other words, any function f that can be computed using communication complexity c can be can be computed securely using communication complexity that is polynomial in c and a security parameter''. We show several simple applications of this new methodology resulting in protocols efficient either in communication or in computation. In particular, we exemplify a protocol for the Millionaires problem, where two participants want to compare their values but reveal no other information. Our protocol is more efficient than previously known ones in either communication or computation.",
"We present a single-database computationally private information retrieval scheme with polylogarithmic communication complexity. Our construction is based on a new, but reasonable intractability assumption, which we call the φ-Hiding Assumption (φHA): essentially the difficulty of deciding whether a small prime divides φ(m), where m is a composite integer of unknown factorization.",
"We study the problem of Private Information Retrieval (PIR) in the presence of prior side information. The problem setup includes a database of @math independent messages possibly replicated on several servers, and a user that needs to retrieve one of these messages. In addition, the user has some prior side information in the form of a subset of @math messages, not containing the desired message and unknown to the servers. This problem is motivated by practical settings in which the user can obtain side information opportunistically from other users or has previously downloaded some messages using classical PIR schemes. The objective of the user is to retrieve the required message without revealing its identity while minimizing the amount of data downloaded from the servers. We focus on achieving information-theoretic privacy in two scenarios: (i) the user wants to protect jointly its demand and side information; (ii) the user wants to protect only the information about its demand, but not the side information. To highlight the role of side information, we focus first on the case of a single server (single database). In the first scenario, we prove that the minimum download cost is @math messages, and in the second scenario it is @math messages, which should be compared to @math messages, the minimum download cost in the case of no side information. Then, we extend some of our results to the case of the database replicated on multiple servers. Our proof techniques relate PIR with side information to the index coding problem. We leverage this connection to prove converse results, as well as to design achievability schemes.",
"We propose a more efficient privacy preserving set intersection protocol which improves the previously known result by a factor of O(N) in both the computation and communication complexities (N is the number of parties in the protocol). Our protocol is obtained in the malicious model, in which we assume a probabilistic polynomial-time bounded adversary actively controls a fixed set of t (t < N 2) parties. We use a (t + 1,N)-threshold version of the Boneh-Goh-Nissim (BGN) cryptosystem whose underlying group supports bilinear maps. The BGN cryptosystem is generally used in applications where the plaintext space should be small, because there is still a Discrete Logarithm (DL) problem after the decryption. In our protocol the plaintext space can be as large as bounded by the security parameter τ, and the intractability of DL problem is utilized to protect the private datasets. Based on the bilinear map, we also construct some efficient non-interactive proofs. The security of our protocol can be reduced to the common intractable problems including the random oracle, subgroup decision and discrete logarithm problems. The computation complexity of our protocol is O(NS2τ3) (S is the cardinality of each party's dataset), and the communication complexity is O(NS2τ) bits. A similar work by (2006) needs O(N2S2τ3) computation complexity and O(N2S2τ) communication complexity for the same level of correctness as ours."
]
} |
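The protocols cited in the record above all compute some specific function of distributed inputs while hiding everything else. A common building block for such two-party computations over vectors is additive secret sharing, sketched below in a toy form: the modulus, the example vectors and the in-process "exchange" of shares are assumptions for illustration, and this is not the Heavy Hitters protocol of the paper.

```python
import random

P = 2**61 - 1  # a public prime modulus (illustrative choice)

def share(x):
    """Split x into two additive shares modulo P."""
    r = random.randrange(P)
    return r, (x - r) % P

# Alice and Bob each hold a vector; they want the coordinate-wise sum
# without handing their inputs to each other in the clear.
alice = [3, 0, 7, 1]
bob   = [2, 5, 0, 4]

alice_shares = [share(x) for x in alice]   # Alice keeps [0], sends [1] to Bob
bob_shares   = [share(x) for x in bob]     # Bob keeps [0], sends [1] to Alice

alice_partial = [(a[0] + b[1]) % P for a, b in zip(alice_shares, bob_shares)]
bob_partial   = [(a[1] + b[0]) % P for a, b in zip(alice_shares, bob_shares)]

recovered = [(u + v) % P for u, v in zip(alice_partial, bob_partial)]
print(recovered)  # [5, 5, 7, 5] -- the vector sum appears only when partials are combined
```

Each party alone sees only uniformly random shares; the joint output reveals the sum and nothing more, which is the flavor of guarantee the cited protocols provide for richer functions.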
cs0609166 | 1640198638 | We consider the problem of private computation of approximate Heavy Hitters. Alice and Bob each hold a vector and, in the vector sum, they want to find the B largest values along with their indices. While the exact problem requires linear communication, protocols in the literature solve this problem approximately using polynomial computation time, polylogarithmic communication, and constantly many rounds. We show how to solve the problem privately with comparable cost, in the sense that nothing is learned by Alice and Bob beyond what is implied by their input, the ideal top-B output, and goodness of approximation (equivalently, the Euclidean norm of the vector sum). We give lower bounds showing that the Euclidean norm must leak by any efficient algorithm. | The breakthrough @cite_15 gives a general technique for converting any protocol into a private protocol with little communication overhead. It is not the end of the story, however, because the computation may increase exponentially. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2069717895"
],
"abstract": [
"A secure function evaluation protocol allows two parties to jointly compute a function f(x,y) of their inputs in a manner not leaking more information than necessary. A major result in this field is: “any function f that can be computed using polynomial resources can be computed securely using polynomial resources” (where “resources” refers to communication and computation). This result follows by a general transformation from any circuit for f to a secure protocol that evaluates f . Although the resources used by protocols resulting from this transformation are polynomial in the circuit size, they are much higher (in general) than those required for an insecure computation of f . We propose a new methodology for designing secure protocols, utilizing the communication complexity tree (or branching program) representation of f . We start with an efficient (insecure) protocol for f and transform it into a secure protocol. In other words, any function f that can be computed using communication complexity c can be can be computed securely using communication complexity that is polynomial in c and a security parameter''. We show several simple applications of this new methodology resulting in protocols efficient either in communication or in computation. In particular, we exemplify a protocol for the Millionaires problem, where two participants want to compare their values but reveal no other information. Our protocol is more efficient than previously known ones in either communication or computation."
]
} |
cs0609166 | 1640198638 | We consider the problem of private computation of approximate Heavy Hitters. Alice and Bob each hold a vector and, in the vector sum, they want to find the B largest values along with their indices. While the exact problem requires linear communication, protocols in the literature solve this problem approximately using polynomial computation time, polylogarithmic communication, and constantly many rounds. We show how to solve the problem privately with comparable cost, in the sense that nothing is learned by Alice and Bob beyond what is implied by their input, the ideal top-B output, and goodness of approximation (equivalently, the Euclidean norm of the vector sum). We give lower bounds showing that the Euclidean norm must leak by any efficient algorithm. | Work in private approximations include @cite_19 that introduced the notion as a conference paper in 2001 and gave several protocols. Some negative results were given in @cite_13 for approximations to NP-Hard functions; more on NP-hard search problems appears in @cite_11 . Recently, @cite_18 gives a private approximation to the Euclidean norm that is central to our paper. | {
"cite_N": [
"@cite_19",
"@cite_18",
"@cite_13",
"@cite_11"
],
"mid": [
"2035825137",
"1572949938",
"1608414128",
"2047515520"
],
"abstract": [
"The notion of private approximation was introduced recently by Feigenbaum, Fong, Strauss and Wright. Informally, a private approximation of a function f is another function F that approximates f in the usual sense, but does not yield any information on x other than what can be deduced from f(x) . As such, F(x) is useful for private computation of f(x) (assuming that F can be computed more efficiently than f . In this work we examine the properties and limitations of this new notion. Specifically, we show that for many NP-hard problems, the privacy requirement precludes non-trivial approximation. This is the case even for problems that otherwise admit very good approximation (e.g., problems with PTAS). On the other hand, we show that slightly relaxing the privacy requirement, by means of leaking “just a few bits of informationrdquo; about x , again permits good approximation.",
"In [12] a private approximation of a function f is defined to be another function F that approximates f in the usual sense, but does not reveal any information about x other than what can be deduced from f(x). We give the first two-party private approximation of the l2 distance with polylogarithmic communication. This, in particular, resolves the main open question of [12]. We then look at the private near neighbor problem in which Alice has a query point in 0,1 d and Bob a set of n points in 0,1 d, and Alice should privately learn the point closest to her query. We improve upon existing protocols, resolving open questions of [13,10]. Then, we relax the problem by defining the private approximate near neighbor problem, which requires introducing a notion of secure computation of approximations for functions that return sets of points rather than values. For this problem we give several protocols with sublinear communication.",
"Private approximation of search problems deals with finding approximate solutions to search problems while disclosing as little information as possible. The focus of this work is on private approximation of the vertex cover problem and two well studied clustering problems - k-center and k-median. Vertex cover was considered in [Beimel, Carmi, Nissim, and Weinreb, STOC, 2006] and we improve their infeasibility results. Clustering algorithms are frequently applied to sensitive data, and hence are of interest in the contexts of secure computation and private approximation. We show that these problems do not admit private approximations, or even approximation algorithms that leak significant number of bits. For the vertex cover problem we show a tight infeasibility result: every algorithm that p(n)-approximates vertex-cover must leak ω(n p(n)) bits (where n is the number of vertices in the graph). For the clustering problems we prove that even approximation algorithms with a poor approximation ratio must leak ω(n) bits (where n is the number of points in the instance). For these results we develop new proof techniques, which are more simple and intuitive than those in , and yet allow stronger infeasibility results. Our proofs rely on the hardness of the promise problem where a unique optimal solution exists [Valiant and Vazirani, Theoretical Computer Science, 1986], on the hardness of approximating witnesses for NP-hard problems ([Kumar and Sivakumar, CCC, 1999] and [Feige, Langberg, and Nissim, APPROX, 2000]), and on a simple random embedding of instances into bigger instances.",
"Many approximation algorithms have been presented in the last decades for hard search problems. The focus of this paper is on cryptographic applications, where it is desired to design algorithms which do not leak unnecessary information. Specifically, we are interested in private approximation algorithms -- efficient algorithms whose output does not leak information not implied by the optimal solutions to the search problems. Privacy requirements add constraints on the approximation algorithms; in particular, known approximation algorithms usually leak a lot of information.For functions, [, ICALP 2001] presented a natural requirement that a private algorithm should not leak information not implied by the original function. Generalizing this requirement to search problems is not straightforward as an input may have many different outputs. We present a new definition that captures a minimal privacy requirement from such algorithms -- applied to an input instance, it should not leak any information that is not implied by its collection of exact solutions. Although our privacy requirement seems minimal, we show that for well studied problems, as vertex cover and 3SAT, private approximation algorithms are unlikely to exist even for poor approximation ratios. Similar to [, STOC 2001], we define a relaxed notion of approximation algorithms that leak (little) information, and demonstrate the applicability of this notion by showing near optimal approximation algorithms for 3SAT that leak little information."
]
} |
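The private approximation of the Euclidean norm mentioned at the end of the record above builds on the standard non-private idea of estimating the norm from a few random +/-1 projections (an AMS-style sketch). The snippet below illustrates only that public estimator; the number of projections, the seed and the example vector are assumptions, and no privacy mechanism is included.

```python
import math, random

def l2_estimate(vec, k=400, seed=7):
    """Estimate the Euclidean norm of `vec` from k random +/-1 projections."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(k):
        proj = sum(rng.choice((-1.0, 1.0)) * x for x in vec)  # one sketch coordinate
        acc += proj * proj                                    # E[proj^2] equals ||vec||^2
    return math.sqrt(acc / k)

v = [1.0, -2.0, 3.0, 0.5]
print(l2_estimate(v), math.sqrt(sum(x * x for x in v)))  # estimate vs exact norm
```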
cs0609166 | 1640198638 | We consider the problem of private computation of approximate Heavy Hitters. Alice and Bob each hold a vector and, in the vector sum, they want to find the B largest values along with their indices. While the exact problem requires linear communication, protocols in the literature solve this problem approximately using polynomial computation time, polylogarithmic communication, and constantly many rounds. We show how to solve the problem privately with comparable cost, in the sense that nothing is learned by Alice and Bob beyond what is implied by their input, the ideal top-B output, and goodness of approximation (equivalently, the Euclidean norm of the vector sum). We give lower bounds showing that the Euclidean norm must leak by any efficient algorithm. | Statistical work such as @cite_5 also addresses approximate summaries over large databases, but differs from our work in many parameters, such as the number of players and the allowable communication. | {
"cite_N": [
"@cite_5"
],
"mid": [
"1995294201"
],
"abstract": [
"While the use of statistical physics methods to analyze large corpora has been useful to unveil many patterns in texts, no comprehensive investigation has been performed on the interdependence between syntactic and semantic factors. In this study we propose a framework for determining whether a text (e.g., written in an unknown alphabet) is compatible with a natural language and to which language it could belong. The approach is based on three types of statistical measurements, i.e. obtained from first-order statistics of word properties in a text, from the topology of complex networks representing texts, and from intermittency concepts where text is treated as a time series. Comparative experiments were performed with the New Testament in 15 different languages and with distinct books in English and Portuguese in order to quantify the dependency of the different measurements on the language and on the story being told in the book. The metrics found to be informative in distinguishing real texts from their shuffled versions include assortativity, degree and selectivity of words. As an illustration, we analyze an undeciphered medieval manuscript known as the Voynich Manuscript. We show that it is mostly compatible with natural languages and incompatible with random texts. We also obtain candidates for keywords of the Voynich Manuscript which could be helpful in the effort of deciphering it. Because we were able to identify statistical measurements that are more dependent on the syntax than on the semantics, the framework may also serve for text analysis in language-dependent applications."
]
} |
cs0609166 | 1640198638 | We consider the problem of private computation of approximate Heavy Hitters. Alice and Bob each hold a vector and, in the vector sum, they want to find the B largest values along with their indices. While the exact problem requires linear communication, protocols in the literature solve this problem approximately using polynomial computation time, polylogarithmic communication, and constantly many rounds. We show how to solve the problem privately with comparable cost, in the sense that nothing is learned by Alice and Bob beyond what is implied by their input, the ideal top-B output, and goodness of approximation (equivalently, the Euclidean norm of the vector sum). We give lower bounds showing that the Euclidean norm must leak by any efficient algorithm. | There are many papers that address the Heavy Hitters problem and sketching in general, in a variety of contexts. Many of the needed ideas can be seen in @cite_8 and other important papers include @cite_3 @cite_17 @cite_4 @cite_16 . | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_3",
"@cite_16",
"@cite_17"
],
"mid": [
"2963274201",
"2058858139",
"48624319",
"1978676170",
"1976664910"
],
"abstract": [
"This paper resolves one of the longest standing basic problems in the streaming computational model. Namely, optimal construction of quantile sketches. An e approximate quantile sketch receives a stream of items x1,·,xn and allows one to approximate the rank of any query item up to additive error e n with probability at least 1-δ.The rank of a query x is the number of stream items such that xi ≤ x. The minimal sketch size required for this task is trivially at least 1 e.Felber and Ostrovsky obtain a O((1 e)log(1 e)) space sketch for a fixed δ.Without restrictions on the nature of the stream or the ratio between e and n, no better upper or lower bounds were known to date. This paper obtains an O((1 e)log log (1 δ)) space sketch and a matching lower bound. This resolves the open problem and proves a qualitative gap between randomized and deterministic quantile sketching for which an Ω((1 e)log(1 e)) lower bound is known. One of our contributions is a novel representation and modification of the widely used merge-and-reduce construction. This modification allows for an analysis which is both tight and extremely simple. The same technique was reported, in private communications, to be useful for improving other sketching objectives and geometric coreset constructions.",
"Sketching techniques can provide approximate answers to aggregate queries either for data-streaming or distributed computation. Small space summaries that have linearity properties are required for both types of applications. The prevalent method for analyzing sketches uses moment analysis and distribution independent bounds based on moments. This method produces clean, easy to interpret, theoretical bounds that are especially useful for deriving asymptotic results. However, the theoretical bounds obscure fine details of the behavior of various sketches and they are mostly not indicative of which type of sketches should be used in practice. Moreover, no significant empirical comparison between various sketching techniques has been published, which makes the choice even harder. In this paper, we take a close look at the sketching techniques proposed in the literature from a statistical point of view with the goal of determining properties that indicate the actual behavior and producing tighter confidence bounds. Interestingly, the statistical analysis reveals that two of the techniques, Fast-AGMS and Count-Min, provide results that are in some cases orders of magnitude better than the corresponding theoretical predictions. We conduct an extensive empirical study that compares the different sketching techniques in order to corroborate the statistical analysis with the conclusions we draw from it. The study indicates the expected performance of various sketches, which is crucial if the techniques are to be used by practitioners. The overall conclusion of the study is that Fast-AGMS sketches are, for the full spectrum of problems, either the best, or close to the best, sketching technique. This makes Fast-AGMS sketches the preferred choice irrespective of the situation.",
"Motivated by the problem of querying and communicating bidders' valuations in combinatorial auctions, we study how well different classes of set functions can be sketched. More formally, let f be a function mapping subsets of some ground set [n] to the non-negative real numbers. We say that f' is an α-sketch of f if for every set S, the value f'(S) lies between f(S) α and f(S), and f' can be specified by poly(n) bits. We show that for every subadditive function f there exists an α-sketch where α = n1 2. O(polylog(n)). Furthermore, we provide an algorithm that finds these sketches with a polynomial number of demand queries. This is essentially the best we can hope for since: 1. We show that there exist subadditive functions (in fact, XOS functions) that do not admit an o(n1 2) sketch. (Balcan and Harvey [3] previously showed that there exist functions belonging to the class of substitutes valuations that do not admit an O(n1 3) sketch.) 2. We prove that every deterministic algorithm that accesses the function via value queries only cannot guarantee a sketching ratio better than n1−e. We also show that coverage functions, an interesting subclass of submodular functions, admit arbitrarily good sketches. Finally, we show an interesting connection between sketching and learning. We show that for every class of valuations, if the class admits an α-sketch, then it can be α-approximately learned in the PMAC model of Balcan and Harvey. The bounds we prove are only information-theoretic and do not imply the existence of computationally efficient learning algorithms in general.",
"Sketching and streaming algorithms are in the forefront of current research directions for cut problems in graphs. In the streaming model, we show that (1--e)-approximation for Max-Cut must use n 1-O(e) space; moreover, beating 4 5-approximation requires polynomial space. For the sketching model, we show that every r-uniform hypergraph admits a (1+ e)-cut-sparsifier (i.e., a weighted subhypergraph that approximately preserves all the cuts) with O(e-2n(r+log n)) edges. We also make first steps towards sketching general CSPs (Constraint Satisfaction Problems).",
"We introduce an approach for sketch classification based on Fisher vectors that significantly outperforms existing techniques. For the TU-Berlin sketch benchmark [ 2012a], our recognition rate is close to human performance on the same task. Motivated by these results, we propose a different benchmark for the evaluation of sketch classification algorithms. Our key idea is that the relevant aspect when recognizing a sketch is not the intention of the person who made the drawing, but the information that was effectively expressed. We modify the original benchmark to capture this concept more precisely and, as such, to provide a more adequate tool for the evaluation of sketch classification techniques. Finally, we perform a classification-driven analysis which is able to recover semantic aspects of the individual sketches, such as the quality of the drawing and the importance of each part of the sketch for the recognition."
]
} |
cs0609122 | 2952623723 | We consider a general multiple antenna network with multiple sources, multiple destinations and multiple relays in terms of the diversity-multiplexing tradeoff (DMT). We examine several subcases of this most general problem taking into account the processing capability of the relays (half-duplex or full-duplex), and the network geometry (clustered or non-clustered). We first study the multiple antenna relay channel with a full-duplex relay to understand the effect of increased degrees of freedom in the direct link. We find DMT upper bounds and investigate the achievable performance of decode-and-forward (DF), and compress-and-forward (CF) protocols. Our results suggest that while DF is DMT optimal when all terminals have one antenna each, it may not maintain its good performance when the degrees of freedom in the direct link is increased, whereas CF continues to perform optimally. We also study the multiple antenna relay channel with a half-duplex relay. We show that the half-duplex DMT behavior can significantly be different from the full-duplex case. We find that CF is DMT optimal for half-duplex relaying as well, and is the first protocol known to achieve the half-duplex relay DMT. We next study the multiple-access relay channel (MARC) DMT. Finally, we investigate a system with a single source-destination pair and multiple relays, each node with a single antenna, and show that even under the idealistic assumption of full-duplex relays and a clustered network, this virtual multi-input multi-output (MIMO) system can never fully mimic a real MIMO DMT. For cooperative systems with multiple sources and multiple destinations the same limitation remains to be in effect. | MIMO relay channels are studied in terms of ergodic capacity in @cite_47 and in terms of the DMT in @cite_17 . The latter considers one particular protocol only, presents a lower bound on the performance and designs space-time block codes. This lower bound is not tight in general and is valid only if the number of relay antennas is less than or equal to the number of source antennas. | {
"cite_N": [
"@cite_47",
"@cite_17"
],
"mid": [
"2106801259",
"2114973618"
],
"abstract": [
"We study the capacity of multiple-input multiple- output (MIMO) relay channels. We first consider the Gaussian MIMO relay channel with fixed channel conditions, and derive upper bounds and lower bounds that can be obtained numerically by convex programming. We present algorithms to compute the bounds. Next, we generalize the study to the Rayleigh fading case. We find an upper bound and a lower bound on the ergodic capacity. It is somewhat surprising that the upper bound can meet the lower bound under certain regularity conditions (not necessarily degradedness), and therefore the capacity can be characterized exactly; previously this has been proven only for the degraded Gaussian relay channel. We investigate sufficient conditions for achieving the ergodic capacity; and in particular, for the case where all nodes have the same number of antennas, the capacity can be achieved under certain signal-to-noise ratio (SNR) conditions. Numerical results are also provided to illustrate the bounds on the ergodic capacity of the MIMO relay channel over Rayleigh fading. Finally, we present a potential application of the MIMO relay channel for cooperative communications in ad hoc networks.",
"In this paper, we consider a three-terminal state-dependent relay channel (RC) with the channel state noncausally available at only the relay. Such a model may be useful for designing cooperative wireless networks with some terminals equipped with cognition capabilities, i.e., the relay in our setup. In the discrete memoryless (DM) case, we establish lower and upper bounds on channel capacity. The lower bound is obtained by a coding scheme at the relay that uses a combination of codeword splitting, Gel'fand-Pinsker binning, and decode-and-forward (DF) relaying. The upper bound improves upon that obtained by assuming that the channel state is available at the source, the relay, and the destination. For the Gaussian case, we also derive lower and upper bounds on the capacity. The lower bound is obtained by a coding scheme at the relay that uses a combination of codeword splitting, generalized dirty paper coding (DPC), and DF relaying; the upper bound is also better than that obtained by assuming that the channel state is available at the source, the relay, and the destination. In the case of degraded Gaussian channels, the lower bound meets with the upper bound for some special cases, and, so, the capacity is obtained for these cases. Furthermore, in the Gaussian case, we also extend the results to the case in which the relay operates in a half-duplex mode."
]
} |
cs0609122 | 2952623723 | We consider a general multiple antenna network with multiple sources, multiple destinations and multiple relays in terms of the diversity-multiplexing tradeoff (DMT). We examine several subcases of this most general problem taking into account the processing capability of the relays (half-duplex or full-duplex), and the network geometry (clustered or non-clustered). We first study the multiple antenna relay channel with a full-duplex relay to understand the effect of increased degrees of freedom in the direct link. We find DMT upper bounds and investigate the achievable performance of decode-and-forward (DF), and compress-and-forward (CF) protocols. Our results suggest that while DF is DMT optimal when all terminals have one antenna each, it may not maintain its good performance when the degrees of freedom in the direct link is increased, whereas CF continues to perform optimally. We also study the multiple antenna relay channel with a half-duplex relay. We show that the half-duplex DMT behavior can significantly be different from the full-duplex case. We find that CF is DMT optimal for half-duplex relaying as well, and is the first protocol known to achieve the half-duplex relay DMT. We next study the multiple-access relay channel (MARC) DMT. Finally, we investigate a system with a single source-destination pair and multiple relays, each node with a single antenna, and show that even under the idealistic assumption of full-duplex relays and a clustered network, this virtual multi-input multi-output (MIMO) system can never fully mimic a real MIMO DMT. For cooperative systems with multiple sources and multiple destinations the same limitation remains to be in effect. | The multiple-access relay channel (MARC) is introduced in @cite_43 @cite_0 @cite_39 . In the MARC, the relay helps multiple sources simultaneously to reach a common destination. The DMT for the half-duplex MARC with single antenna nodes is studied in @cite_31 @cite_37 @cite_34 . In @cite_31 , the authors find that the dynamic decode-and-forward (DDF) protocol is optimal for low multiplexing gains; however, this protocol remains suboptimal for high multiplexing gains, analogous to the single-source relay channel. This region, where DDF is suboptimal, is achieved by the multiple access amplify and forward (MAF) protocol @cite_37 @cite_34 . | {
"cite_N": [
"@cite_37",
"@cite_39",
"@cite_0",
"@cite_43",
"@cite_31",
"@cite_34"
],
"mid": [
"2099857870",
"2120404432",
"2136209931",
"2139947445",
"2148861839",
"2189096384"
],
"abstract": [
"We consider a general multiple-antenna network with multiple sources, multiple destinations, and multiple relays in terms of the diversity-multiplexing tradeoff (DMT). We examine several subcases of this most general problem taking into account the processing capability of the relays (half-duplex or full-duplex), and the network geometry (clustered or nonclustered). We first study the multiple-antenna relay channel with a full-duplex relay to understand the effect of increased degrees of freedom in the direct link. We find DMT upper bounds and investigate the achievable performance of decode-and-forward (DF), and compress-and-forward (CF) protocols. Our results suggest that while DF is DMT optimal when all terminals have one antenna each, it may not maintain its good performance when the degrees of freedom in the direct link are increased, whereas CF continues to perform optimally. We also study the multiple-antenna relay channel with a half-duplex relay. We show that the half-duplex DMT behavior can significantly be different from the full-duplex case. We find that CF is DMT optimal for half-duplex relaying as well, and is the first protocol known to achieve the half-duplex relay DMT. We next study the multiple-access relay channel (MARC) DMT. Finally, we investigate a system with a single source-destination pair and multiple relays, each node with a single antenna, and show that even under the ideal assumption of full-duplex relays and a clustered network, this virtual multiple-input multiple-output (MIMO) system can never fully mimic a real MIMO DMT. For cooperative systems with multiple sources and multiple destinations the same limitation remains in effect.",
"This paper considers an interference network composed of K half-duplex single-antenna pairs of users who wish to establish bi-directional communication with the aid of a multi-input-multi-output (MIMO) half-duplex relay node. This channel is referred to as the “MIMO Wireless Switch” since, for the sake of simplicity, our model assumes no direct link between the two end nodes of each pair implying that all communication must go through the relay node (i.e., the MIMO switch). Assuming a delay-limited scenario, the fundamental limits in the high signal-to-noise ratio (SNR) regime is analyzed using the diversity-multiplexing tradeoff (DMT) framework. Our results sheds light on the structure of optimal transmission schemes and the gain offered by the relay node in two distinct cases, namely reciprocal and non-reciprocal channels (between the relay and end-users). In particular, the existence of a relay node, equipped with a sufficient number of antennas, is shown to increase the multiplexing gain; as compared with the traditional fully connected K-pair interference channel. To the best of our knowledge, this is the first known example where adding a relay node results in enlarging the pre-log factor of the sum rate. Moreover, for the case of reciprocal channels, it is shown that, when the relay has a number of antennas at least equal to the sum of antennas of all the users, static time allocation of decode and forward (DF) type schemes is optimal. On the other hand, in the non-reciprocal scenario, we establish the optimality of dynamic decode and forward in certain relevant scenarios.",
"We propose novel cooperative transmission protocols for delay-limited coherent fading channels consisting of N (half-duplex and single-antenna) partners and one cell site. In our work, we differentiate between the relay, cooperative broadcast (down-link), and cooperative multiple-access (CMA) (up-link) channels. The proposed protocols are evaluated using Zheng-Tse diversity-multiplexing tradeoff. For the relay channel, we investigate two classes of cooperation schemes; namely, amplify and forward (AF) protocols and decode and forward (DF) protocols. For the first class, we establish an upper bound on the achievable diversity-multiplexing tradeoff with a single relay. We then construct a new AF protocol that achieves this upper bound. The proposed algorithm is then extended to the general case with (N-1) relays where it is shown to outperform the space-time coded protocol of Laneman and Wornell without requiring decoding encoding at the relays. For the class of DF protocols, we develop a dynamic decode and forward (DDF) protocol that achieves the optimal tradeoff for multiplexing gains 0lesrles1 N. Furthermore, with a single relay, the DDF protocol is shown to dominate the class of AF protocols for all multiplexing gains. The superiority of the DDF protocol is shown to be more significant in the cooperative broadcast channel. The situation is reversed in the CMA channel where we propose a new AF protocol that achieves the optimal tradeoff for all multiplexing gains. A distinguishing feature of the proposed protocols in the three scenarios is that they do not rely on orthogonal subspaces, allowing for a more efficient use of resources. In fact, using our results one can argue that the suboptimality of previously proposed protocols stems from their use of orthogonal subspaces rather than the half-duplex constraint.",
"In this paper, we study a three-node full-duplex network, where a base station is engaged in simultaneous up- and downlink communication in the same frequency band with two half-duplex mobile nodes. To reduce the impact of inter-node interference between the two mobile nodes on the system capacity, we study how an orthogonal side-channel between the two mobile nodes can be leveraged to achieve full-duplex-like multiplexing gains. We propose and characterize the achievable rates of four distributed full-duplex schemes, labeled bin-and-cancel, compress-and-cancel, estimate-and-cancel and decode-and-cancel. Of the four, bin-and-cancel is shown to achieve within 1 bit s Hz of the capacity region for all values of channel parameters. In contrast, the other three schemes achieve the near-optimal performance only in certain regimes of channel values. Asymptotic multiplexing gains of all proposed schemes are derived to show that the side-channel is extremely effective in regimes where inter-node interference has the highest impact.",
"In a multiple-antenna relay channel, the full-duplex cut-set capacity upper bound and decode-and-forward rate are formulated as convex optimization problems. For half-duplex relaying, bandwidth allocation and transmit signals are optimized jointly. Moreover, achievable rates based on the compress-and-forward strategy are presented using rate-distortion and Wyner-Ziv compression schemes.",
"In this paper, we investigate secure transmission in untrusted amplify-and-forward half-duplex relaying networks with the help of cooperative jamming at the destination (CJD). Under the assumption of full channel state information (CSI), conventional CJD using self-interference cancelation at the destination is efficient when the untrusted relay has no capability to suppress the jamming signal. However, if the source and destination are equipped with a single antenna and the only untrusted relay is equipped with N multiple antennas, it can remove the jamming signal from the received signal by linear filters and the full multiplexing gain of relaying cannot be achievable with the conventional CJD due to the saturation of the secrecy rate at the high transmit power regime. We propose in this paper new CJD scheme where neither destination nor relay can acquire CSI of relay-destination link. Our proposed scheme utilizes zero-forcing cancelation based on known jamming signals instead of self-interference subtraction, while the untrusted relay cannot suppress the jamming signals due to the lack of CSI. We show that the secrecy rate of the proposed scheme can enjoy a half of multiplexing gain in half-duplex relaying while that of conventional CJD is saturated at high transmit power for N ≥2. The impact of channel estimation error at the destination is also investigated to show the robustness of the proposed scheme against strong estimation errors."
]
} |
cs0609122 | 2952623723 | We consider a general multiple antenna network with multiple sources, multiple destinations and multiple relays in terms of the diversity-multiplexing tradeoff (DMT). We examine several subcases of this most general problem taking into account the processing capability of the relays (half-duplex or full-duplex), and the network geometry (clustered or non-clustered). We first study the multiple antenna relay channel with a full-duplex relay to understand the effect of increased degrees of freedom in the direct link. We find DMT upper bounds and investigate the achievable performance of decode-and-forward (DF), and compress-and-forward (CF) protocols. Our results suggest that while DF is DMT optimal when all terminals have one antenna each, it may not maintain its good performance when the degrees of freedom in the direct link is increased, whereas CF continues to perform optimally. We also study the multiple antenna relay channel with a half-duplex relay. We show that the half-duplex DMT behavior can significantly be different from the full-duplex case. We find that CF is DMT optimal for half-duplex relaying as well, and is the first protocol known to achieve the half-duplex relay DMT. We next study the multiple-access relay channel (MARC) DMT. Finally, we investigate a system with a single source-destination pair and multiple relays, each node with a single antenna, and show that even under the idealistic assumption of full-duplex relays and a clustered network, this virtual multi-input multi-output (MIMO) system can never fully mimic a real MIMO DMT. For cooperative systems with multiple sources and multiple destinations the same limitation remains to be in effect. | When multiple single antenna relays are present, the papers @cite_12 @cite_9 @cite_49 @cite_27 @cite_32 @cite_22 @cite_30 show that diversity gains similar to multi-input single-output (MISO) or single-input multi-output (SIMO) systems are achievable for Rayleigh fading channels. Similarly, @cite_3 @cite_48 @cite_18 @cite_29 upper bound the system behavior by MISO or SIMO if all links have Rayleigh fading. In other words, relay systems behave similarly to either transmit or receive antenna arrays. Cooperative systems with multiple sources and multiple destinations are first analyzed in @cite_2 in terms of achievable rates only, where the authors compare a two-source two-destination cooperative system with a @math and show that the former is multiplexing gain limited by 1, whereas the latter has maximum multiplexing gain of 2. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_22",
"@cite_48",
"@cite_9",
"@cite_29",
"@cite_32",
"@cite_3",
"@cite_27",
"@cite_49",
"@cite_2",
"@cite_12"
],
"mid": [
"2137531352",
"2031083855",
"2106801259",
"2132158614",
"2136209931",
"2152121970",
"2598021007",
"2541048740",
"2025889675",
"2153077717",
"2098567664",
"2099857870"
],
"abstract": [
"We consider a dense fading multi-user network with multiple active multi-antenna source-destination pair terminals communicating simultaneously through a large common set of K multi-antenna relay terminals in the full spatial multiplexing mode. We use Shannon-theoretic tools to analyze the tradeoff between energy efficiency and spectral efficiency (known as the power-bandwidth tradeoff) in meaningful asymptotic regimes of signal-to-noise ratio (SNR) and network size. We design linear distributed multi-antenna relay beamforming (LDMRB) schemes that exploit the spatial signature of multi-user interference and characterize their power-bandwidth tradeoff under a system-wide power constraint on source and relay transmissions. The impact of multiple users, multiple relays and multiple antennas on the key performance measures of the high and low SNR regimes is investigated in order to shed new light on the possible reduction in power and bandwidth requirements through the usage of such practical relay cooperation techniques. Our results indicate that point-to-point coded multi-user networks supported by distributed relay beamforming techniques yield enhanced energy efficiency and spectral efficiency, and with appropriate signaling and sufficient antenna degrees of freedom, can achieve asymptotically optimal power-bandwidth tradeoff with the best possible (i.e., as in the cutset bound) energy scaling of K-1 and the best possible spectral efficiency slope at any SNR for large number of relay terminals. Furthermore, our results help to identify the role of interference cancellation capability at the relay terminals on realizing the optimal power- bandwidth tradeoff; and show how relaying schemes that do not attempt to mitigate multi-user interference, despite their optimal capacity scaling performance, could yield a poor power- bandwidth tradeoff.",
"We consider the use of spatial diversity to improve the performance of analog joint source-channel coding in wireless fading channels. The communication system analyzed in this paper consists of discrete-time all-analog-processing joint source-channel coding where Maximum Likelihood (ML) and Minimum Mean Square Error (MMSE) detection are employed. By assuming a fast-fading Rayleigh channel, we show that MMSE performs much better than ML at high Channel Signal-to-Noise Ratios (CSNR) in single-antenna wireless systems. However, such performance gap can be significantly reduced by using multiple receive antennas, thus making low complexity ML decoding very attractive in the case of receive diversity. Moreover, we show that the analog scheme can be considerably robust to imperfect channel estimation. In addition, as an alternative to multiple antennas, we also consider spatial diversity through cooperative communications, and show that the application of the Amplify-and-Forward (AF) protocol with single antenna nodes leads to similar results than when two antennas are available at the receiver and Maximal Ratio Combining (MRC) is applied. Finally, we show that the MMSE implementation of the analog scheme performs very close to the unconstrained capacity of digital schemes using scalar quantization, while its complexity is much lower than that of capacity-approaching digital systems.",
"We study the capacity of multiple-input multiple- output (MIMO) relay channels. We first consider the Gaussian MIMO relay channel with fixed channel conditions, and derive upper bounds and lower bounds that can be obtained numerically by convex programming. We present algorithms to compute the bounds. Next, we generalize the study to the Rayleigh fading case. We find an upper bound and a lower bound on the ergodic capacity. It is somewhat surprising that the upper bound can meet the lower bound under certain regularity conditions (not necessarily degradedness), and therefore the capacity can be characterized exactly; previously this has been proven only for the degraded Gaussian relay channel. We investigate sufficient conditions for achieving the ergodic capacity; and in particular, for the case where all nodes have the same number of antennas, the capacity can be achieved under certain signal-to-noise ratio (SNR) conditions. Numerical results are also provided to illustrate the bounds on the ergodic capacity of the MIMO relay channel over Rayleigh fading. Finally, we present a potential application of the MIMO relay channel for cooperative communications in ad hoc networks.",
"This paper extends Khatri (1964, 1969) distribution of the largest eigenvalue of central complex Wishart matrices to the noncentral case. It then applies the resulting new statistical results to obtain closed-form expressions for the outage probability of multiple-input-multiple-output (MIMO) systems employing maximal ratio combining (known also as \"beamforming\" systems) and operating over Rician-fading channels. When applicable these expressions are compared with special cases previously reported in the literature dealing with the performance of (1) MIMO systems over Rayleigh-fading channels and (2) single-input-multiple-output (SIMO) systems over Rician-fading channels. As a double check these analytical results are validated by Monte Carlo simulations and as an illustration of the mathematical formalism some numerical examples for particular cases of interest are plotted and discussed. These results show that, given a fixed number of total antenna elements and under the same scattering condition (1) SIMO systems are equivalent to multiple-input-single-output systems and (2) it is preferable to distribute the number of antenna elements evenly between the transmitter and the receiver for a minimum outage probability performance.",
"We propose novel cooperative transmission protocols for delay-limited coherent fading channels consisting of N (half-duplex and single-antenna) partners and one cell site. In our work, we differentiate between the relay, cooperative broadcast (down-link), and cooperative multiple-access (CMA) (up-link) channels. The proposed protocols are evaluated using Zheng-Tse diversity-multiplexing tradeoff. For the relay channel, we investigate two classes of cooperation schemes; namely, amplify and forward (AF) protocols and decode and forward (DF) protocols. For the first class, we establish an upper bound on the achievable diversity-multiplexing tradeoff with a single relay. We then construct a new AF protocol that achieves this upper bound. The proposed algorithm is then extended to the general case with (N-1) relays where it is shown to outperform the space-time coded protocol of Laneman and Wornell without requiring decoding encoding at the relays. For the class of DF protocols, we develop a dynamic decode and forward (DDF) protocol that achieves the optimal tradeoff for multiplexing gains 0lesrles1 N. Furthermore, with a single relay, the DDF protocol is shown to dominate the class of AF protocols for all multiplexing gains. The superiority of the DDF protocol is shown to be more significant in the cooperative broadcast channel. The situation is reversed in the CMA channel where we propose a new AF protocol that achieves the optimal tradeoff for all multiplexing gains. A distinguishing feature of the proposed protocols in the three scenarios is that they do not rely on orthogonal subspaces, allowing for a more efficient use of resources. In fact, using our results one can argue that the suboptimality of previously proposed protocols stems from their use of orthogonal subspaces rather than the half-duplex constraint.",
"We develop and analyze low-complexity cooperative diversity protocols that combat fading induced by multipath propagation in wireless networks. The underlying techniques exploit space diversity available through cooperating terminals' relaying signals for one another. We outline several strategies employed by the cooperating radios, including fixed relaying schemes such as amplify-and-forward and decode-and-forward, selection relaying schemes that adapt based upon channel measurements between the cooperating terminals, and incremental relaying schemes that adapt based upon limited feedback from the destination terminal. We develop performance characterizations in terms of outage events and associated outage probabilities, which measure robustness of the transmissions to fading, focusing on the high signal-to-noise ratio (SNR) regime. Except for fixed decode-and-forward, all of our cooperative diversity protocols are efficient in the sense that they achieve full diversity (i.e., second-order diversity in the case of two terminals), and, moreover, are close to optimum (within 1.5 dB) in certain regimes. Thus, using distributed antennas, we can provide the powerful benefits of space diversity without need for physical arrays, though at a loss of spectral efficiency due to half-duplex operation and possibly at the cost of additional receive hardware. Applicable to any wireless setting, including cellular or ad hoc networks-wherever space constraints preclude the use of physical arrays-the performance characterizations reveal that large power or energy savings result from the use of these protocols.",
"We consider the content delivery problem in a fading multi-input single-output channel with cache-aided users. We are interested in the scalability of the equivalent content delivery rate when the number of users, @math , is large. Analytical results show that, using coded caching and wireless multicasting, without channel state information at the transmitter, linear scaling of the content delivery rate with respect to @math can be achieved in some different ways. First, if the multicast transmission spans over @math independent sub-channels, e.g., in quasi-static fading if @math , and in block fading or multi-carrier systems if @math , linear scaling can be obtained, when the product of the number of transmit antennas and the number of sub-channels scales logarithmically with @math . Second, even with a fixed number of antennas, we can achieve the linear scaling with a threshold-based user selection requiring only one-bit feedbacks from the users. When CSIT is available, we propose a mixed strategy that combines spatial multiplexing and multicasting. Numerical results show that, by optimizing the power split between spatial multiplexing and multicasting, we can achieve a significant gain of the content delivery rate with moderate cache size.",
"A multiple antenna broadcast channel (multiple transmit antennas, one antenna at each receiver) with imperfect channel state information available to the transmitter is considered. If perfect channel state information is available to the transmitter, then a multiplexing gain equal to the minimum of the number of transmit antennas and the number of receivers is achievable. On the other hand, if each receiver has identical fading statistics and the transmitter has no channel information, the maximum achievable multiplexing gain is only one. The focus of this paper is on determination of necessary and sufficient conditions on the rate at which CSIT quality must improve with SNR in order for full multiplexing gain to be achievable. The main result of the paper shows that scaling CSIT quality such that the CSIT error is dominated by the inverse of the SNR is both necessary and sufficient to achieve the full multiplexing gain as well as a bounded rate offset (i.e., the sum rate has no negative sub-logarithmic terms) in the compound channel setting.",
"In this work, we extend the nonorthogonal amplify-and-forward (NAF) cooperative diversity scheme to the multiple-input multiple-output (MIMO) channel. A family of space-time block codes for a half-duplex MIMO NAF fading cooperative channel with N relays is constructed. The code construction is based on the nonvanishing determinant (NVD) criterion and is shown to achieve the optimal diversity-multiplexing tradeoff (DMT) of the channel. We provide a general explicit algebraic construction, followed by some examples. In particular, in the single-relay case, it is proved that the Golden code and the 4times4 Perfect code are optimal for the single-antenna and two-antenna cases, respectively. Simulation results reveal that a significant gain (up to 10 dB) can be obtained with the proposed codes, especially in the single-antenna case",
"A wireless network with fading and a single source-destination pair is considered. The information reaches the destination via multiple hops through a sequence of layers of single-antenna relays. At high signal-to-noise ratio (SNR), the simple amplify-and-forward strategy is shown to be optimal in terms of degrees of freedom, because it achieves the degrees of freedom equal to a point-to-point multiple-input multiple-output (MIMO) system. Hence, the lack of coordination in relay nodes does not reduce the achievable degrees of freedom. The performance of this amplify-and-forward strategy degrades with increasing network size. This phenomenon is analyzed by finding the tradeoffs between network size, rate, and diversity. A lower bound on the diversity-multiplexing tradeoff for concatenation of multiple random Gaussian matrices is obtained. Also, it is shown that achievable network size in the outage formulation (short codes) is a lot smaller than the ergodic formulation (long codes).",
"Coding strategies that exploit node cooperation are developed for relay networks. Two basic schemes are studied: the relays decode-and-forward the source message to the destination, or they compress-and-forward their channel outputs to the destination. The decode-and-forward scheme is a variant of multihopping, but in addition to having the relays successively decode the message, the transmitters cooperate and each receiver uses several or all of its past channel output blocks to decode. For the compress-and-forward scheme, the relays take advantage of the statistical dependence between their channel outputs and the destination's channel output. The strategies are applied to wireless channels, and it is shown that decode-and-forward achieves the ergodic capacity with phase fading if phase information is available only locally, and if the relays are near the source node. The ergodic capacity coincides with the rate of a distributed antenna array with full cooperation even though the transmitting antennas are not colocated. The capacity results generalize broadly, including to multiantenna transmission with Rayleigh fading, single-bounce fading, certain quasi-static fading problems, cases where partial channel knowledge is available at the transmitters, and cases where local user cooperation is permitted. The results further extend to multisource and multidestination networks such as multiaccess and broadcast relay channels.",
"We consider a general multiple-antenna network with multiple sources, multiple destinations, and multiple relays in terms of the diversity-multiplexing tradeoff (DMT). We examine several subcases of this most general problem taking into account the processing capability of the relays (half-duplex or full-duplex), and the network geometry (clustered or nonclustered). We first study the multiple-antenna relay channel with a full-duplex relay to understand the effect of increased degrees of freedom in the direct link. We find DMT upper bounds and investigate the achievable performance of decode-and-forward (DF), and compress-and-forward (CF) protocols. Our results suggest that while DF is DMT optimal when all terminals have one antenna each, it may not maintain its good performance when the degrees of freedom in the direct link are increased, whereas CF continues to perform optimally. We also study the multiple-antenna relay channel with a half-duplex relay. We show that the half-duplex DMT behavior can significantly be different from the full-duplex case. We find that CF is DMT optimal for half-duplex relaying as well, and is the first protocol known to achieve the half-duplex relay DMT. We next study the multiple-access relay channel (MARC) DMT. Finally, we investigate a system with a single source-destination pair and multiple relays, each node with a single antenna, and show that even under the ideal assumption of full-duplex relays and a clustered network, this virtual multiple-input multiple-output (MIMO) system can never fully mimic a real MIMO DMT. For cooperative systems with multiple sources and multiple destinations the same limitation remains in effect."
]
} |
quant-ph0608199 | 2951547182 | Assume that two distant parties, Alice and Bob, as well as an adversary, Eve, have access to (quantum) systems prepared jointly according to a tripartite state. In addition, Alice and Bob can use local operations and authenticated public classical communication. Their goal is to establish a key which is unknown to Eve. We initiate the study of this scenario as a unification of two standard scenarios: (i) key distillation (agreement) from classical correlations and (ii) key distillation from pure tripartite quantum states. Firstly, we obtain generalisations of fundamental results related to scenarios (i) and (ii), including upper bounds on the key rate. Moreover, based on an embedding of classical distributions into quantum states, we are able to find new connections between protocols and quantities in the standard scenarios (i) and (ii). Secondly, we study specific properties of key distillation protocols. In particular, we show that every protocol that makes use of pre-shared key can be transformed into an equally efficient protocol which needs no pre-shared key. This result is of practical significance as it applies to quantum key distribution (QKD) protocols, but it also implies that the key rate cannot be locked with information on Eve's side. Finally, we exhibit an arbitrarily large separation between the key rate in the standard setting where Eve is equipped with quantum memory and the key rate in a setting where Eve is only given classical memory. This shows that assumptions on the nature of Eve's memory are important in order to determine the correct security threshold in QKD. | The first to spot a relation between the classical and the quantum development were Gisin and Wolf; in analogy to bound entanglement in quantum information theory, they conjectured the existence of bound information, namely classical correlation that can only be created from key but from which no key can be distilled @cite_37 . Their conjecture remains unsolved, but has stimulated the community in search of an answer. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2125303188"
],
"abstract": [
"as quantum engineering. In the past two decades it has become increasingly clear that many (perhaps all) of the symptoms of classicality can be induced in quantum systems by their environments. Thus decoherence is caused by the interaction in which the environment in effect monitors certain observables of the system, destroying coherence between the pointer states corresponding to their eigenvalues. This leads to environment-induced superselection or einselection, a quantum process associated with selective loss of information. Einselected pointer states are stable. They can retain correlations with the rest of the universe in spite of the environment. Einselection enforces classicality by imposing an effective ban on the vast majority of the Hilbert space, eliminating especially the flagrantly nonlocal ''Schrodinger-cat states.'' The classical structure of phase space emerges from the quantum Hilbert space in the appropriate macroscopic limit. Combination of einselection with dynamics leads to the idealizations of a point and of a classical trajectory. In measurements, einselection replaces quantum entanglement between the apparatus and the measured system with the classical correlation. Only the preferred pointer observable of the apparatus can store information that has predictive power. When the measured quantum system is microscopic and isolated, this restriction on the predictive utility of its correlations with the macroscopic apparatus results in the effective ''collapse of the wave packet.'' The existential interpretation implied by einselection regards observers as open quantum systems, distinguished only by their ability to acquire, store, and process information. Spreading of the correlations with the effectively classical pointer states throughout the environment allows one to understand ''classical reality'' as a property based on the relatively objective existence of the einselected states. Effectively classical pointer states can be ''found out'' without being re-prepared, e.g, by intercepting the information already present in the environment. The redundancy of the records of pointer states in the environment (which can be thought of as their ''fitness'' in the Darwinian sense) is a measure of their classicality. A new symmetry appears in this setting. Environment-assisted invariance or envariance sheds new light on the nature of ignorance of the state of the system due to quantum correlations with the environment and leads to Born's rules and to reduced density matrices, ultimately justifying basic principles of the program of decoherence and einselection."
]
} |
cs0608031 | 1573220314 | With the rapid spread of various mobile terminals in our society, the importance of secure positioning is growing for wireless networks in adversarial settings. Recently, several authors have proposed a secure positioning mechanism of mobile terminals which is based on the geometric property of wireless node placement, and on the postulate of modern physics that a propagation speed of information never exceeds the velocity of light. In particular, they utilize the measurements of the round-trip time of radio signal propagation and bidirectional communication for variants of the challenge-and-response. In this paper, we propose a novel means to construct the above mechanism by use of unidirectional communication instead of bidirectional communication. Our proposal is based on the assumption that a mobile terminal incorporates a high-precision inner clock in a tamper-resistant protected area. In positioning, the mobile terminal uses its inner clock and the time and location information broadcasted by radio from trusted stations. Our proposal has a major advantage in protecting the location privacy of mobile terminal users, because the mobile terminal need not provide any information to the trusted stations through positioning procedures. Besides, our proposal is free from the positioning error due to claimant's processing-time fluctuations in the challenge-and-response, and is well-suited for mobile terminals in the open air, or on the move at high speed, in terms of practical usage. We analyze the security, the functionality, and the feasibility of our proposal in comparison to previous proposals. | The secure positioning technique with RF mainly discussed in this paper was proposed in @cite_11 @cite_2 . Distance bounding protocols, which use bidirectional communication to upper-bound a claimant's distance, were first introduced in @cite_6 , and the proposal in @cite_2 is based on the protocols of @cite_6 . For easier implementation, a secure positioning technique with a distance bounding protocol using ultrasound and radio communication was proposed in @cite_4 , but it has a security vulnerability to the replay attack due to its use of ultrasound. In @cite_14 , a distance bounding protocol for RFID is proposed. The protocol uses duplex radio communication, and is designed to lessen the processing load on the RFID tag as far as possible. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_6",
"@cite_2",
"@cite_11"
],
"mid": [
"2907112888",
"2012557714",
"2962487379",
"2147008944",
"2034244766"
],
"abstract": [
"Previous research studies mostly focused on enhancing the security of radio frequency identification (RFID) protocols for various RFID applications that rely on a centralized database. However, blockchain technology is quickly emerging as a novel distributed and decentralized alternative that provides higher data protection, reliability, immutability, transparency, and lower management costs compared with a conventional centralized database. These properties make it extremely suitable for integration in a supply chain management system. In order to successfully fuse RFID and blockchain technologies together, a secure method of communication is required between the RFID tagged goods and the blockchain nodes. Therefore, this paper proposes a robust ultra-lightweight mutual authentication RFID protocol that works together with a decentralized database to create a secure blockchain-enabled supply chain management system. Detailed security analysis is performed to prove that the proposed protocol is secure from key disclosure, replay, man-in-the-middle, de-synchronization, and tracking attacks. In addition to that, a formal analysis is conducted using Gong, Needham, and Yahalom logic and automated validation of internet security protocols and applications tool to verify the security of the proposed protocol. The protocol is proven to be efficient with respect to storage, computational, and communication costs. In addition to that, a further step is taken to ensure the robustness of the protocol by analyzing the probability of data collision written to the blockchain.",
"Radio Frequency IDentification (RFID) technology has been adopted in many applications, such as inventory control, object tracking, theft prevention, and supply chain management. Privacy-preserving authentication in RFID systems is a very important problem. Existing protocols employ tree structures to achieve fast authentication. We observe that these protocols require a tag to transmit a large amount of data in each authentication, which costs significant bandwidth and energy overhead. Current protocols also impose heavy computational demand on the RFID reader. To address these issues, we design two privacy-preserving protocols based on a new technique called cryptographical encoding, which significantly reduces both authentication data transmitted by each tag and computation overhead incurred at the reader. Our analysis shows that the new protocols are able to reduce authentication data by more than an order of magnitude and reduce computational demand by about an order of magnitude, when comparing with the best existing protocol.",
"Abstract The safety of medical data and equipment plays a vital role in today’s world of Medical Internet of Things (MIoT). These IoT devices have many constraints (e.g., memory size, processing capacity, and power consumption) that make it challenging to use cost-effective and energy-efficient security solutions. Recently, researchers have proposed a few Radio-Frequency Identification (RFID) based security solutions for MIoT. The use of RFID technology in securing IoT systems is rapidly increasing because it provides secure and lightweight safety mechanisms for these systems. More recently, authors have proposed a lightweight RFID mutual authentication (LRMI) protocol. The authors argue that LRMI meets the necessary security requirements for RFID systems, and the same applies to MIoT applications as well. In this paper, our contribution has two-folds, firstly we analyze the LRMI protocol’s security to demonstrate that it is vulnerable to various attacks such as secret disclosure, reader impersonation, and tag traceability. Also, it is not able to preserve the anonymity of the tag and the reader. Secondly, we propose a new secure and lightweight mutual RFID authentication (SecLAP) protocol, which provides secure communication and preserves privacy in MIoT systems. Our security analysis shows that the SecLAP protocol is robust against de-synchronization, replay, reader tag impersonation, and traceability attacks, and it ensures forward and backward data communication security. We use Burrows-Abadi-Needham (BAN) logic to validate the security features of SecLAP. Moreover, we compare SecLAP with the state-of-the-art and validate its performance through a Field Programmable Gate Array (FPGA) implementation, which shows that it is lightweight, consumes fewer resources on tags concerning computation functions, and requires less number of flows.",
"In order to protect privacy, Radio Frequency Identification (RFID) systems employ Privacy-Preserving Authentication (PPA) to allow valid readers to explicitly authenticate their dominated tags without leaking private information. Typically, an RF tag sends an encrypted message to the reader, then the reader searches for the key that can decrypt the cipher to identify the tag. Due to the large-scale deployment of today's RFID systems, the key search scheme for any PPA requires a short response time. Previous designs construct balance-tree based key management structures to accelerate the search speed to O(logN), where N is the number of tags. Being efficient, such approaches are vulnerable to compromising attacks. By capturing a small number of tags, compromising attackers are able to identify other tags that have not been corrupted. To address this issue, we propose an Anti- Compromising authenticaTION protocol, ACTION, which employs a novel sparse tree architecture, such that the key of every tag is independent from one another. The advantages of this design include: 1) resilience to the compromising attack, 2) reduction of key storage for tags from O(logN) to O(1), which is significant for resource critical tag devices, and 3) high search efficiency, which is O(logN), as good as the best in the previous designs.",
"Radio Frequency Identification (RFID) technology has been widely used in inventory management in many scenarios, e.g., warehouses, retail stores, hospitals, etc. This paper investigates a challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academy and industry, the existing work can hardly satisfy the stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving the improved time-efficiency. The theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to the multi-reader scenarios. The extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP) by reducing nearly 45 of the required execution time, and is just within a factor of 1.18 from the lower bound of the minimum execution time."
]
} |
cs0608031 | 1573220314 | With the rapid spread of various mobile terminals in our society, the importance of secure positioning is growing for wireless networks in adversarial settings. Recently, several authors have proposed a secure positioning mechanism of mobile terminals which is based on the geometric property of wireless node placement, and on the postulate of modern physics that a propagation speed of information never exceeds the velocity of light. In particular, they utilize the measurements of the round-trip time of radio signal propagation and bidirectional communication for variants of the challenge-and-response. In this paper, we propose a novel means to construct the above mechanism by use of unidirectional communication instead of bidirectional communication. Our proposal is based on the assumption that a mobile terminal incorporates a high-precision inner clock in a tamper-resistant protected area. In positioning, the mobile terminal uses its inner clock and the time and location information broadcasted by radio from trusted stations. Our proposal has a major advantage in protecting the location privacy of mobile terminal users, because the mobile terminal need not provide any information to the trusted stations through positioning procedures. Besides, our proposal is free from the positioning error due to claimant's processing-time fluctuations in the challenge-and-response, and is well-suited for mobile terminals in the open air, or on the move at high speed, in terms of practical usage. We analyze the security, the functionality, and the feasibility of our proposal in comparison to previous proposals. | The protocol called Temporal Leashes is proposed in @cite_0 for detection of the specific attack called the wormhole attack. The protocol detects the attack by checking the packet transmission time measured by tightly synchronized clocks of a sender and a receiver. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2157921329"
],
"abstract": [
"As mobile ad hoc network applications are deployed, security emerges as a central requirement. In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has not compromised any hosts, and even if all communication provides authenticity and confidentiality. In the wormhole attack, an attacker records packets (or bits) at one location in the network, tunnels them (possibly selectively) to another location, and retransmits them there into the network. The wormhole attack can form a serious threat in wireless networks, especially against many ad hoc network routing protocols and location-based wireless security systems. For example, most existing ad hoc network routing protocols, without some mechanism to defend against the wormhole attack, would be unable to find routes longer than one or two hops, severely disrupting communication. We present a general mechanism, called packet leashes, for detecting and, thus defending against wormhole attacks, and we present a specific protocol, called TIK, that implements leashes. We also discuss topology-based wormhole detection, and show that it is impossible for these approaches to detect some wormhole topologies."
]
} |
cs0608031 | 1573220314 | With the rapid spread of various mobile terminals in our society, the importance of secure positioning is growing for wireless networks in adversarial settings. Recently, several authors have proposed a secure positioning mechanism of mobile terminals which is based on the geometric property of wireless node placement, and on the postulate of modern physics that a propagation speed of information never exceeds the velocity of light. In particular, they utilize the measurements of the round-trip time of radio signal propagation and bidirectional communication for variants of the challenge-and-response. In this paper, we propose a novel means to construct the above mechanism by use of unidirectional communication instead of bidirectional communication. Our proposal is based on the assumption that a mobile terminal incorporates a high-precision inner clock in a tamper-resistant protected area. In positioning, the mobile terminal uses its inner clock and the time and location information broadcasted by radio from trusted stations. Our proposal has a major advantage in protecting the location privacy of mobile terminal users, because the mobile terminal need not provide any information to the trusted stations through positioning procedures. Besides, our proposal is free from the positioning error due to claimant's processing-time fluctuations in the challenge-and-response, and is well-suited for mobile terminals in the open air, or on the move at high speed, in terms of practical usage. We analyze the security, the functionality, and the feasibility of our proposal in comparison to previous proposals. | On the other hand, there are location verification protocols which substantially make use of the physical properties of broadcasted radio waves @cite_8 @cite_15 . The proposal in @cite_8 depends on the intensity and the directivity of broadcasted radio waves for location verification. The proposal in @cite_15 , which uses duplex radio communication, assumes spatially isotropic propagation of radio waves from the mobile terminal's omni-directional antenna, and uses the resulting geometric relation for location verification. However, both proposals have a security vulnerability to malicious modification of the assumed physical properties of radio waves. There are many possible ways, especially for a mobile terminal user, to carry out such physical modification of radio waves, e.g., by fraudulently using a directional antenna for the mobile terminal, or by surrounding the mobile terminal with carefully chosen mediums or materials. | {
"cite_N": [
"@cite_15",
"@cite_8"
],
"mid": [
"624827785",
"2115735006"
],
"abstract": [
"We propose and investigate a compressive architecture for estimation and tracking of sparse spatial channels in millimeter (mm) wave picocellular networks. The base stations are equipped with antenna arrays with a large number of elements (which can fit within compact form factors because of the small carrier wavelength) and employ radio frequency (RF) beamforming, so that standard least squares adaptation techniques (which require access to individual antenna elements) are not applicable. We focus on the downlink, and show that “compressive beacons,” transmitted using pseudorandom phase settings at the base station array, and compressively processed using pseudorandom phase settings at the mobile array, provide information sufficient for accurate estimation of the two-dimensional (2D) spatial frequencies associated with the directions of departure of the dominant rays from the base station, and the associated complex gains. This compressive approach is compatible with coarse phase-only control, and is based on a near-optimal sequential algorithm for frequency estimation which approaches the Cramer Rao Lower Bound. The algorithm exploits the geometric continuity of the channel across successive beaconing intervals to reduce the overhead to less than 1 even for very large ( @math ) arrays. Compressive beaconing is essentially omnidirectional, and hence does not enjoy the SNR and spatial reuse benefits of beamforming obtained during data transmission. We therefore discuss system level design considerations for ensuring that the beacon SNR is sufficient for accurate channel estimation, and that inter-cell beacon interference is controlled by an appropriate reuse scheme.",
"The wireless medium contains domain-specific information that can be used to complement and enhance traditional security mechanisms. In this paper we propose ways to exploit the spatial variability of the radio channel response in a rich scattering environment, as is typical of indoor environments. Specifically, we describe a physical-layer authentication algorithm that utilizes channel probing and hypothesis testing to determine whether current and prior communication attempts are made by the same transmit terminal. In this way, legitimate users can be reliably authenticated and false users can be reliably detected. We analyze the ability of a receiver to discriminate between transmitters (users) according to their channel frequency responses. This work is based on a generalized channel response with both spatial and temporal variability, and considers correlations among the time, frequency and spatial domains. Simulation results, using the ray-tracing tool WiSE to generate the time-averaged response, verify the efficacy of the approach under realistic channel conditions, as well as its capability to work under unknown channel variations."
]
} |
cs0608051 | 2951605536 | Inspired by the classical theory of modules over a monoid, we give a first account of the natural notion of module over a monad. The associated notion of morphism of left modules ("Linear" natural transformations) captures an important property of compatibility with substitution, in the heterogeneous case where "terms" and variables therein could be of different types as well as in the homogeneous case. In this paper, we present basic constructions of modules and we show examples concerning in particular abstract syntax and lambda-calculus. | We have introduced the notion of module over a monad, and more importantly the notion of linearity for transformations among such modules and we have tried to show that this notion is ubiquitous as soon as syntax and semantics are concerned. Our thesis is that the point of view of modules opens some new room for initial algebra semantics, as we sketched for typed @math -calculus (see also @cite_5 ). | {
"cite_N": [
"@cite_5"
],
"mid": [
"2884913901"
],
"abstract": [
"This paper addresses the semantics of weighted argumentation graphs that are bipolar, i.e. contain both attacks and supports for arguments. It builds on previous work by Amgoud, Ben-Naim et. al. We study the various characteristics of acceptability semantics that have been introduced in these works, and introduce the notion of a modular acceptability semantics. A semantics is modular if it cleanly separates aggregation of attacking and supporting arguments (for a given argument @math ) from the computation of their influence on @math 's initial weight. We show that the various semantics for bipolar argumentation graphs from the literature may be analysed as a composition of an aggregation function with an influence function. Based on this modular framework, we prove general convergence and divergence theorems. We demonstrate that all well-behaved modular acceptability semantics converge for all acyclic graphs and that no sum-based semantics can converge for all graphs. In particular, we show divergence of Euler-based semantics () for certain cyclic graphs. Further, we provide the first semantics for bipolar weighted graphs that converges for all graphs."
]
} |
cs0607040 | 1663216048 | This paper describes the development of the PALS system, an implementation of Prolog capable of efficiently exploiting or-parallelism on distributed-memory platforms--specifically Beowulf clusters. PALS makes use of a novel technique, called incremental stack-splitting. The technique proposed builds on the stack-splitting approach, previously described by the authors and experimentally validated on shared-memory systems, which in turn is an evolution of the stack-copying method used in a variety of parallel logic and constraint systems--e.g., MUSE, YAP, and Penny. The PALS system is the first distributed or-parallel implementation of Prolog based on the stack-splitting method ever realized. The results presented confirm the superiority of this method as a simple yet effective technique to transition from shared-memory to distributed-memory systems. PALS extends stack-splitting by combining it with incremental copying; the paper provides a description of the implementation of PALS, including details of how distributed scheduling is handled. We also investigate methodologies to effectively support order-sensitive predicates (e.g., side-effects) in the context of the stack-splitting scheme. Experimental results obtained from running PALS on both Shared Memory and Beowulf systems are presented and analyzed. | A rich body of research has been developed to investigate methodologies for the exploitation of or-parallelism from Prolog executions on SMPs. Comprehensive surveys describing and comparing these methodologies have appeared, e.g., @cite_44 @cite_32 @cite_12 . | {
"cite_N": [
"@cite_44",
"@cite_32",
"@cite_12"
],
"mid": [
"22168010",
"1574126082",
"27834061"
],
"abstract": [
"We collected a corpus of parallel text in 11 languages from the proceedings of the European Parliament, which are published on the web1. This corpus has found widespread use in the NLP community. Here, we focus on its acquisition and its application as training data for statistical machine translation (SMT). We trained SMT systems for 110 language pairs, which reveal interesting clues into the challenges ahead.",
"Despite significant recent work, purely unsupervised techniques for part-of-speech (POS) tagging have not achieved useful accuracies required by many language processing tasks. Use of parallel text between resource-rich and resource-poor languages is one source of weak supervision that significantly improves accuracy. However, parallel text is not always available and techniques for using it require multiple complex algorithmic steps. In this paper we show that we can build POS-taggers exceeding state-of-the-art bilingual methods by using simple hidden Markov models and a freely available and naturally growing resource, the Wiktionary. Across eight languages for which we have labeled data to evaluate results, we achieve accuracy that significantly exceeds best unsupervised and parallel text methods. We achieve highest accuracy reported for several languages and show that our approach yields better out-of-domain taggers than those trained using fully supervised Penn Treebank.",
"From the Publisher: Multiprocessor Execution of Logic Programs addresses the problem of efficient implementation of Logic Programming Languages, specifically Prolog, on multiprocessor architectures. The approaches and implementations developed attempt to take full advantage of sequential implementation technology developed for Prolog (such as the WAM) while exploiting all forms of control parallelism present in Logic Programs, namely, or-parallelism, independent and-parallelism and dependent and-parallelism. Coverage includes a thorough survey of parallel implementation techniques and parallel systems developed for Prolog. Multiprocessor Execution of Logic Programs will be useful for people implementing parallel logic programming systems, parallel symbolic systems Parallel AI systems, and parallel theorem proving systems. This work will also be useful to people who wish to learn about implementation of parallel logic programming systems."
]
} |
cs0607040 | 1663216048 | This paper describes the development of the PALS system, an implementation of Prolog capable of efficiently exploiting or-parallelism on distributed-memory platforms--specifically Beowulf clusters. PALS makes use of a novel technique, called incremental stack-splitting. The technique proposed builds on the stack-splitting approach, previously described by the authors and experimentally validated on shared-memory systems, which in turn is an evolution of the stack-copying method used in a variety of parallel logic and constraint systems--e.g., MUSE, YAP, and Penny. The PALS system is the first distributed or-parallel implementation of Prolog based on the stack-splitting method ever realized. The results presented confirm the superiority of this method as a simple yet effective technique to transition from shared-memory to distributed-memory systems. PALS extends stack-splitting by combining it with incremental copying; the paper provides a description of the implementation of PALS, including details of how distributed scheduling is handled. We also investigate methodologies to effectively support order-sensitive predicates (e.g., side-effects) in the context of the stack-splitting scheme. Experimental results obtained from running PALS on both Shared Memory and Beowulf systems are presented and analyzed. | A theoretical analysis of the properties of different methodologies has been presented in @cite_33 @cite_15 . These works provide an abstraction of the environment representation problem as a data structure problem on dynamic trees. These studies identify the presence of unavoidable overheads in the dynamic management of environments in a parallel setting, and recognize methods with constant-time environment creation and access as optimal methods for environment representation. Methods such as stack-copying @cite_14 , binding arrays @cite_25 , and recomputation @cite_30 meet such requirements. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_33",
"@cite_15",
"@cite_25"
],
"mid": [
"2083878525",
"1866018165",
"2061213370",
"2018331535",
"2079366392"
],
"abstract": [
"In standard control-flow analyses for higher-order languages, a single abstract binding for a variable represents a set of exact bindings, and a single abstract reference cell represents a set of exact reference cells. While such analyses provide useful may-alias information, they are unable to answer mustalias questions about variables and cells, as these questions ask about equality of specific bindings and references.In this paper, we present a novel program analysis for higher-order languages that answers must-alias questions. At every program point, the analysis associates with each variable and abstract cell a cardinality, which is either single or multiple. If variable x is single at program point p, then all bindings for x in the heap reachable from the environment at p hold the same value. If abstract cell r is single at p, then at most one exact cell corresponding to r is reachable from the environment at p.Must-alias information facilitates various program optimizations such as lightweight closure conversion [19]. In addition, must-alias information permits analyses to perform strong updates [3] on abstract reference cells known to be single. Strong updates improve analysis precision for programs that make significant use of state.A prototype implementation of our analysis yields encouraging results. Over a range of benchmarks, our analysis classifies a large majority of the variables as single.",
"Despite the performance potential of multicomputers, several factors have limited their widespread adoption. Of these, performance variability is among the most significant. Execution of some programs may yield only a small fraction of peak system performance, whereas others approach the system's theoretical performance peak. Moreover, the observed performance may change substantially as application program parameters vary. Data parallel languages, which facilitate the programming of multicomputers, increase the semantic distance between the program's source code and its observable performance, thus aggravating the performance problem. In this thesis, we propose a new methodology to predict the performance scalability of data parallel applications on multicomputers. Our technique represents the execution time of a program as a symbolic expression that is a function of the number of processors (P), problem size (N), and other system-dependent parameters. This methodology is based on information collected at compile time. By extending an existing data parallel compiler (Fortran D95), we derive, during compilation, a symbolic model that represents the cost of each high-level program section and, inductively, of the complete program. These symbolic expressions may be simplified externally with current symbolic tools. Predicting performance of the program for a given pair @math requires simply the evaluation of its corresponding cost expression. We validate our implementation by predicting scalability of a variety of loop nests, with distinct computation and communication patterns. To demonstrate the applicability of our technique, we present a series of concrete performance problems where it was successfully employed: prediction of total execution time, identification and tracking of bottlenecks, cross-system prediction, and evaluation of code transformations. These examples show that the technique would be useful both to users, in optimizing and tuning their programs, and to advanced compilers, which would have a means to evaluate the expected performance of a synthesized code. According to the results of our study, by integrating compilation, performance analysis and symbolic manipulation tools, it is possible to correctly predict, in an automated fashion, the major performance variations of a data parallel program written in a high-level language.",
"In programming languages with dynamic use of memory, such as Java, knowing that a reference variable x points to an acyclic data structure is valuable for the analysis of termination and resource usage (e.g., execution time or memory consumption). For instance, this information guarantees that the depth of the data structure to which x points is greater than the depth of the data structure pointed to by x.f for any field f of x. This, in turn, allows bounding the number of iterations of a loop which traverses the structure by its depth, which is essential in order to prove the termination or infer the resource usage of the loop. The present paper provides an Abstract-Interpretation-based formalization of a static analysis for inferring acyclicity, which works on the reduced product of two abstract domains: reachability, which models the property that the location pointed to by a variable w can be reached by dereferencing another variable v (in this case, v is said to reach w); and cyclicity, modeling the property that v can point to a cyclic data structure. The analysis is proven to be sound and optimal with respect to the chosen abstraction.",
"The single most serious issue in the development of a parallel implementation of non-deterministic programming languages and systems (e.g., logic programming, constraint programming, search-based artificial intelligence systems) is the dynamic management of the binding environments-i.e., the ability to associate with each parallel computation the correct set of bindings values representing the solution generated by that particular branch of the non-deterministic computation. The problem has been abstracted and formally studied previously (ACM Trans. Program. Lang. Syst. 15(4) (1993) 659; New Generation Comput. 17(3) (1999) 285), but to date only relatively inefficient data structures (ACM Trans. Program. Lang. Syst. (2002); New Generation Comput. 17(3) (1999) 285; J. Funct. Logic Program. Special issue #1 (1999)) have been developed to solve it. We provide a very efficient solution to the problem (O(lgn) per operation). This is a significant improvement over previously best known @W(n3) solution. Our solution is provably optimal for the pointer machine model. We also show how the solution can be extended to handle the abstraction of search problems in object-oriented systems, with the same time complexity.",
"This paper introduces dynamic object colocation, an optimization to reduce copying costs in generational and other incremental garbage collectors by allocating connected objects together in the same space. Previous work indicates that connected objects belong together because they often have similar lifetimes. Generational collectors, however, allocate all new objects in a nursery space. If these objects are connected to data structures residing in the mature space, the collector must copy them. Our solution is a cooperative optimization that exploits compiler analysis to make runtime allocation decisions. The compiler analysis discovers potential object connectivity for newly allocated objects. It then replaces these allocations with calls to coalloc, which takes an extra parameter called the colocator object. At runtime, coalloc determines the location of the colocator and allocates the new object together with it in either the nursery or mature space. Unlike pretenuring, colocation makes precise per-object allocation decisions and does not require lifetime analysis or allocation site homogeneity. Experimental results for SPEC Java benchmarks using Jikes RVM show colocation can reduce garbage collection time by 50 to 75 , and total performance by up to 1 ."
]
} |
cs0607079 | 2086126338 | The length-based approach is a heuristic for solving randomly generated equations in groups that possess a reasonably behaved length function. We describe several improvements of the previously suggested length-based algorithms, which make them applicable to Thompson's group with significant success rates. In particular, this shows that the Shpilrain-Ushakov public key cryp- tosystem based on Thompson's group is insecure, and suggests that no practical public key cryp- tosystem based on the difficulty of solving an equation in this group can be secure. | While we were finalizing our paper for publication, a very elegant specialized attack on the same cryptosystem was announced by Matucci @cite_15 . The main contribution of the present paper is thus the generalization of the length-based algorithms to make them applicable to a wider class of groups. Moreover, while our general attack can be easily adapted to other possible cryptosystems based on Thompson's group, this may not be the case for Matucci's specialized methods. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2765992693"
],
"abstract": [
"In this paper, we propose an algorithm for constructing guess-and-determine attacks on keystream generators and apply it to the cryptanalysis of the alternating step generator (ASG) and two its modifications (MASG and MASG0). In a guess-and-determine attack, we first “guess” some part of an initial state and then apply some procedure to determine, if the guess was correct and we can use the guessed information to solve the problem, thus performing an exhaustive search over all possible assignments of bits forming a chosen part of an initial state. We propose to use in the “determine” part the algorithms for solving Boolean satisfiability problem (SAT). It allows us to consider sets of bits with nontrivial structure. For each such set it is possible to estimate the runtime of a corresponding guess-and-determine attack via the Monte-Carlo method, so we can search for a set of bits yielding the best attack via a black-box optimization algorithm augmented with several SAT-specific features. We constructed and implemented such attacks on ASG, MASG, and MASG0 to prove that the constructed runtime estimations are reliable. We show, that the constructed attacks are better than the trivial ones, which imply exhaustive search over all possible states of the control register, and present the results of experiments on cryptanalysis of ASG and MASG MASG0 with total registers length of 72 and 96, which have not been previously published in the literature."
]
} |
cs0606044 | 2951892051 | In set-system auctions , there are several overlapping teams of agents, and a task that can be completed by any of these teams. The buyer's goal is to hire a team and pay as little as possible. Recently, Karlin, Kempe and Tamir introduced a new definition of frugality ratio for this setting. Informally, the frugality ratio is the ratio of the total payment of a mechanism to perceived fair cost. In this paper, we study this together with alternative notions of fair cost, and how the resulting frugality ratios relate to each other for various kinds of set systems. We propose a new truthful polynomial-time auction for the vertex cover problem (where the feasible sets correspond to the vertex covers of a given graph), based on the local ratio algorithm of Bar-Yehuda and Even. The mechanism guarantees to find a winning set whose cost is at most twice the optimal. In this situation, even though it is NP-hard to find a lowest-cost feasible set, we show that local optimality of a solution can be used to derive frugality bounds that are within a constant factor of best possible. To prove this result, we use our alternative notions of frugality via a bootstrapping technique, which may be of independent interest. | Vertex-cover auctions have been studied in the past by Talwar @cite_9 and Calinescu @cite_3 . Both of these papers are based on the definition of frugality ratio used in @cite_5 ; as mentioned before, this means that their results only apply to bipartite graphs. Talwar @cite_9 shows that the frugality ratio of VCG is at most @math . However, since finding the cheapest vertex cover is an NP-hard problem, the VCG mechanism is computationally infeasible. The first (and, to the best of our knowledge, only) paper to investigate polynomial-time truthful mechanisms for vertex cover is @cite_3 . That paper studies an auction that is based on the greedy allocation algorithm, which has an approximation ratio of @math . While the main focus of @cite_3 is the more general set cover problem, the results of @cite_3 imply a frugality ratio of @math for vertex cover. Our results improve on those of @cite_9 as our mechanism is polynomial-time computable, as well as on those of @cite_3 , as our mechanism has a better approximation ratio, and we prove a stronger bound on the frugality ratio; moreover, this bound also applies to the mechanism of @cite_3 . | {
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_3"
],
"mid": [
"2046744707",
"2057596055",
"2084685902"
],
"abstract": [
"In a STACS 2003 paper, Talwar analyzes the overpayment the VCG mechanism incurs for ensuring truthfulness in auctions. Among other results, he studies k-Set Cover (given a universe U and a collection of sets S 1 , S 2 , ? , S m , each having a cost c ( S i ) and at most k elements of U, find a minimum cost subcollection - a cover - whose union equals U) and shows that the payment of the VCG mechanism is at most k ? c ( OPT ' ) , where OPT ' is the best cover disjoint from the optimum cover OPT . The VCG mechanism requires finding an optimum cover. For k ? 3 , k-Set Cover is known to be NP-hard, and thus truthful mechanisms based on approximation algorithms are desirable. We show that the payment incurred by two mechanisms based on approximation algorithms (including the Greedy algorithm) is bounded by ( k - 1 ) c ( OPT ) + k ? c ( OPT ' ) . The same approximation algorithms have payment bounded by k ( c ( OPT ) + c ( OPT ' ) ) when applied to more general set systems, which include k-Polymatroid Cover, a problem related to Steiner Tree computations. If q is such that an element in a k-Set Cover instance appears in at most q sets, we show that the total payment based on our algorithms is bounded by q ? k 2 times the total payment of the VCG mechanism.",
"The cover time of a graph is a celebrated example of a parameter that is easy to approximate using a randomized algorithm, but for which no constant factor deterministic polynomial time approximation is known. A breakthrough due to Kahn, Kim, Lovasz and Vu [25] yielded a (log logn)2 polynomial time approximation. We refine the upper bound of [25], and show that the resulting bound is sharp and explicitly computable in random graphs. Cooper and Frieze showed that the cover time of the largest component of the Erdős–Renyi random graph G(n, c n) in the supercritical regime with c > 1 fixed, is asymptotic to ϕ(c)nlog2n, where ϕ(c) → 1 as c ↓ 1. However, our new bound implies that the cover time for the critical Erdős–Renyi random graph G(n, 1 n) has order n, and shows how the cover time evolves from the critical window to the supercritical phase. Our general estimate also yields the order of the cover time for a variety of other concrete graphs, including critical percolation clusters on the Hamming hypercube 0, 1 n, on high-girth expanders, and on tori ℤdn for fixed large d. This approach also gives a simpler proof of a result of Aldous [2] that the cover time of a uniform labelled tree on k vertices is of order k3 2. For the graphs we consider, our results show that the blanket time, introduced by Winkler and Zuckerman [45], is within a constant factor of the cover time. Finally, we prove that for any connected graph, adding an edge can increase the cover time by at most a factor of 4.",
"We revisit the classic problem of fair division from a mechanism design perspective and provide an elegant truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution of Proportional Fairness. This solution, which is closely related to Nash bargaining and the competitive equilibrium, is known to be not implementable in a truthful fashion, which has been its main drawback. To alleviate this issue, we propose a new mechanism, which we call the Partial Allocation mechanism, that discards a carefully chosen fraction of the allocated resources in order to incentivize the agents to be truthful in reporting their valuations. This mechanism introduces a way to implement interesting truthful outcomes in settings where monetary payments are not an option. For a multi-dimensional domain with an arbitrary number of agents and items, and for the very large class of homogeneous valuation functions, we prove that our mechanism provides every agent with at least a 1 e ≈ 0.368 fraction of her Proportionally Fair valuation. To the best of our knowledge, this is the first result that gives a constant factor approximation to every agent for the Proportionally Fair solution. To complement this result, we show that no truthful mechanism can guarantee more than 0.5 approximation, even for the restricted class of additive linear valuations. In addition to this, we uncover a connection between the Partial Allocation mechanism and VCG-based mechanism design. We also ask whether better approximation ratios are possible in more restricted settings. In particular, motivated by the massive privatization auction in the Czech republic in the early 90s we provide another mechanism for additive linear valuations that works really well when all the items are highly demanded."
]
} |
cs0606124 | 1498463151 | In some applications of matching, the structural or hierarchical properties of the two graphs being aligned must be maintained. The hierarchical properties are induced by the direction of the edges in the two directed graphs. These structural relationships defined by the hierarchy in the graphs act as a constraint on the alignment. In this paper, we formalize the above problem as the weighted alignment between two directed acyclic graphs. We prove that this problem is NP-complete, show several upper bounds for approximating the solution, and finally introduce polynomial time algorithms for sub-classes of directed acyclic graphs. | Both of these problems have many practical applications, in particular, graph isomorphism has received a lot of attention in the area of computer vision. Images or objects can be represented as a graph. A weighted graph can be used to formulate a structural description of an object @cite_24 . There have been two main approaches to solving graph isomorphism: state--space construction with searching and nonlinear optimization. The first method consists of building the state--space, which can then be searched. This method has an exponential running time in the worst case scenario, but by employing heuristics, the search can be reduced to a low--order polynomial for many types of graphs @cite_20 @cite_9 . With the second approach (nonlinear optimization), the most successful approaches have been relaxation labeling @cite_27 , neural networks @cite_18 , linear programming @cite_8 , eigendecomposition @cite_0 , genetic algorithms @cite_4 , and Lagrangian relaxation @cite_21 . | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_24",
"@cite_0",
"@cite_27",
"@cite_20"
],
"mid": [
"1967974926",
"2046021036",
"2006836707",
"2409645877",
"2795050613",
"2097746044",
"1799262171",
"2143163611",
"2163249222"
],
"abstract": [
"A generalization of subgraph isomorphism for the fault-tolerant interpretation of disturbed line images has been achieved. Object recognition is effected by optimal matching of a reference graph to the graph of a distorted image. This optimization is based on the solution of linear and quadratic assignment problems. The efficiency of the procedures developed for this objective has been proved in practical applications. NP-complete problems such as subgraph recognition need exhaustive computation if exact (branch-and-bound) algorithms are used. In contrast to this, heuristics are very fast and sufficiently reliable for less complex relational structures of the kind investigated in the first part of this paper. Constrained continuous optimization techniques, such as relaxation labeling and neural network strategies, solve recognition problems within a reasonable time, even in rather complex relational structures where heuristics can fail. They are also well suited to parallelism. The second part of this paper is devoted exclusively to them.",
"The isomorphism problem for graphs G 1 and G 2 is to determine if there exists a one-to-one mapping of the vertices of G 1 onto the vertices of G 2 such that two vertices of G 1 are adjacent if and only if their images in G 2 are adjacent. In addition to determining the existence of such an isomorphism, it is useful to be able to produce an isomorphism-inducing mapping in the case where one exists. The isomorphism problem for triconnected planar graphs is particularly simple since a triconnected planar graph has a unique embedding on a sphere [6]. Weinberg [5] exploited this fact in developing an algorithm for testing isomorphism of triconnected planar graphs in O(|V| 2 ) time where V is the set consisting of the vertices of both graphs. The result has been extended to arbitrary planar graphs and improved to O(|V|log|V|) steps by Hopcroft and Tarjan [2,3]. In this paper, the time bound for planar graph isomorphism is improved to O(|V|). In addition to determining the isomorphism of two planar graphs, the algorithm can be easily extended to partition a set of planar graphs into equivalence classes of isomorphic graphs in time linear in the total number of vertices in all graphs in the set. A random access model of computation (see Cook [1]) is assumed. Although the proposed algorithm has a linear asymptotic growth rate, at the present stage of development it appears to be inefficient on account of a rather large constant. This paper is intended only to establish the existence of a linear algorithm which subsequent work might make truly efficient.",
"A Lagrangian relaxation network for graph matching is presented. The problem is formulated as follows: given graphs G and g, find a permutation matrix M that brings the two sets of vertices into correspondence. Permutation matrix constraints are formulated in the framework of deterministic annealing. Our approach is in the same spirit as a Lagrangian decomposition approach in that the row and column constraints are satisfied separately with a Lagrange multiplier used to equate the two \"solutions\". Due to the unavoidable symmetries in graph isomorphism (resulting in multiple global minima), we add a symmetry-breaking self-amplification term in order to obtain a permutation matrix. With the application of a fixpoint preserving algebraic transformation to both the distance measure and self-amplification terms, we obtain a Lagrangian relaxation network. The network performs minimization with respect to the Lagrange parameters and maximization with respect to the permutation matrix variables. Simulation results are shown on 100 node random graphs and for a wide range of connectivities.",
"We show that the Graph Isomorphism (GI) problem and the more general problems of String Isomorphism (SI) andCoset Intersection (CI) can be solved in quasipolynomial(exp((logn)O(1))) time. The best previous bound for GI was exp(O( √n log n)), where n is the number of vertices (Luks, 1983); for the other two problems, the bound was similar, exp(O (√ n)), where n is the size of the permutation domain (Babai, 1983). Following the approach of Luks’s seminal 1980 82 paper, the problem we actually address is SI. This problem takes two strings of length n and a permutation group G of degree n (the “ambient group”) as input (G is given by a list of generators) and asks whether or not one of the strings can be transformed into the other by some element of G. Luks’s divide-and-conquer algorithm for SI proceeds by recursion on the ambient group. We build on Luks’s framework and attack the obstructions to efficient Luks recurrence via an interplay between local and global symmetry. We construct group theoretic “local certificates” to certify the presence or absence of local symmetry, aggregate the negative certificates to canonical k-ary relations where k = O(log n), and employ combinatorial canonical partitioning techniques to split the k-ary relational structure for efficient divide-and- conquer. We show that in a well–defined sense, Johnson graphs are the only obstructions to effective canonical partitioning. The central element of the algorithm is the “local certificates” routine which is based on a new group theoretic result, the “Unaffected stabilizers lemma,” that allows us to construct global automorphisms out of local information.",
"The subgraph isomorphism problem involves deciding whether a copy of a pattern graph occurs inside a larger target graph. The non-induced version allows extra edges in the target, whilst the induced version does not. Although both variants are NP-complete, algorithms inspired by constraint programming can operate comfortably on many real-world problem instances with thousands of vertices. However, they cannot handle arbitrary instances of this size. We show how to generate \" really hard \" random instances for subgraph isomorphism problems, which are computationally challenging with a couple of hundred vertices in the target, and only twenty pattern vertices. For the non-induced version of the problem, these instances lie on a satisfiable unsatisfiable phase transition, whose location we can predict; for the induced variant, much richer behaviour is observed, and constrained-ness gives a better measure of difficulty than does proximity to a phase transition. These results have practical consequences: we explain why the widely researched \" filter verify \" indexing technique used in graph databases is founded upon a misunderstanding of the empirical hardness of NP-complete problems, and cannot be beneficial when paired with any reasonable subgraph isomorphism algorithm.",
"In this paper, we propose a general framework for graph matching which is suitable for different problems of pattern recognition. The pattern representation we assume is at the same time highly structured, like for classic syntactic and structural approaches, and of subsymbolic nature with real-valued features, like for connectionist and statistic approaches. We show that random walk based models, inspired by Google's PageRank, give rise to a spectral theory that nicely enhances the graph topological features at node level. As a straightforward consequence, we derive a polynomial algorithm for the classic graph isomorphism problem, under the restriction of dealing with Markovian spectrally distinguishable graphs (MSD), a class of graphs that does not seem to be easily reducible to others proposed in the literature. The experimental results that we found on different test-beds of the TC-15 graph database show that the defined MSD class \"almost always\" covers the database, and that the proposed algorithm is significantly more efficient than top scoring VF algorithm on the same data. Most interestingly, the proposed approach is very well-suited for dealing with partial and approximate graph matching problems, derived for instance from image retrieval tasks. We consider the objects of the COIL-100 visual collection and provide a graph-based representation, whose node's labels contain appropriate visual features. We show that the adoption of classic bipartite graph matching algorithms offers a straightforward generalization of the algorithm given for graph isomorphism and, finally, we report very promising experimental results on the COIL-100 visual collection.",
"Given two graphs @math and @math , the Subgraph Isomorphism problem asks if @math is isomorphic to a subgraph of @math . While NP-hard in general, algorithms exist for various parameterized versions of the problem: for example, the problem can be solved (1) in time @math using the color-coding technique of Alon, Yuster, and Zwick; (2) in time @math using Courcelle's Theorem; (3) in time @math using a result on first-order model checking by Frick and Grohe; or (4) in time @math for connected @math using the algorithm of Matou s ek and Thomas. Already this small sample of results shows that the way an algorithm can depend on the parameters is highly nontrivial and subtle. We develop a framework involving 10 relevant parameters for each of @math and @math (such as treewidth, pathwidth, genus, maximum degree, number of vertices, number of components, etc.), and ask if an algorithm with running time [ f_1(p_1,p_2,..., p_ ) n^ f_2(p_ +1 ,..., p_k) ] exist, where each of @math is one of the 10 parameters depending only on @math or @math . We show that all the questions arising in this framework are answered by a set of 11 maximal positive results (algorithms) and a set of 17 maximal negative results (hardness proofs); some of these results already appear in the literature, while others are new in this paper. On the algorithmic side, our study reveals for example that an unexpected combination of bounded degree, genus, and feedback vertex set number of @math gives rise to a highly nontrivial algorithm for Subgraph Isomorphism. On the hardness side, we present W[1]-hardness proofs under extremely restricted conditions, such as when @math is a bounded-degree tree of constant pathwidth and @math is a planar graph of bounded pathwidth.",
"The subgraph isomorphism problem involves deciding if there exists a copy of a pattern graph in a target graph. This problem may be solved by a complete tree search combined with filtering techniques that aim at pruning branches that do not contain solutions. We introduce a new filtering algorithm based on local all different constraints. We show that this filtering is stronger than other existing filterings - i.e., it prunes more branches - and that it is also more efficient - i.e., it allows one to solve more instances quicker.",
"A large number of problems arising in computer vision can be reduced to the problem of minimizing the nuclear norm of a matrix, subject to additional structural and sparsity constraints on its elements. Examples of relevant applications include, among others, robust tracking in the presence of outliers, manifold embedding, event detection, in-painting and tracklet matching across occlusion. In principle, these problems can be reduced to a convex semi-definite optimization form and solved using interior point methods. However, the poor scaling properties of these methods limit the use of this approach to relatively small sized problems. The main result of this paper shows that structured nuclear norm minimization problems can be efficiently solved by using an iterative Augmented Lagrangian Type (ALM) method that only requires performing at each iteration a combination of matrix thresholding and matrix inversion steps. As we illustrate in the paper with several examples, the proposed algorithm results in a substantial reduction of computational time and memory requirements when compared against interior-point methods, opening up the possibility of solving realistic, large sized problems."
]
} |
cs0606124 | 1498463151 | In some applications of matching, the structural or hierarchical properties of the two graphs being aligned must be maintained. The hierarchical properties are induced by the direction of the edges in the two directed graphs. These structural relationships defined by the hierarchy in the graphs act as a constraint on the alignment. In this paper, we formalize the above problem as the weighted alignment between two directed acyclic graphs. We prove that this problem is NP-complete, show several upper bounds for approximating the solution, and finally introduce polynomial time algorithms for sub-classes of directed acyclic graphs. | In @cite_6 , graph matching is applied to conceptual system matching for translation. The work is very similar to ontology alignment, however, the authors formalize their problem in terms of any conceptual system rather than restricting the work specifically to an ontological formalization of a domain. They formalize conceptual systems as graphs, and introduce algorithms for matching both unweighted and weighted versions of these graphs. | {
"cite_N": [
"@cite_6"
],
"mid": [
"103919619"
],
"abstract": [
"Ontology matching is an important task to achieve interoperation between semantic web applications using different ontologies. Structural similarity plays a central role in ontology matching. However, the existing approaches rely heavily on lexical similarity, and they mix up lexical similarity with structural similarity. In this paper, we present a graph matching approach for ontologies, called GMO. It uses bipartite graphs to represent ontologies, and measures the structural similarity between graphs by a new measurement. Furthermore, GMO can take a set of matched pairs, which are typically previously found by other approaches, as external input in matching process. Our implementation and experimental results are given to demonstrate the effectiveness of the graph matching approach."
]
} |
quant-ph0605181 | 2119968904 | A celebrated important result due to (2002 Commun. Math. Phys. 227 605–22) states that providing additive approximations of the Jones polynomial at the kth root of unity, for constant k=5 and k≥7, is BQP-hard. Together with the algorithmic results of (2005) and (2002 Commun. Math. Phys. 227 587–603), this gives perhaps the most natural BQP-complete problem known today and motivates further study of the topic. In this paper, we focus on the universality proof; we extend the result of (2002) to ks that grow polynomially with the number of strands and crossings in the link, thus extending the BQP-hardness of Jones polynomial approximations to all values to which the AJL algorithm applies ( 2005), proving that for all those values, the problems are BQP-complete. As a side benefit, we derive a fairly elementary proof of the density result, without referring to advanced results from Lie algebra representation theory, making this important result accessible to a wider audience in the computer science research community. We make use of two general lemmas we prove, the bridge lemma and the decoupling lemma, which provide tools for establishing the density of subgroups in SU(n). Those tools seem to be of independent interest in more general contexts of proving the quantum universality. Our result also implies a completely classical statement, that the multiplicative approximations of the Jones polynomial, at exactly the same values, are #P-hard, via a recent result due to Kuperberg (2009 arXiv:0908.0512). Since the first publication of those results in their preliminary form (Aharonov and Arad 2006 arXiv:quant-ph 0605181), the methods we present here have been used in several other contexts (Aharonov and Arad 2007 arXiv:quant-ph 0702008; Peter and Stephen 2008 Quantum Inf. Comput. 8 681). The present paper is an improved and extended version of the results presented by Aharonov and Arad (2006) and includes discussions of the developments since then. | Since the first publication of the results presented here (in preliminary form) @cite_26 , they were already used in several contexts: Shor and Jordan @cite_15 built on the methods we develop here to prove universality of a variant of the Jones polynomial approximation problem, in the model of quantum computation with one clean qubit. In the extension of the AJL algorithm @cite_21 to the Potts model @cite_36 , Aharonov build on those methods to prove universality of approximating the Jones polynomial in many other values, and even in values which correspond to non-unitary representations. We hope that the method we present here will be useful is future other contexts as well. | {
"cite_N": [
"@cite_36",
"@cite_15",
"@cite_26",
"@cite_21"
],
"mid": [
"1632584182",
"1616071251",
"2962824725",
"2566350286"
],
"abstract": [
"In the first 36 pages of this paper, we provide polynomial quantum algorithms for additive approximations of the Tutte polynomial, at any point in the Tutte plane, for any planar graph. This includes as special cases the AJL algorithm for the Jones polynomial, the partition function of the Potts model for any weighted planer graph at any temperature, and many other combinatorial graph properties. In the second part of the paper we prove the quantum universality of many of the problems for which we provide an algorithm, thus providing a large set of new quantum-complete problems. Unfortunately, we do not know that this holds for the Potts model case; this is left as an important open problem. The main progress in this work is in our ability to handle non-unitary representations of the Temperley Lieb algebra, both when applying them in the algorithm, and, more importantly, in the proof of universality, when encoding quantum circuits using non-unitary operators. To this end we develop many new tools, that allow proving density and applying the Solovay Kitaev theorem in the case of non-unitary matrices. We hope that these tools will open up new possibilities of using non-unitary reps in other quantum computation contexts.",
"It is known that evaluating a certain approximation to the Jones polynomial for the plat closure of a braid is a BQP-complete problem. That is, this problem exactly captures the power of the quantum circuit model[13, 3, 1]. The one clean qubit model is a model of quantum computation in which all but one qubit starts in the maximally mixed state. One clean qubit computers are believed to be strictly weaker than standard quantum computers, but still capable of solving some classically intractable problems [21]. Here we show that evaluating a certain approximation to the Jones polynomial at a fifth root of unity for the trace closure of a braid is a complete problem for the one clean qubit complexity class. That is, a one clean qubit computer can approximate these Jones polynomials in time polynomial in both the number of strands and number of crossings, and the problem of simulating a one clean qubit computer is reducible to approximating the Jones polynomial of the trace closure of a braid.",
"Freedman, Kitaev, and Wang (2002), and later Aharonov, Jones, and Landau (2009), established a quantum algorithm to \"additively\" approximate the Jones polynomial V(L;t) at any principal root of unity t. The strength of this additive approximation depends exponentially on the bridge number of the link presentation. Freedman, Larsen, and Wang (2002) established that the approximation is universal for quantum computation at a non- lattice, principal root of unity. We show that any value-distinguishing approximation of the Jones polynomial at these non-lattice roots of unity is #P-hard. Given the power to decide whetherjV(L;t)j b for fixed constants 0 1, T(G; x; y) is #P-hard to approximate within a factor of c even for planar graphs G. Along the way, we clarify and generalize both Aaronson's theorem and the Solovay- Kitaev theorem.",
"In the near future, there will likely be special-purpose quantum computers with 40--50 high-quality qubits. This paper lays general theoretical foundations for how to use such devices to demonstrate \"quantum supremacy\": that is, a clear quantum speedup for some task, motivated by the goal of overturning the Extended Church-Turing Thesis as confidently as possible. First, we study the hardness of sampling the output distribution of a random quantum circuit, along the lines of a recent proposal by the Quantum AI group at Google. We show that there's a natural average-case hardness assumption, which has nothing to do with sampling, yet implies that no polynomial-time classical algorithm can pass a statistical test that the quantum sampling procedure's outputs do pass. Compared to previous work -- for example, on BosonSampling and IQP -- the central advantage is that we can now talk directly about the observed outputs, rather than about the distribution being sampled. Second, in an attempt to refute our hardness assumption, we give a new algorithm, inspired by Savitch's Theorem, for simulating a general quantum circuit with n qubits and depth d in polynomial space and dO(n) time. We then discuss why this and other known algorithms fail to refute our assumption. Third, resolving an open problem of Aaronson and Arkhipov, we show that any strong quantum supremacy theorem -- of the form \"if approximate quantum sampling is classically easy, then the polynomial hierarchy collapses\"-- must be non-relativizing. This sharply contrasts with the situation for exact sampling. Fourth, refuting a conjecture by Aaronson and Ambainis, we show that there is a sampling task, namely Fourier Sampling, with a 1 versus linear separation between its quantum and classical query complexities. Fifth, in search of a \"happy medium\" between black-box and non-black-box arguments, we study quantum supremacy relative to oracles in P poly. Previous work implies that, if one-way functions exist, then quantum supremacy is possible relative to such oracles. We show, conversely, that some computational assumption is needed: if SampBPP = SampBQP and NP ⊆ BPP, then quantum supremacy is impossible relative to oracles with small circuits."
]
} |
cs0605080 | 2950591296 | Recent proposals in multicast overlay construction have demonstrated the importance of exploiting underlying network topology. However, these topology-aware proposals often rely on incremental and periodic refinements to improve the system performance. These approaches are therefore neither scalable, as they induce high communication cost due to refinement overhead, nor efficient because long convergence time is necessary to obtain a stabilized structure. In this paper, we propose a highly scalable locating algorithm that gradually directs newcomers to their a set of their closest nodes without inducing high overhead. On the basis of this locating process, we build a robust and scalable topology-aware clustered hierarchical overlay scheme, called LCC. We conducted both simulations and PlanetLab experiments to evaluate the performance of LCC. Results show that the locating process entails modest resources in terms of time and bandwidth. Moreover, LCC demonstrates promising performance to support large scale multicast applications. | In the overlay-router approach such as OMNI @cite_5 and TOMA @cite_13 , reliable servers are installed across the network to act as application-level multicast routers. The content is transmitted from the source to a set of receivers on a multicast tree consisting of the overlay servers. This approach is designed to be scalable since the receivers get the content from the application-level routers, thus alleviating bandwidth demand at the source. However, it needs dedicated infrastructure deployment and costly servers. | {
"cite_N": [
"@cite_5",
"@cite_13"
],
"mid": [
"1568206987",
"2129235628"
],
"abstract": [
"In this paper, we propose a Two-tier Overlay Multicast Architecture (TOMA) to provide scalable and efficient multicast support for various group communication applications. In TOMA, Multicast Service Overlay Network (MSON) is advocated as the backbone service domain, while end users in the access domains form a number of small clusters, in which an application-layer multicast protocol is used for the communication between the clustered end users. TOMA is able to provide efficient resource utilization with less control overhead, especially for large-scale applications. It also alleviates the state scalability problem and simplifies multicast tree construction and maintenance when there are large numbers of groups in the networks. Simulation studies are conducted and the results demonstrate the promising performance of TOMA.",
"Structured peer-to-peer overlay networks such as CAN, Chord, Pastry, and Tapestry can be used to implement Internet-scale application-level multicast. There are two general approaches to accomplishing this: tree building and flooding. This paper evaluates these two approaches using two different types of structured overlay: 1) overlays which use a form of generalized hypercube routing, e.g., Chord, Pastry and Tapestry, and 2) overlays which use a numerical distance metric to route through a Cartesian hyperspace, e.g., CAN. Pastry and CAN are chosen as the representatives of each type of overlay. To the best of our knowledge, this paper reports the first head-to-head comparison of CAN-style versus Pastry-style overlay networks, using multicast communication workloads running on an identical simulation infrastructure. The two approaches to multicast are independent of overlay network choice, and we provide a comparison of flooding versus tree-based multicast on both overlays. Results show that the tree-based approach consistently outperforms the flooding approach. Finally, for tree-based multicast, we show that Pastry provides better performance than CAN."
]
} |
cs0605080 | 2950591296 | Recent proposals in multicast overlay construction have demonstrated the importance of exploiting underlying network topology. However, these topology-aware proposals often rely on incremental and periodic refinements to improve the system performance. These approaches are therefore neither scalable, as they induce high communication cost due to refinement overhead, nor efficient because long convergence time is necessary to obtain a stabilized structure. In this paper, we propose a highly scalable locating algorithm that gradually directs newcomers to their a set of their closest nodes without inducing high overhead. On the basis of this locating process, we build a robust and scalable topology-aware clustered hierarchical overlay scheme, called LCC. We conducted both simulations and PlanetLab experiments to evaluate the performance of LCC. Results show that the locating process entails modest resources in terms of time and bandwidth. Moreover, LCC demonstrates promising performance to support large scale multicast applications. | The P2P approach requires no extra resources. Several proposals have been designed to handle small groups. Narada @cite_0 , MeshTree @cite_7 , and Hostcast @cite_10 are examples of distributed mesh-first'' algorithms where nodes arrange themselves into well-connected mesh on top of which a routing protocol is run to derive a delivery tree. These protocols rely on incremental improvements over time by adding and removing mesh links based on an utility function. Although these protocols offer robustness properties (thanks to the mesh structure), they do not scale to large population, due to excessive overhead resulting from the improvement process. The objective of LCC is to locate the newcomer prior to joining the overlay and hence process only a few number of refinements during the multicast session. | {
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_7"
],
"mid": [
"1926875216",
"2129807746",
"2135593270"
],
"abstract": [
"We study decentralised low delay degree-constrained overlay multicast tree construction for single source real-time applications. This optimisation problem is NP-hard even if computed centrally. We identify two problems in traditional distributed solutions, namely the greedy problem and delay-cost trade-off. By offering solutions to these problems, we propose a new self-organising distributed tree building protocol called MeshTree. The main idea is to embed the delivery tree in a degree-bounded mesh containing many low cost links. Our simulation results show that MeshTree is comparable to the centralised Compact Tree algorithm, and always outperforms existing distributed solutions in delay optimisation. In addition, it always yields trees with lower cost and traffic redundancy.",
"We describe a new scalable application-layer multicast protocol, specifically designed for low-bandwidth, data streaming applications with large receiver sets. Our scheme is based upon a hierarchical clustering of the application-layer multicast peers and can support a number of different data delivery trees with desirable properties.We present extensive simulations of both our protocol and the Narada application-layer multicast protocol over Internet-like topologies. Our results show that for groups of size 32 or more, our protocol has lower link stress (by about 25 ), improved or similar end-to-end latencies and similar failure recovery properties. More importantly, it is able to achieve these results by using orders of magnitude lower control traffic.Finally, we present results from our wide-area testbed in which we experimented with 32-100 member groups distributed over 8 different sites. In our experiments, average group members established and maintained low-latency paths and incurred a maximum packet loss rate of less than 1 as members randomly joined and left the multicast group. The average control overhead during our experiments was less than 1 Kbps for groups of size 100.",
"Given that the Internet does not widely support Internet protocol multicast while content-distribution-network technologies are costly, the concept of peer-to-peer could be a promising start for enabling large-scale streaming systems. In our so-called Zigzag approach, we propose a method for clustering peers into a hierarchy called the administrative organization for easy management, and a method for building the multicast tree atop this hierarchy for efficient content transmission. In Zigzag, the multicast tree has a height logarithmic with the number of clients, and a node degree bounded by a constant. This helps reduce the number of processing hops on the delivery path to a client while avoiding network bottlenecks. Consequently, the end-to-end delay is kept small. Although one could build a tree satisfying such properties easily, an efficient control protocol between the nodes must be in place to maintain the tree under the effects of network dynamics. Zigzag handles such situations gracefully, requiring a constant amortized worst-case control overhead. Especially, failure recovery is done regionally with impact on, at most, a constant number of existing clients and with mostly no burden on the server."
]
} |
cs0605080 | 2950591296 | Recent proposals in multicast overlay construction have demonstrated the importance of exploiting underlying network topology. However, these topology-aware proposals often rely on incremental and periodic refinements to improve the system performance. These approaches are therefore neither scalable, as they induce high communication cost due to refinement overhead, nor efficient because long convergence time is necessary to obtain a stabilized structure. In this paper, we propose a highly scalable locating algorithm that gradually directs newcomers to a set of their closest nodes without inducing high overhead. On the basis of this locating process, we build a robust and scalable topology-aware clustered hierarchical overlay scheme, called LCC. We conducted both simulations and PlanetLab experiments to evaluate the performance of LCC. Results show that the locating process entails modest resources in terms of time and bandwidth. Moreover, LCC demonstrates promising performance to support large scale multicast applications. | Other "tree-first" protocols, such as ZigZag @cite_17 and NICE @cite_8 , are topology-aware clustering-based protocols which are designed to support wide-area-sized multicast groups for low-bandwidth applications. However, they do not consider individual node fan-out capability. Rather, they bound the overlay fan-out using a (global) cluster-size parameter. In particular, since both protocols only consider latency for cluster leader selection, they may experience problems if the cluster leader has insufficient fan-out. Other proposals exploit the AS-level @cite_1 or the router-level @cite_2 underlying network topology information to build efficient overlay networks. However, these approaches assume some assistance from the IP layer (routers sending ICMP messages, or BGP information access), which may be problematic. LCC does not require any extra assistance from entities that do not belong to the overlay. | {
"cite_N": [
"@cite_2",
"@cite_8",
"@cite_1",
"@cite_17"
],
"mid": [
"2135593270",
"2129235628",
"2014455962",
"2127380142"
],
"abstract": [
"Given that the Internet does not widely support Internet protocol multicast while content-distribution-network technologies are costly, the concept of peer-to-peer could be a promising start for enabling large-scale streaming systems. In our so-called Zigzag approach, we propose a method for clustering peers into a hierarchy called the administrative organization for easy management, and a method for building the multicast tree atop this hierarchy for efficient content transmission. In Zigzag, the multicast tree has a height logarithmic with the number of clients, and a node degree bounded by a constant. This helps reduce the number of processing hops on the delivery path to a client while avoiding network bottlenecks. Consequently, the end-to-end delay is kept small. Although one could build a tree satisfying such properties easily, an efficient control protocol between the nodes must be in place to maintain the tree under the effects of network dynamics. Zigzag handles such situations gracefully, requiring a constant amortized worst-case control overhead. Especially, failure recovery is done regionally with impact on, at most, a constant number of existing clients and with mostly no burden on the server.",
"Structured peer-to-peer overlay networks such as CAN, Chord, Pastry, and Tapestry can be used to implement Internet-scale application-level multicast. There are two general approaches to accomplishing this: tree building and flooding. This paper evaluates these two approaches using two different types of structured overlay: 1) overlays which use a form of generalized hypercube routing, e.g., Chord, Pastry and Tapestry, and 2) overlays which use a numerical distance metric to route through a Cartesian hyperspace, e.g., CAN. Pastry and CAN are chosen as the representatives of each type of overlay. To the best of our knowledge, this paper reports the first head-to-head comparison of CAN-style versus Pastry-style overlay networks, using multicast communication workloads running on an identical simulation infrastructure. The two approaches to multicast are independent of overlay network choice, and we provide a comparison of flooding versus tree-based multicast on both overlays. Results show that the tree-based approach consistently outperforms the flooding approach. Finally, for tree-based multicast, we show that Pastry provides better performance than CAN.",
"As the size of High Performance Computing clusters grows, so does the probability of interconnect hot spots that degrade the latency and effective bandwidth the network provides. This paper presents a solution to this scalability problem for real life constant bisectional-bandwidth fat-tree topologies. It is shown that maximal bandwidth and cut-through latency can be achieved for MPI global collective traffic. To form such a congestion-free configuration, MPI programs should utilize collective communication, MPI-node-order should be topology aware, and the packet routing should match the MPI communication patterns. First, we show that MPI collectives can be classified into unidirectional and bidirectional shifts. Using this property, we propose a scheme for congestion-free routing of the global collectives in fully and partially populated fat trees running a single job. The no-contention result is then obtained for multiple jobs running on the same fat-tree by applying some job size and placement restrictions. Simulation results of the proposed routing, MPI-node-order and communication patterns show no contention which provides a 40 throughput improvement over previously published results for all-to-all collectives.",
"Aggregate traffic loads and topology in multihop wireless networks may vary slowly, permitting MAC protocols to \"learn\" how to spatially coordinate and adapt contention patterns. Such an approach could reduce contention, leading to better throughput. To that end, we propose a family of MAC scheduling algorithms and demonstrate general conditions, which, if satisfied, ensure lattice rate optimality (i.e., achieving any rate-point on a uniform discrete lattice within the throughput region). This general framework enables the design of MAC protocols that meet various objectives and conditions. In this paper, as instances of such a lattice-rate-optimal family, we propose distributed, synchronous contention-based scheduling algorithms that: 1) are lattice-rate-optimal under both the signal-to-interference-plus-noise ratio (SINR)-based and graph-based interference models; 2) do not require node location information; and 3) only require three-stage RTS CTS message exchanges for contention signaling. Thus, the protocols are amenable to simple implementation and may be robust to network dynamics such as topology and load changes. Finally, we propose a heuristic, which also belongs to the proposed lattice-rate-optimal family of protocols and achieves faster convergence, leading to a better transient throughput."
]
} |
cs0605080 | 2950591296 | Recent proposals in multicast overlay construction have demonstrated the importance of exploiting underlying network topology. However, these topology-aware proposals often rely on incremental and periodic refinements to improve the system performance. These approaches are therefore neither scalable, as they induce high communication cost due to refinement overhead, nor efficient because long convergence time is necessary to obtain a stabilized structure. In this paper, we propose a highly scalable locating algorithm that gradually directs newcomers to a set of their closest nodes without inducing high overhead. On the basis of this locating process, we build a robust and scalable topology-aware clustered hierarchical overlay scheme, called LCC. We conducted both simulations and PlanetLab experiments to evaluate the performance of LCC. Results show that the locating process entails modest resources in terms of time and bandwidth. Moreover, LCC demonstrates promising performance to support large scale multicast applications. | Landmark clustering is a general concept for constructing topology-aware overlays. @cite_9 use such an approach to build a multicast topology-aware CAN overlay network. Prior to joining the overlay network, a newcomer has to measure its distance to each landmark. The node then orders the landmarks according to its distance measurements. The main intuition is that nodes with the same landmark ordering are also quite likely to be close to each other topologically. An immediate issue with such a landmark-based approach is that it can be rather coarse-grained depending on the number of landmarks used and their distribution. Furthermore, requiring a fixed set of landmarks known by all participating nodes renders this approach unsuitable for dynamic networks. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2560863959"
],
"abstract": [
"In this paper, we present an online landmark selection method for distributed long-term visual localization systems in bandwidth-constrained environments. Sharing a common map for online localization provides a fleet of autonomous vehicles with the possibility to maintain and access a consistent map source, and therefore reduce redundancy while increasing efficiency. However, connectivity over a mobile network imposes strict bandwidth constraints and thus the need to minimize the amount of exchanged data. The wide range of varying appearance conditions encountered during long-term visual localization offers the potential to reduce data usage by extracting only those visual cues which are relevant at the given time. Motivated by this, we propose an unsupervised method of adaptively selecting landmarks according to how likely these landmarks are to be observable under the prevailing appearance condition. The ranking function this selection is based upon exploits landmark co-observability statistics collected in past traversals through the mapped area. Evaluation is performed over different outdoor environments, large time-scales and varying appearance conditions, including the extreme transition from day-time to night-time, demonstrating that with our appearance-dependent selection method, we can significantly reduce the amount of landmarks used for localization while maintaining or even improving the localization performance."
]
} |
cs0605097 | 1681051970 | We introduce knowledge flow analysis, a simple and flexible formalism for checking cryptographic protocols. Knowledge flows provide a uniform language for expressing the actions of principals, assumptions about intruders, and the properties of cryptographic primitives. Our approach enables a generalized two-phase analysis: we extend the two-phase theory by identifying the necessary and sufficient properties of a broad class of cryptographic primitives for which the theory holds. We also contribute a library of standard primitives and show that they satisfy our criteria. | The first formalisms designed for reasoning about cryptographic protocols are belief logics such as BAN logic @cite_27 , used by the Convince tool @cite_14 with the HOL theorem prover @cite_40 , and its generalizations (GNY @cite_32 , AT @cite_8 , and SVO logic @cite_37 , which the C3PO tool @cite_7 employs with the Isabelle theorem prover @cite_5 ). Belief logics are difficult to use since the logical form of a protocol does not correspond to the protocol itself in an obvious way. Almost indistinguishable formulations of the same problem lead to different results. It is also hard to know if a formulation is overconstrained or if any important assumptions are missing. BAN logic and its derivatives cannot deal with security flaws resulting from interleaving of protocol steps @cite_31 and cannot express any properties of protocols other than authentication @cite_23 . To overcome these limitations, the knowledge flow formalism has, like other approaches @cite_18 @cite_28 @cite_21 @cite_11 @cite_1 , a concrete operational model of protocol execution. Our model also includes a description of how the honest participants in the protocol behave and a description of how an adversary can interfere with the execution of the protocol. | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_40",
"@cite_27",
"@cite_23",
"@cite_5",
"@cite_31",
"@cite_11"
],
"mid": [
"2169216703",
"2097842452",
"1508967933",
"2046270839",
"2102922499",
"2130123502",
"2106857693",
"2114497629",
"1992375708",
"2404159111",
"2011329056",
"2594004772",
"2023675273",
"2108533001",
"2101770573"
],
"abstract": [
"We present a logic for analyzing cryptographic protocols. This logic encompasses a unification of four of its predecessors in the BAN family of logics, namely those given by Li (1990); M. Abadi, M. Tuttle (1991); P.C. van Oorschot (1993); and BAN itself (M. , 1989). We also present a model-theoretic semantics with respect to which the logic is sound. The logic presented captures all of the desirable features of its predecessors and more; nonetheless, it accomplishes this with no more axioms or rules than the simplest of its predecessors. >",
"We present an improved logic for analysing authentication properties of cryptographic protocols, based on the SVO logic of Syverson and van Oorschot (1994). Such logics are useful in electronic commerce, among other areas. We have constructed this logic in order to simplify automation, and we describe an implementation using the Isabelle theorem-proving system, and a GUI tool based on this implementation. The tool is typically operated by opening a list of propositions intended to be true, and clicking one button. Since the rules form a clean framework, the logic is easily extensible. We also present in detail a proof of soundness, using Kripke possible-worlds semantics.",
"Since the 1980s, two approaches have been developed for analyzing security protocols. One of the approaches relies on a computational model that considers issues of complexity and probability. This approach captures a strong notion of security, guaranteed against all probabilistic polynomial-time attacks. The other approach relies on a symbolic model of protocol executions in which cryptographic primitives are treated as black boxes. Since the seminal work of Dolev and Yao, it has been realized that this latter approach enables significantly simpler and often automated proofs. However, the guarantees that it offers have been quite unclear. In this paper, we show that it is possible to obtain the best of both worlds: fully automated proofs and strong, clear security guarantees. Specifically, for the case of protocols that use signatures and asymmetric encryption, we establish that symbolic integrity and secrecy proofs are sound with respect to the computational model. The main new challenges concern secrecy properties for which we obtain the first soundness result for the case of active adversaries. Our proofs are carried out using Casrul, a fully automated tool.",
"We are interested in the design of automated procedures for analyzing the (in)security of cryptographic protocols in the Dolev-Yao model for a bounded number of sessions when we take into account some algebraic properties satisfied by the operators involved in the protocol. This leads to a more realistic model in comparison to what we get under the perfect cryptography assumption, but it implies that protocol analysis deals with terms modulo some equational theory instead of terms in a free algebra. The main goal of this paper is to setup a general approach that works for a whole class of monoidal theories which contains many of the specific cases that have been considered so far in an ad-hoc way (e.g. exclusive or, Abelian groups, exclusive or in combination with the homomorphism axiom). We follow a classical schema for cryptographic protocol analysis which proves first a locality result and then reduces the insecurity problem to a symbolic constraint solving problem. This approach strongly relies on the correspondence between a monoidal theory E and a semiring S\"E which we use to deal with the symbolic constraints. We show that the well-defined symbolic constraints that are generated by reasonable protocols can be solved provided that unification in the monoidal theory satisfies some additional properties. The resolution process boils down to solving particular quadratic Diophantine equations that are reduced to linear Diophantine equations, thanks to linear algebra results and the well-definedness of the problem. Examples of theories that do not satisfy our additional properties appear to be undecidable, which suggests that our characterization is reasonably tight.",
"A mechanism is presented for reasoning about belief as a systematic way to understand the working of cryptographic protocols. The mechanism captures more features of such protocols than that given by M. (1989) to which the proposals are a substantial extension. The notion of possession incorporated in the approach assumes that principles can include in messages data they do not believe in, but merely possess. This also enables conclusions such as 'Q possesses the shared key', as in an example to be derived. The approach places a strong emphasis on the separation between the content and the meaning of messages. This can increase consistency in the analysis and, more importantly, introduce the ability to reason at more than one level. The final position in a given run will depend on the level of mutual trust of the specified principles participating in that run. >",
"We present decidability results for the verification of cryptographic protocols in the presence of equational theories corresponding to xor and Abelian groups. Since the perfect cryptography assumption is unrealistic for cryptographic primitives with visible algebraic properties such as xor, we extend the conventional Dolev-Yao model by permitting the intruder to exploit these properties. We show that the ground reachability problem in NP for the extended intruder theories in the cases of xor and Abelian groups. This result follows from a normal proof theorem. Then, we show how to lift this result in the xor case: we consider a symbolic constraint system expressing the reachability (e.g., secrecy) problem for a finite number of sessions. We prove that such a constraint system is decidable, relying in particular on an extension of combination algorithms for unification procedures. As a corollary, this enables automatic symbolic verification of cryptographic protocols employing xor for a fixed number of sessions.",
"The pioneering and well-known work of M. Burrows, M. Abadi and R. Needham (1989), (the BAN logic) which dominates the area of security protocol analysis is shown to take an approach which is not fully formal and which consequently permits approval of dangerous protocols. Measures to make the BAN logic formal are then proposed. The formalisation is found to be desirable not only for its potential in providing rigorous analysis of security protocols, but also for its readiness for supporting a computer-aided fashion of analysis. >",
"We formalize the Dolev-Yao model of security protocols, using a notation based on multiset rewriting with existentials. The goals are to provide a simple formal notation for describing security protocols, to formalize the assumptions of the Dolev-Yao model using this notation, and to analyze the complexity of the secrecy problem under various restrictions. We prove that, even for the case where we restrict the size of messages and the depth of message encryption, the secrecy problem is undecidable for the case of an unrestricted number of protocol roles and an unbounded number of new nonces. We also identify several decidable classes, including a DEXP-complete class when the number of nonces is restricted, and an NP-complete class when both the number of nonces and the number of roles is restricted. We point out a remaining open complexity problem, and discuss the implications these results have on the general topic of protocol analysis.",
"Cryptographic protocols are small programs which involve a high level of concurrency and which are difficult to analyze by hand. The most successful methods to verify such protocols are based on rewriting techniques and automated deduction in order to implement or mimic the process calculus describing the execution of a protocol. We are interested in the intruder deduction problem, that is vulnerability to passive attacks in presence of equational theories which model the protocol specification and properties of the cryptographic operators. In the present paper, we consider the case where the encryption distributes over the operator of an Abelian group or over an exclusive-or operator. We prove decidability of the intruder deduction problem in both cases. We obtain a PTIME decision procedure in a restricted case, the so-called binary case. These decision procedures are based on a careful analysis of the proof system modeling the deductive power of the intruder, taking into account the algebraic properties of the equational theories under consideration. The analysis of the deduction rules interacting with the equational theory relies on the manipulation of Z-modules in the general case, and on results from prefix rewriting in the binary case.",
"In the past few years a lot of attention has been paid to the use of special logics to analyse cryptographic protocols, foremost among these being the logic of Burrows, Abadi and Needham (the BAN logic). These logics have been successful in finding weaknesses in various examples. In this paper a limitation of the BAN logic is illustrated with two examples. These show that it is easy for the BAN logic to approve protocols that are in practice unsound.",
"Traditional security protocols are mainly concerned with authentication and key establishment and rely on predistributed keys and properties of cryptographic operators. In contrast, new application areas are emerging that establish and rely on properties of the physical world. Examples include protocols for secure localization, distance bounding, and secure time synchronization. We present a formal model for modeling and reasoning about such physical security protocols. Our model extends standard, inductive, trace-based, symbolic approaches with a formalization of physical properties of the environment, namely communication, location, and time. In particular, communication is subject to physical constraints, for example, message transmission takes time determined by the communication medium used and the distance between nodes. All agents, including intruders, are subject to these constraints and this results in a distributed intruder with restricted, but more realistic, communication capabilities than those of the standard Dolev-Yao intruder. We have formalized our model in Isabelle HOL and have used it to verify protocols for authenticated ranging, distance bounding, broadcast authentication based on delayed key disclosure, and time synchronization.",
"We present a formal model for modeling and reasoning about security protocols. Our model extends standard, inductive, trace-based, symbolic approaches with a formalization of physical properties of the environment, namely communication, location, and time. In particular, communication is subject to physical constraints, for example, message transmission takes time determined by the communication medium used and the distance traveled. All agents, including intruders, are subject to these constraints and this results in a distributed intruder with restricted, but more realistic, communication capabilities than the standard Dolev-Yao intruder. We have formalized our model in Isabelle HOL and used it to verify protocols for authenticated ranging, distance bounding, and broadcast authentication based on delayed key disclosure.",
"We construct a 1-round delegation scheme (i.e., argument system) for every language computable in time t = t(n), where the running time of the prover is poly(t) and the running time of the verifier is n · polylog(t). In particular, for every language in P we obtain a delegation scheme with almost linear time verification. Our construction relies on the existence of a computational sub-exponentially secure private information retrieval (PIR) scheme. The proof exploits a curious connection between the problem of computation delegation and the model of multi-prover interactive proofs that are sound against no-signaling (cheating) strategies, a model that was studied in the context of multi-prover interactive proofs with provers that share quantum entanglement, and is motivated by the physical principle that information cannot travel faster than light. For any language computable in time t = t(n), we construct a multi-prover interactive proof (MIP) that is sound against no-signaling strategies, where the running time of the provers is poly(t), the number of provers is polylog(t), and the running time of the verifier is n · polylog(t). In particular, this shows that the class of languages that have polynomial-time MIPs that are sound against no-signaling strategies, is exactly EXP. Previously, this class was only known to contain PSPACE. To convert our MIP into a 1-round delegation scheme, we use the method suggested by (ICALP, 2000). This method relies on the existence of a sub-exponentially secure PIR scheme, and was proved secure by (STOC, 2013) assuming the underlying MIP is secure against no-signaling provers.",
"We provide a method for deciding the insecurity of cryptographic protocols in presence of the standard Dolev-Yao intruder (with a finite number of sessions) extended with so-called oracle rules, i.e., deduction rules that satisfy certain conditions. As an instance of this general framework, we ascertain that protocol insecurity is in NP for an intruder that can exploit the properties of the XOR operator. This operator is frequently used in cryptographic protocols but cannot be handled in most protocol models. An immediate consequence of our proof is that checking whether a message can be derived by an intruder (using XOR) is in P. We also apply our framework to an intruder that exploits properties of certain encryption modes such as cipher block chaining (CBC).",
"We present a mathematical construct which provides a cryptographic protocol to verifiably shuffle a sequence of k modular integers, and discuss its application to secure, universally verifiable, multi-authority election schemes. The output of the shuffle operation is another sequence of k modular integers, each of which is the same secret power of a corresponding input element, but the order of elements in the output is kept secret. Though it is a trivial matter for the \"shuffler\" (who chooses the permutation of the elements to be applied) to compute the output from the input, the construction is important because it provides a linear size proof of correctness for the output sequence (i.e. a proof that it is of the form claimed) that can be checked by an arbitrary verifiers. The complexity of the protocol improves on that of Furukawa-Sako[16] both measured by number of exponentiations and by overall size.The protocol is shown to be honest-verifier zeroknowledge in a special case, and is computational zeroknowledge in general. On the way to the final result, we also construct a generalization of the well known Chaum-Pedersen protocol for knowledge of discrete logarithm equality [10], [7]. In fact, the generalization specializes exactly to the Chaum-Pedersen protocol in the case k = 2. This result may be of interest on its own.An application to electronic voting is given that matches the features of the best current protocols with significant efficiency improvements. An alternative application to electronic voting is also given that introduces an entirely new paradigm for achieving Universally Verifiable elections."
]
} |
cs0605097 | 1681051970 | We introduce knowledge flow analysis, a simple and flexible formalism for checking cryptographic protocols. Knowledge flows provide a uniform language for expressing the actions of principals, assumptions about intruders, and the properties of cryptographic primitives. Our approach enables a generalized two-phase analysis: we extend the two-phase theory by identifying the necessary and sufficient properties of a broad class of cryptographic primitives for which the theory holds. We also contribute a library of standard primitives and show that they satisfy our criteria. | Specialized model checkers such as Casper @cite_18 , Mur @math @cite_28 , Brutus @cite_21 , TAPS @cite_24 , and ProVerif @cite_15 have been successfully used to analyze security protocols. These tools are based on state space exploration, which leads to exponential complexity. Athena @cite_11 is based on a modification of the strand space model @cite_16 . Even though it reduces the state space explosion problem, it remains exponential. Multiset rewriting @cite_19 in combination with tree automata is used in Timbuk @cite_12 . The relation between multiset rewriting and strand spaces is analyzed in @cite_41 . The relation between multiset rewriting and process algebras @cite_36 @cite_0 is analyzed in @cite_3 . | {
"cite_N": [
"@cite_18",
"@cite_28",
"@cite_41",
"@cite_36",
"@cite_21",
"@cite_3",
"@cite_24",
"@cite_19",
"@cite_0",
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"1569881051",
"2146099890",
"2127574686",
"2508919161",
"2118034427",
"2040060046",
"1796295358",
"2036722182",
"1971332773",
"2170546552",
"2135625884",
"2044334960",
"2884937557"
],
"abstract": [
"We propose a new efficient automatic verification technique, Athena, for security protocol analysis. It uses a new efficient representation - our extension to the Strand Space Model, and utilizes techniques from both model checking and theorem proving approaches. Athena is fully automatic and is able to prove the correctness of many security protocols with arbitrary number of concurrent runs. The run time for a typical protocol from the literature, like the Needham-Schroeder protocol, is often a fraction of a second. Athena exploits several different techniques that enable it to analyze infinite sets of protocol runs and achieve such efficiency. Our extended Strand Space Model is a natural and efficient representation for the problem domain. The security properties are specified in a simple logic which permits both efficient proof search algorithms and has enough expressive power to specify interesting properties. The automatic proof search procedure borrows some efficient techniques from both model checking and theorem proving. We believe that it is the right combination of the new compact representation and all the techniques that actually makes Athena successful in fast and automatic verification of security protocols.",
"In this work we study interactive proofs for tractable languages. The (honest) prover should be efficient and run in polynomial time, or in other words a \"muggle\". The verifier should be super-efficient and run in nearly-linear time. These proof systems can be used for delegating computation: a server can run a computation for a client and interactively prove the correctness of the result. The client can verify the result's correctness in nearly-linear time (instead of running the entire computation itself). Previously, related questions were considered in the Holographic Proof setting by Babai, Fortnow, Levin and Szegedy, in the argument setting under computational assumptions by Kilian, and in the random oracle model by Micali. Our focus, however, is on the original interactive proof model where no assumptions are made on the computational power or adaptiveness of dishonest provers. Our main technical theorem gives a public coin interactive proof for any language computable by a log-space uniform boolean circuit with depth d and input length n. The verifier runs in time (n+d) • polylog(n) and space O(log(n)), the communication complexity is d • polylog(n), and the prover runs in time poly(n). In particular, for languages computable by log-space uniform NC (circuits of polylog(n) depth), the prover is efficient, the verifier runs in time n • polylog(n) and space O(log(n)), and the communication complexity is polylog(n). Using this theorem we make progress on several questions: We show how to construct short (polylog size) computationally sound non-interactive certificates of correctness for any log-space uniform NC computation, in the public-key model. The certificates can be verified in quasi-linear time and are for a designated verifier: each certificate is tailored to the verifier's public key. This result uses a recent transformation of Kalai and Raz from public-coin interactive proofs to one-round arguments. The soundness of the certificates is based on the existence of a PIR scheme with polylog communication. Interactive proofs with public-coin, log-space, poly-time verifiers for all of P. This settles an open question regarding the expressive power of proof systems with such verifiers. Zero-knowledge interactive proofs with communication complexity that is quasi-linear in the witness, length for any NP language verifiable in NC, based on the existence of one-way functions. Probabilistically checkable arguments (a model due to Kalai and Raz) of size polynomial in the witness length (rather than the instance length) for any NP language verifiable in NC, under computational assumptions.",
"The state explosion problem remains a major hurdle in applying symbolic model checking to large hardware designs. State space abstraction, having been essential for verifying designs of industrial complexity, is typically a manual process, requiring considerable creativity and insight.In this article, we present an automatic iterative abstraction-refinement methodology that extends symbolic model checking. In our method, the initial abstract model is generated by an automatic analysis of the control structures in the program to be verified. Abstract models may admit erroneous (or \"spurious\") counterexamples. We devise new symbolic techniques that analyze such counterexamples and refine the abstract model correspondingly. We describe aSMV, a prototype implementation of our methodology in NuSMV. Practical experiments including a large Fujitsu IP core design with about 500 latches and 10000 lines of SMV code confirm the effectiveness of our approach.",
"[See the paper for the full abstract.] We show tight upper and lower bounds for time-space trade-offs for the @math -Approximate Near Neighbor Search problem. For the @math -dimensional Euclidean space and @math -point datasets, we develop a data structure with space @math and query time @math for every @math such that: This is the first data structure that achieves sublinear query time and near-linear space for every approximation factor @math , improving upon [Kapralov, PODS 2015]. The data structure is a culmination of a long line of work on the problem for all space regimes; it builds on Spherical Locality-Sensitive Filtering [Becker, Ducas, Gama, Laarhoven, SODA 2016] and data-dependent hashing [Andoni, Indyk, Nguyen, Razenshteyn, SODA 2014] [Andoni, Razenshteyn, STOC 2015]. Our matching lower bounds are of two types: conditional and unconditional. First, we prove tightness of the whole above trade-off in a restricted model of computation, which captures all known hashing-based approaches. We then show unconditional cell-probe lower bounds for one and two probes that match the above trade-off for @math , improving upon the best known lower bounds from [Panigrahy, Talwar, Wieder, FOCS 2010]. In particular, this is the first space lower bound (for any static data structure) for two probes which is not polynomially smaller than the one-probe bound. To show the result for two probes, we establish and exploit a connection to locally-decodable codes.",
"Formal analysis of security protocols is largely based on a set of assumptions commonly referred to as the Dolev-Yao model. Two formalisms that state the basic assumptions of this model are related here: strand spaces and multiset rewriting with existential quantification. Strand spaces provide a simple and economical approach to analysis of completed protocol runs by emphasizing causal interactions among protocol participants. The multiset rewriting formalism provides a very precise way of specifying finite-length protocols with unboundedly many instances of each protocol role, such as client, server, initiator, or responder. A number of modifications to each system are required to produce a meaningful comparison. In particular, we extend the strand formalism with a way of incrementally growing bundles in order to emulate an execution of a protocol with parametric strands. The correspondence between the modified formalisms directly relates the intruder theory from the multiset rewriting formalism to the penetrator strands. The relationship we illustrate here between multiset rewriting specifications and strand spaces thus suggests refinements to both frameworks, and deepens our understanding of the Dolev-Yao model.",
"This paper describes a translator called Java PathFinder (Jpf), which translates from Java to Promela, the modeling language of the Spin model checker. Jpf translates a given Java program into a Promela model, which then can be model checked using Spin. The Java program may contain assertions, which are translated into similar assertions in the Promela model. The Spin model checker will then look for deadlocks and violations of any stated assertions. Jpf generates a Promela model with the same state space characteristics as the Java program. Hence, the Java program must have a finite and tractable state space. This work should be seen in a broader attempt to make formal methods applicable within NASA’s areas such as space, aviation, and robotics. The work is a continuation of an effort to formally analyze, using Spin, a multi-threaded operating system for the Deep-Space 1 space craft, and of previous work in applying existing model checkers and theorem provers to real applications.",
"We are interested in finding algorithms which will allow an agent roaming between different electronic auction institutions to automatically verify the game-theoretic properties of a previously unseen auction protocol. A property may be that the protocol is robust to collusion or deception or that a given strategy is optimal. Model checking provides an automatic way of carrying out such proofs. However it may suffer from state space explosion for large models. To improve the performance of model checking, abstractions were used along with the Spin model checker. We considered two case studies: the Vickrey auction and a tractable combinatorial auction. Numerical results showed the limits of relying solely on Spin . To reduce the state space required by Spin , two property-preserving abstraction methods were applied: the first is the classical program slicing technique, which removes irrelevant variables with respect to the property; the second replaces large data, possibly infinite values of variables with smaller abstract values. This enabled us to model check the strategy-proofness property of the Vickrey auction for unbounded bid range and number of agents.",
"We present a nondeterministic model of computation based on reversing edge directions in weighted directed graphs with minimum in-flow constraints on vertices. Deciding whether this simple graph model can be manipulated in order to reverse the direction of a particular edge is shown to be PSPACE-complete by a reduction from Quantified Boolean Formulas. We prove this result in a variety of special cases including planar graphs and highly restricted vertex configurations, some of which correspond to a kind of passive constraint logic. Our framework is inspired by (and indeed a generalization of) the \"Generalized Rush Hour Logic\" developed by Flake and Baum [Theoret. Comput. Sci. 270(1-2) (2002) 8951.We illustrate the importance of our model of computation by giving simple reductions to show that several motion-planning problems are PSPACE-hard. Our main result along these lines is that classic unrestricted sliding-block puzzles are PSPACE-hard, even if the pieces are restricted to be all dominoes (1 × 2 blocks) and the goal is simply to move a particular piece. No prior complexity results were known about these puzzles. This result can be seen as a strengthening of the existing result that the restricted Rush HourTM puzzles are PSPACE-complete [Theoret. Comput. Sci. 270(1-2) (2002) 895], of which we also give a simpler proof. We also greatly strengthen the conditions for the PSPACE-hardness of the Warehouseman's Problem [Int. J. Robot. Res. 3(4) (1984) 76], a classic motion-planning problem. Finally, we strengthen the existing result that the pushing-blocks puzzle Sokoban is PSPACE-complete [In: Proc. Internat. Conf. on Fun with Algorithms, Elba, Italy, June 1998, pp. 65-76.], by showing that it is PSPACE-complete even if no barriers are allowed.",
"We address the verification problem of finite-state concurrent programs running under weak memory models. These models capture the reordering of program (read and write) operations done by modern multi-processor architectures for performance. The verification problem we study is crucial for the correctness of concurrency libraries and other performance-critical system services employing lock-free synchronization, as well as for the correctness of compiler backends that generate code targeted to run on such architectures. We consider in this paper combinations of three well-known program order relaxations. We consider first the \"write to read\" relaxation, which corresponds to the TSO (Total Store Ordering) model. This relaxation is used in most hardware architectures available today. Then, we consider models obtained by adding either (1) the \"write to write\" relaxation, leading to a model which is essentially PSO (Partial Store Ordering), or (2) the \"read to read write\" relaxation, or (3) both of them, as it is done in the RMO (Relaxed Memory Ordering) model for instance. We define abstract operational models for these weak memory models based on state machines with (potentially unbounded) FIFO buffers, and we investigate the decidability of their reachability and their repeated reachability problems. We prove that the reachability problem is decidable for the TSO model, as well as for its extension with \"write to write\" relaxation (PSO). Furthermore, we prove that the reachability problem becomes undecidable when the \"read to read write\" relaxation is added to either of these two memory models, and we give a condition under which this addition preserves the decidability of the reachability problem. We show also that the repeated reachability problem is undecidable for all the considered memory models.",
"Mulmuley [Mul12a] recently gave an explicit version of Noether’s Normalization lemma for ring of invariants of matrices under simultaneous conjugation, under the conjecture that there are deterministic black-box algorithms for polynomial identity testing (PIT). He argued that this gives evidence that constructing such algorithms for PIT is beyond current techniques. In this work, we show this is not the case. That is, we improve Mulmuley’s reduction and correspondingly weaken the conjecture regarding PIT needed to give explicit Noether Normalization. We then observe that the weaker conjecture has recently been nearly settled by the authors ([FS12]), who gave quasipolynomial size hitting sets for the class of read-once oblivious algebraic branching programs (ROABPs). This gives the desired explicit Noether Normalization unconditionally, up to quasipolynomial factors. As a consequence of our proof we give a deterministic parallel polynomial-time algorithm for deciding if two matrix tuples have intersecting orbit closures, under simultaneous conjugation. We also study the strength of conjectures that Mulmuley requires to obtain similar results as ours. We prove that his conjectures are stronger, in the sense that the computational model he needs PIT algorithms for is equivalent to the well-known algebraic branching program (ABP) model, which is provably stronger than the ROABP model. Finally, we consider the depth-3 diagonal circuit model as defined by Saxena [Sax08], as PIT algorithms for this model also have implications in Mulmuley’s work. Previous work (such as [ASS12] and [FS12]) have given quasipolynomial size hitting sets for this model. In this work, we give a much simpler construction of such hitting sets, using techniques of Shpilka and Volkovich [SV09].",
"1. Summary In Part I, four ostensibly different theoretical models of induction are presented, in which the problem dealt with is the extrapolation of a very long sequence of symbols—presumably containing all of the information to be used in the induction. Almost all, if not all problems in induction can be put in this form. Some strong heuristic arguments have been obtained for the equivalence of the last three models. One of these models is equivalent to a Bayes formulation, in which a priori probabilities are assigned to sequences of symbols on the basis of the lengths of inputs to a universal Turing machine that are required to produce the sequence of interest as output. Though it seems likely, it is not certain whether the first of the four models is equivalent to the other three. Few rigorous results are presented. Informal investigations are made of the properties of these models. There are discussions of their consistency and meaningfulness, of their degree of independence of the exact nature of the Turing machine used, and of the accuracy of their predictions in comparison to those of other induction methods. In Part II these models are applied to the solution of three problems—prediction of the Bernoulli sequence, extrapolation of a certain kind of Markov chain, and the use of phrase structure grammars for induction. Though some approximations are used, the first of these problems is treated most rigorously. The result is Laplace's rule of succession. The solution to the second problem uses less certain approximations, but the properties of the solution that are discussed, are fairly independent of these approximations. The third application, using phrase structure grammars, is least exact of the three. First a formal solution is presented. Though it appears to have certain deficiencies, it is hoped that presentation of this admittedly inadequate model will suggest acceptable improvements in it. This formal solution is then applied in an approximate way to the determination of the “optimum” phrase structure grammar for a given set of strings. The results that are obtained are plausible, but subject to the uncertainties of the approximation used.",
"We investigate the importance of space when solving problems based on graph distance in the streaming model. In this model, the input graph is presented as a stream of edges in an arbitrary order. The main computational restriction of the model is that we have limited space and therefore cannot store all the streamed data; we are forced to make space-efficient summaries of the data as we go along. For a graph of n vertices and m edges, we show that testing many graph properties, including connectivity (ergo any reasonable decision problem about distances) and bipartiteness, requires Ω(n) bits of space. Given this, we then investigate how the power of the model increases as we relax our space restriction. Our main result is an efficient randomized algorithm that constructs a (2t + 1)-spanner in one pass. With high probability, it uses O(t .n1+1 t log2n) bits of space and processes each edge in the stream in O(t2·n1 t log n) time. We find approximations to diameter and girth via the constructed spanner. For t = Ω(log n log log n), the space requirement of the algorithm is O(n .polylog n), and the per-edge processing time is O(polylog n). We also show a corresponding lower bound of t for the approximation ratio achievable when the space restriction is O(t.n1+1 t log2n).We then consider the scenario in which we are allowed multiple passes over the input stream. Here, we investigate whether allowing these extra passes will compensate for a given space restriction. We show that finding vertices at distance d from a particular vertex will always take d passes, for all d ∈ 1,...,t 2 , when the space restriction is o(n1+1 t). For girth, we show the existence of a direct trade-off between space and passes in the form of a lower bound on the product of the space requirement and number of passes. Finally, we conclude with two general techniques for speeding up the per-edge computation time of streaming algorithms while increasing the space by at most a log factor.",
"The secure information flow problem, which checks whether low-security outputs of a program are influenced by high-security inputs, has many applications in verifying security properties in programs. In this paper we present lazy self-composition, an approach for verifying secure information flow. It is based on self-composition, where two copies of a program are created on which a safety property is checked. However, rather than an eager duplication of the given program, it uses duplication lazily to reduce the cost of verification. This lazy self-composition is guided by an interplay between symbolic taint analysis on an abstract (single copy) model and safety verification on a refined (two copy) model. We propose two verification methods based on lazy self-composition. The first is a CEGAR-style procedure, where the abstract model associated with taint analysis is refined, on demand, by using a model generated by lazy self-composition. The second is a method based on bounded model checking, where taint queries are generated dynamically during program unrolling to guide lazy self-composition and to conclude an adequate bound for correctness. We have implemented these methods on top of the SeaHorn verification platform and our evaluations show the effectiveness of lazy self-composition."
]
} |
cs0605097 | 1681051970 | We introduce knowledge flow analysis, a simple and flexible formalism for checking cryptographic protocols. Knowledge flows provide a uniform language for expressing the actions of principals, assumptions about intruders, and the properties of cryptographic primitives. Our approach enables a generalized two-phase analysis: we extend the two-phase theory by identifying the necessary and sufficient properties of a broad class of cryptographic primitives for which the theory holds. We also contribute a library of standard primitives and show that they satisfy our criteria. | Proof-building tools such as NRL, based on Prolog @cite_1 , have also been helpful for analyzing security protocols. However, they are not fully automatic and often require extensive user intervention. Model checkers lead to completely automated tools which generate counterexamples if a protocol is flawed. For theorem-proving-based approaches, counterexamples are hard to produce. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2003915781"
],
"abstract": [
"The NRL Protocol Analyzer is a prototype special-purpose verification tool, written in Prolog, that has been developed for the analysis of cryptographic protocols that are used to authenticate principals and services and distribute keys in a network. In this paper we give an overview of how the Analyzer works and describe its achievements so far. We also show how our use of the Prolog language benefited us in the design and implementation of the Analyzer."
]
} |
cs0605103 | 2951242165 | Time series are difficult to monitor, summarize and predict. Segmentation organizes time series into few intervals having uniform characteristics (flatness, linearity, modality, monotonicity and so on). For scalability, we require fast linear time algorithms. The popular piecewise linear model can determine where the data goes up or down and at what rate. Unfortunately, when the data does not follow a linear model, the computation of the local slope creates overfitting. We propose an adaptive time series model where the polynomial degree of each interval varies (constant, linear and so on). Given a number of regressors, the cost of each interval is its polynomial degree: constant intervals cost 1 regressor, linear intervals cost 2 regressors, and so on. Our goal is to minimize the Euclidean (l_2) error for a given model complexity. Experimentally, we investigate the model where intervals can be either constant or linear. Over synthetic random walks, historical stock market prices, and electrocardiograms, the adaptive model provides a more accurate segmentation than the piecewise linear model without increasing the cross-validation error or the running time, while providing a richer vocabulary to applications. Implementation issues, such as numerical stability and real-world performance, are discussed. | While we focus on segmentation, there are many methods available for fitting models to continuous variables, such as regression, regression decision trees, Neural Networks @cite_23 , Wavelets @cite_39 , Adaptive Multivariate Splines @cite_46 , Free-Knot Splines @cite_26 , Hybrid Adaptive Splines @cite_19 , etc. | {
"cite_N": [
"@cite_26",
"@cite_39",
"@cite_19",
"@cite_23",
"@cite_46"
],
"mid": [
"2068120653",
"2145316937",
"2049228615",
"1801768344",
"2142057089"
],
"abstract": [
"We describe a Bayesian method, for fitting curves to data drawn from an exponential family, that uses splines for which the number and locations of knots are free parameters. The method uses reversible jump Markov chain Monte Carlo to change the knot configurations and a locality heuristic to speed up mixing. For nonnormal models, we approximate the integrated likelihood ratios needed to compute acceptance probabilities by using the Bayesian information criterion, BIC, under priors that make this approximation accurate. Our technique is based on a marginalised chain on the knot number and locations, but we provide methods for inference about the regression coefficients, and functions of them, in both normal and nonnormal models. Simulation results suggest that the method performs well, and we illustrate the method in two neuroscience applications.",
"An increasing number of computer vision and pattern recognition problems require structured regression techniques. Problems like human pose estimation, unsegmented action recognition, emotion prediction and facial landmark detection have temporal or spatial output dependencies that regular regression techniques do not capture. In this paper we present continuous conditional neural fields (CCNF) – a novel structured regression model that can learn non-linear input-output dependencies, and model temporal and spatial output relationships of varying length sequences. We propose two instances of our CCNF framework: Chain-CCNF for time series modelling, and Grid-CCNF for spatial relationship modelling. We evaluate our model on five public datasets spanning three different regression problems: facial landmark detection in the wild, emotion prediction in music and facial action unit recognition. Our CCNF model demonstrates state-of-the-art performance on all of the datasets used.",
"The selection of variables in regression problems has occupied the minds of many statisticians. Several Bayesian variable selection methods have been developed, and we concentrate on the following methods: Kuo & Mallick, Gibbs Variable Selection (GVS), Stochastic Search Variable Selection (SSVS), adaptive shrinkage with Jefireys' prior or a Laplacian prior, and reversible jump MCMC. We review these methods, in the context of their difierent properties. We then implement the methods in BUGS, using both real and simulated data as examples, and investigate how the difierent methods perform in practice. Our results suggest that SSVS, reversible jump MCMC and adaptive shrinkage methods can all work well, but the choice of which method is better will depend on the priors that are used, and also on how they are implemented.",
"Time series are difficult to monitor, summarize and predict. Segmentation organizes time series into few intervals having uniform characteristics (flatness, linearity, modality, monotonicity and so on). For scalability, we require fast linear time algorithms. The popular piecewise linear model can determine where the data goes up or down and at what rate. Unfortunately, when the data does not follow a linear model, the computation of the local slope creates overfitting. We propose an adaptive time series model where the polynomial degree of each interval vary (constant, linear and so on). Given a number of regressors, the cost of each interval is its polynomial degree: constant intervals cost 1 regressor, linear intervals cost 2 regressors, and so on. Our goal is to minimize the Euclidean (l_2) error for a given model complexity. Experimentally, we investigate the model where intervals can be either constant or linear. Over synthetic random walks, historical stock market prices, and electrocardiograms, the adaptive model provides a more accurate segmentation than the piecewise linear model without increasing the cross-validation error or the running time, while providing a richer vocabulary to applications. Implementation issues, such as numerical stability and real-world performance, are discussed.",
"Recommender problems with large and dynamic item pools are ubiquitous in web applications like content optimization, online advertising and web search. Despite the availability of rich item meta-data, excess heterogeneity at the item level often requires inclusion of item-specific \"factors\" (or weights) in the model. However, since estimating item factors is computationally intensive, it poses a challenge for time-sensitive recommender problems where it is important to rapidly learn factors for new items (e.g., news articles, event updates, tweets) in an online fashion. In this paper, we propose a novel method called FOBFM (Fast Online Bilinear Factor Model) to learn item-specific factors quickly through online regression. The online regression for each item can be performed independently and hence the procedure is fast, scalable and easily parallelizable. However, the convergence of these independent regressions can be slow due to high dimensionality. The central idea of our approach is to use a large amount of historical data to initialize the online models based on offline features and learn linear projections that can effectively reduce the dimensionality. We estimate the rank of our linear projections by taking recourse to online model selection based on optimizing predictive likelihood. Through extensive experiments, we show that our method significantly and uniformly outperforms other competitive methods and obtains relative lifts that are in the range of 10-15 in terms of predictive log-likelihood, 200-300 for a rank correlation metric on a proprietary My Yahoo! dataset; it obtains 9 reduction in root mean squared error over the previously best method on a benchmark MovieLens dataset using a time-based train test data split."
]
} |
cs0605126 | 2951388088 | We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by to give an arbitrarily-good approximation for scheduling equal-work jobs on a multiprocessor. | El @cite_2 consider the wireless transmission problem when the packets have different power functions, giving an iterative algorithm that converges to an optimal solution. They also show how to extend their algorithm to handle the case when the buffer used to store active packets has bounded size and the case when packets have individual deadlines. Their algorithm can also be extended to schedule multiple transmitters, but this does not correspond to a processor scheduling problem. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2952044966"
],
"abstract": [
"We consider the optimal online packet scheduling problem in a single-user energy harvesting wireless communication system, where energy is harvested from natural renewable sources, making future energy arrivals instants and amounts random in nature. The most general case of arbitrary energy arrivals is considered where neither the future energy arrival instants or amount, nor their distribution is known. The problem considered is to adaptively change the transmission rate according to the causal energy arrival information, such that the time by which all packets are delivered is minimized. We assume that all bits have arrived and are ready at the source before the transmission begins. For a minimization problem, the utility of an online algorithm is tested by finding its competitive ratio or competitiveness that is defined to be the maximum of the ratio of the gain of the online algorithm with the optimal offline algorithm over all input sequences. We derive a lower and upper bound on the competitive ratio of any online algorithm to minimize the total transmission time in an energy harvesting system. The upper bound is obtained using a lazy' transmission policy that chooses its transmission power to minimize the transmission time assuming that no further energy arrivals are going to occur in future. The lazy transmission policy is shown to be strictly two-competitive. We also derive an adversarial lower bound that shows that competitive ratio of any online algorithm is at least 1.325."
]
} |
cs0605126 | 2951388088 | We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by to give an arbitrarily-good approximation for scheduling equal-work jobs on a multiprocessor. | Pruhs, van Stee, and Uthaisombut @cite_10 consider the laptop problem version of minimizing makespan for jobs having precedence constraints where all jobs are released immediately and @math . Their main observation, which they call the power equality , is that the sum of the powers of the machines is constant over time in the optimal schedule. They use binary search to determine this value and then reduce the problem to scheduling on related fixed-speed machines. Previously-known @cite_8 @cite_1 approximations for the related fixed-speed machine problem then give an @math -approximation for power-aware makespan. This technique cannot be applied in our setting because the power equality does not hold for jobs with release dates. | {
"cite_N": [
"@cite_1",
"@cite_10",
"@cite_8"
],
"mid": [
"2562694639",
"2418875759",
"1964506478"
],
"abstract": [
"We consider the classical machine scheduling, where n jobs need to be scheduled on m machines, with the goal of minimizing the makespan, i.e., the maximum load of any machine in the schedule. We study inefficiency of schedules that are obtained when jobs arrive sequentially one by one, and choose themselves the machine on which they will be scheduled. Every job is only interested to be on a machine with a small load (and does not care about the loads of other machines). We measure the inefficiency of a schedule as the ratio of the makespan obtained in the worst-case equilibrium schedule, and of the optimum makespan. This ratio is known as the sequential price of anarchy. We also introduce alternative inefficiency measures, which allow for a favorable choice of the order in which the jobs make their decisions. We first disprove the conjecture of Hassin and Yovel (OR Letters, 2015) claiming that for unrelated machines, i.e., for the setting where every job can have a different processing time on every machine, the sequential price of anarchy for m = 2 machines is at most 3. We provide an answer for the setting with m = 2 and show that the sequential price of anarchy grows at least linearly with the number of players. Furthermore, we show that for a certain order of the jobs, the resulting makespan is at most linearly larger than the optimum makespan. Furthermore, we show that if an authority can change the order of the jobs adaptively to the decisions made by the jobs so far (but cannot influence the decisions of the jobs), then there exists an adaptive ordering in which the jobs end up in an optimum schedule. To the end we consider identical machines, i.e., the setting where every job has the same processing time on every machine, and provide matching lower bound examples to the existing upper bounds on the sequential price of anarchy.",
"Makespan scheduling on identical machines is one of the most basic and fundamental packing problems studied in the discrete optimization literature. It asks for an assignment of @math jobs to a set of @math identical machines that minimizes the makespan. The problem is strongly NP-hard, and thus we do not expect a @math -approximation algorithm with a running time that depends polynomially on @math . Furthermore, [3] recently showed that a running time of @math for any @math would imply that the Exponential Time Hypothesis (ETH) fails. A long sequence of algorithms have been developed that try to obtain low dependencies on @math , the better of which achieves a running time of @math [11]. In this paper we obtain an algorithm with a running time of @math , which is tight under ETH up to logarithmic factors on the exponent. Our main technical contribution is a new structural result on the configuration-IP. More precisely, we show the existence of a highly symmetric and sparse optimal solution, in which all but a constant number of machines are assigned a configuration with small support. This structure can then be exploited by integer programming techniques and enumeration. We believe that our structural result is of independent interest and should find applications to other settings. In particular, we show how the structure can be applied to the minimum makespan problem on related machines and to a larger class of objective functions on parallel machines. For all these cases we obtain an efficient PTAS with running time @math .",
"We consider the classical problem of minimizing the total weighted flow-time for unrelated machines in the online non-clairvoyant setting. In this problem, a set of jobs J arrive over time to be scheduled on a set of M machines. Each job J has processing length pj, weight wj, and is processed at a rate of lij when scheduled on machine i. The online scheduler knows the values of wj and lij upon arrival of the job, but is not aware of the quantity pj. We present the first online algorithm that is scalable ((1+e)-speed O(1 2)-competitive for any constant e > 0) for the total weighted flow-time objective. No non-trivial results were known for this setting, except for the most basic case of identical machines. Our result resolves a major open problem in online scheduling theory. Moreover, we also show that no job needs more than a logarithmic number of migrations. We further extend our result and give a scalable algorithm for the objective of minimizing total weighted flow-time plus energy cost for the case of unrelated machines. In this problem, each machine can be sped up by a factor of f-1i(P) when consuming power P, where fi is an arbitrary strictly convex power function. In particular, we get an O(γ2)-competitive algorithm when all power functions are of form sγ. These are the first non-trivial non-clairvoyant results in any setting with heterogeneous machines. The key algorithmic idea is to let jobs migrate selfishly until they converge to an equilibrium. Towards this end, we define a game where each job's utility which is closely tied to the instantaneous increase in the objective the job is responsible for, and each machine declares a policy that assigns priorities to jobs based on when they migrate to it, and the execution speeds. This has a spirit similar to coordination mechanisms that attempt to achieve near optimum welfare in the presence of selfish agents (jobs). To the best our knowledge, this is the first work that demonstrates the usefulness of ideas from coordination mechanisms and Nash equilibria for designing and analyzing online algorithms."
]
} |
cs0605126 | 2951388088 | We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by to give an arbitrarily-good approximation for scheduling equal-work jobs on a multiprocessor. | Minimizing the makespan of tasks with precedence constraints has also been studied in the context of project management. Speed scaling is possible when additional resources can be used to shorten some of the tasks. Pinedo @cite_0 gives heuristics for some variations of this problem. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2085836112"
],
"abstract": [
"We consider the problem of speed scaling to conserve energy in a multiprocessor setting where there are precedence constraints between tasks, and where the performance measure is the makespan. That is, we consider an energy bounded version of the classic problem Pm|prec|Cmax . We extend the standard 3-field notation and denote this problem as Sm|prec, energy|Cmax . We show that, without loss of generality, one need only consider constant power schedules. We then show how to reduce this problem to the problem Qm|prec|Cmax to obtain a poly-log(m)-approximation algorithm."
]
} |
cs0605126 | 2951388088 | We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by to give an arbitrarily-good approximation for scheduling equal-work jobs on a multiprocessor. | The only previous power-aware algorithm to minimize total flow is by Pruhs, Uthaisombut, and Woeginger @cite_20 , who consider scheduling equal-work jobs on a uniprocessor. In this setting, they observe that jobs can be run in order of release time and then prove the following relationships between the speed of each job in the optimal solution: | {
"cite_N": [
"@cite_20"
],
"mid": [
"2016555341"
],
"abstract": [
"We address the problem of sequential single machine scheduling of jobs with release times, where jobs are classified into types, and the machine must be properly configured to handle jobs of a given type. The objective is to minimize the maximum flow time (time from release until completion) of any job. We consider this problem under the assumptions of sequence independent set-up times and item availability with the objective of minimizing the maximum flow time. We present an online algorithm that is O(1)-competitive, that is, always gets within a constant factor of optimal. We also show that exact offline optimization of maximum flow time is NP-hard."
]
} |
cs0605126 | 2951388088 | We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by to give an arbitrarily-good approximation for scheduling equal-work jobs on a multiprocessor. | The idea of power-aware scheduling was proposed by @cite_17 , who use trace-based simulations to estimate how much energy could be saved by slowing the processor to remove idle time. @cite_5 formalize this problem by assuming each job has a deadline and seeking the minimum-energy schedule that satisfies all deadlines. They give an optimal offline algorithm and propose two online algorithms. They show one is @math -competitive, i.e. it uses at most @math times the optimal energy. @cite_4 analyze the other, showing it is @math -competitive. @cite_4 also give another algorithm that is @math -competitive. | {
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_17"
],
"mid": [
"1850884935",
"2173032108",
"1517415100"
],
"abstract": [
"We consider the power-aware problem of scheduling non-preemptively a set of jobs on a single speed-scalable processor so as to minimize the maximum lateness. We consider two variants of the problem: In the budget variant we aim in finding a schedule minimizing the maximum lateness for a given budget of energy, while in the aggregated variant our objective is to find a schedule minimizing a linear combination of maximum lateness and energy. We present polynomial time algorithms for both variants of the problem without release dates and we prove that both variants become strongly NP-hard in the presence of arbitrary release dates. Moreover, we show that, for arbitrary release dates, there is no O(1)-competitive online algorithm for the budget variant and we propose a 2-competitive one for the aggregated variant.",
"We study an energy conservation problem where a variable-speed processor is equipped with a sleep state. Executing jobs at high speeds and then setting the processor asleep is an approach that can lead to further energy savings compared to standard dynamic speed scaling. We consider classical deadline-based scheduling, i.e. each job is specified by a release time, a deadline and a processing volume. For general convex power functions, [12] devised an offline 2-approximation algorithm. Roughly speaking, the algorithm schedules jobs at a critical speed s_crit that yields the smallest energy consumption while jobs are processed. For power functions P(s) = s^α + γ, where s is the processor speed, [11] gave an (α^α + 2)-competitive online algorithm. We investigate the offline setting of speed scaling with a sleep state. First we prove NP-hardness of the optimization problem. Additionally, we develop lower bounds, for general convex power functions: No algorithm that constructs s_crit-schedules, which execute jobs at speeds of at least s_crit, can achieve an approximation factor smaller than 2. Furthermore, no algorithm that minimizes the energy expended for processing jobs can attain an approximation ratio smaller than 2. We then present an algorithmic framework for designing good approximation algorithms. For general convex power functions, we derive an approximation factor of 4/3. For power functions P(s) = βs^α + γ, we obtain an approximation of 137/117 < 1.171. We finally show that our framework yields the best approximation guarantees for the class of s_crit-schedules. For general convex power functions, we give another 2-approximation algorithm. For functions P(s) = βs^α + γ, we present tight upper and lower bounds on the best possible approximation factor. The ratio is exactly eW_{-1}(-e^{-1-1/e}) / (eW_{-1}(-e^{-1-1/e}) + 1) < 1.211, where W_{-1} is the lower branch of the Lambert W function.",
"We study the problem of non-preemptive scheduling to minimize energy consumption for devices that allow dynamic voltage scaling. Specifically, consider a device that can process jobs in a non-preemptive manner. The input consists of (i) the set R of available speeds of the device, (ii) a set J of jobs, and (iii) a precedence constraint Π among J. Each job j in J, defined by its arrival time aj, deadline dj, and amount of computation cj, is supposed to be processed by the device at a speed in R. Under the assumption that a higher speed means higher energy consumption, the power-saving scheduling problem is to compute a feasible schedule with speed assignment for the jobs in J such that the required energy consumption is minimized. This paper focuses on the setting of weakly dynamic voltage scaling, i.e., speed change is not allowed in the middle of processing a job. To demonstrate that this restriction on many portable power-aware devices introduces hardness to the power-saving scheduling problem, we prove that the problem is NP-hard even if aj = aj ′ and dj = dj ′ hold for all j,j ′∈ Jand |R|=2. If |R|<∞, we also give fully polynomial-time approximation schemes for two cases of the general NP-hard problem: (a) all jobs share a common arrival time, and (b) Π = ∅ and for any j,j ′ ∈ J, aj ≤ aj ′ implies dj ≤ dj ′. To the best of our knowledge, there is no previously known approximation algorithm for any special case of the NP-hard problem."
]
} |
cs0605126 | 2951388088 | We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by to give an arbitrarily-good approximation for scheduling equal-work jobs on a multiprocessor. | Power-aware scheduling of jobs with deadlines has also been considered with the goal of minimizing the CPU's maximum temperature. @cite_4 propose this problem and give an offline solution based on convex programming. Bansal and Pruhs @cite_3 analyze the online algorithms discussed above in the context of minimizing maximum temperature. | {
"cite_N": [
"@cite_4",
"@cite_3"
],
"mid": [
"2173032108",
"1517415100"
],
"abstract": [
"We study an energy conservation problem where a variable-speed processor is equipped with a sleep state. Executing jobs at high speeds and then setting the processor asleep is an approach that can lead to further energy savings compared to standard dynamic speed scaling. We consider classical deadline-based scheduling, i.e. each job is specified by a release time, a deadline and a processing volume. For general convex power functions, [12] devised an offline 2-approximation algorithm. Roughly speaking, the algorithm schedules jobs at a critical speed s_crit that yields the smallest energy consumption while jobs are processed. For power functions P(s) = s^α + γ, where s is the processor speed, [11] gave an (α^α + 2)-competitive online algorithm. We investigate the offline setting of speed scaling with a sleep state. First we prove NP-hardness of the optimization problem. Additionally, we develop lower bounds, for general convex power functions: No algorithm that constructs s_crit-schedules, which execute jobs at speeds of at least s_crit, can achieve an approximation factor smaller than 2. Furthermore, no algorithm that minimizes the energy expended for processing jobs can attain an approximation ratio smaller than 2. We then present an algorithmic framework for designing good approximation algorithms. For general convex power functions, we derive an approximation factor of 4/3. For power functions P(s) = βs^α + γ, we obtain an approximation of 137/117 < 1.171. We finally show that our framework yields the best approximation guarantees for the class of s_crit-schedules. For general convex power functions, we give another 2-approximation algorithm. For functions P(s) = βs^α + γ, we present tight upper and lower bounds on the best possible approximation factor. The ratio is exactly eW_{-1}(-e^{-1-1/e}) / (eW_{-1}(-e^{-1-1/e}) + 1) < 1.211, where W_{-1} is the lower branch of the Lambert W function.",
"We study the problem of non-preemptive scheduling to minimize energy consumption for devices that allow dynamic voltage scaling. Specifically, consider a device that can process jobs in a non-preemptive manner. The input consists of (i) the set R of available speeds of the device, (ii) a set J of jobs, and (iii) a precedence constraint Π among J. Each job j in J, defined by its arrival time aj, deadline dj, and amount of computation cj, is supposed to be processed by the device at a speed in R. Under the assumption that a higher speed means higher energy consumption, the power-saving scheduling problem is to compute a feasible schedule with speed assignment for the jobs in J such that the required energy consumption is minimized. This paper focuses on the setting of weakly dynamic voltage scaling, i.e., speed change is not allowed in the middle of processing a job. To demonstrate that this restriction on many portable power-aware devices introduces hardness to the power-saving scheduling problem, we prove that the problem is NP-hard even if aj = aj ′ and dj = dj ′ hold for all j,j ′∈ Jand |R|=2. If |R|<∞, we also give fully polynomial-time approximation schemes for two cases of the general NP-hard problem: (a) all jobs share a common arrival time, and (b) Π = ∅ and for any j,j ′ ∈ J, aj ≤ aj ′ implies dj ≤ dj ′. To the best of our knowledge, there is no previously known approximation algorithm for any special case of the NP-hard problem."
]
} |
cs0605126 | 2951388088 | We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by to give an arbitrarily-good approximation for scheduling equal-work jobs on a multiprocessor. | A different variation is to assume that the processor can only choose between discrete speeds. @cite_7 show that minimizing energy consumption in this setting while meeting all deadlines is NP-hard, but give approximations for some special cases. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2133039052"
],
"abstract": [
"When peak performance is unnecessary, Dynamic Voltage Scaling (DVS) can be used to reduce the dynamic power consumption of embedded multiprocessors. In future technologies, however, static power consumption due to leakage current is expected to increase significantly. Then it will be more effective to limit the number of processors employed (i.e., turn some of them off), or to use a combination of DVS and processor shutdown. In this paper, leakage-aware scheduling heuristics are presented that determine the best trade-off between these three techniques: DVS, processor shutdown, and finding the optimal number of processors. Experimental results obtained using a public benchmark set of task graphs and real parallel applications show that our approach reduces the total energy consumption by up to 46 for tight deadlines (1.5× the critical path length) and by up to 73 for loose deadlines (8× the critical path length) compared to an approach that only employs DVS. We also compare the energy consumed by our scheduling algorithms to two absolute lower bounds, one for the case where all processors continuously run at the same frequency, and one for the case where the processors can run at different frequencies and these frequencies may change over time. The results show that the energy reduction achieved by our best approach is close to these theoretical limits."
]
} |
cs0605126 | 2951388088 | We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by to give an arbitrarily-good approximation for scheduling equal-work jobs on a multiprocessor. | Another algorithmic approach to power management is to identify times when the processor or parts of it can be partially or completely powered down. Irani and Pruhs @cite_15 survey work along these lines as well as approaches based on speed scaling. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2005012890"
],
"abstract": [
"We survey recent research that has appeared in the theoretical computer science literature on algorithmic problems related to power management. We will try to highlight some open problem that we feel are interesting. This survey places more concentration on lines of research of the authors: managing power using the techniques of speed scaling and power-down which are also currently the dominant techniques in practice."
]
} |
cs0605135 | 2952537745 | In this work we focus on the general relay channel. We investigate the application of estimate-and-forward (EAF) to different scenarios. Specifically, we consider assignments of the auxiliary random variables that always satisfy the feasibility constraints. We first consider the multiple relay channel and obtain an achievable rate without decoding at the relays. We demonstrate the benefits of this result via an explicit discrete memoryless multiple relay scenario where multi-relay EAF is superior to multi-relay decode-and-forward (DAF). We then consider the Gaussian relay channel with coded modulation, where we show that a three-level quantization outperforms the Gaussian quantization commonly used to evaluate the achievable rates in this scenario. Finally we consider the cooperative general broadcast scenario with a multi-step conference. We apply estimate-and-forward to obtain a general multi-step achievable rate region. We then give an explicit assignment of the auxiliary random variables, and use this result to obtain an explicit expression for the single common message broadcast scenario with a two-step conference. | An extension of the relay scenario to a hybrid broadcast relay system was introduced in @cite_24 in which the authors applied a combination of EAF and DAF strategies to the independent broadcast channel with a single common message, and then extended this strategy to the multi-step conference. In @cite_5 we used both a single-step and a two-step conference with orthogonal conferencing channels in the discrete memoryless framework. A thorough investigation of the broadcast-relay channel was done in @cite_25 , where the authors applied the DAF strategy to the case where only one user is helping the other user, and also presented an upper bound for this case. Then, the fully cooperative scenario was analyzed. The authors applied both the DAF and the EAF methods to that case. | {
"cite_N": [
"@cite_24",
"@cite_5",
"@cite_25"
],
"mid": [
"2136209931",
"2152137554",
"2114973618"
],
"abstract": [
"We propose novel cooperative transmission protocols for delay-limited coherent fading channels consisting of N (half-duplex and single-antenna) partners and one cell site. In our work, we differentiate between the relay, cooperative broadcast (down-link), and cooperative multiple-access (CMA) (up-link) channels. The proposed protocols are evaluated using Zheng-Tse diversity-multiplexing tradeoff. For the relay channel, we investigate two classes of cooperation schemes; namely, amplify and forward (AF) protocols and decode and forward (DF) protocols. For the first class, we establish an upper bound on the achievable diversity-multiplexing tradeoff with a single relay. We then construct a new AF protocol that achieves this upper bound. The proposed algorithm is then extended to the general case with (N-1) relays where it is shown to outperform the space-time coded protocol of Laneman and Wornell without requiring decoding encoding at the relays. For the class of DF protocols, we develop a dynamic decode and forward (DDF) protocol that achieves the optimal tradeoff for multiplexing gains 0lesrles1 N. Furthermore, with a single relay, the DDF protocol is shown to dominate the class of AF protocols for all multiplexing gains. The superiority of the DDF protocol is shown to be more significant in the cooperative broadcast channel. The situation is reversed in the CMA channel where we propose a new AF protocol that achieves the optimal tradeoff for all multiplexing gains. A distinguishing feature of the proposed protocols in the three scenarios is that they do not rely on orthogonal subspaces, allowing for a more efficient use of resources. In fact, using our results one can argue that the suboptimality of previously proposed protocols stems from their use of orthogonal subspaces rather than the half-duplex constraint.",
"In this paper, we consider a discrete memoryless state-dependent relay channel with non-causal Channel State Information (CSI). We investigate three different cases in which perfect channel states can be known non-causally: i) only to the source, ii) only to the relay or iii) both to the source and to the relay node. For these three cases we establish lower bounds on the channel capacity (achievable rates) based on using Gel'fand-Pinsker coding at the nodes where the CSI is available and using Compress-and-Forward (CF) strategy at the relay. Furthermore, for the general Gaussian relay channel with additive independent and identically distributed (i.i.d) states and noise, we obtain lower bounds on the capacity for the cases in which CSI is available at the source or at the relay. We also compare our derived bounds with the previously obtained results which were based on Decode-and-Forward (DF) strategy, and we show the cases in which our derived lower bounds outperform DF based bounds, and can achieve the rates close to the upper bound.",
"In this paper, we consider a three-terminal state-dependent relay channel (RC) with the channel state noncausally available at only the relay. Such a model may be useful for designing cooperative wireless networks with some terminals equipped with cognition capabilities, i.e., the relay in our setup. In the discrete memoryless (DM) case, we establish lower and upper bounds on channel capacity. The lower bound is obtained by a coding scheme at the relay that uses a combination of codeword splitting, Gel'fand-Pinsker binning, and decode-and-forward (DF) relaying. The upper bound improves upon that obtained by assuming that the channel state is available at the source, the relay, and the destination. For the Gaussian case, we also derive lower and upper bounds on the capacity. The lower bound is obtained by a coding scheme at the relay that uses a combination of codeword splitting, generalized dirty paper coding (DPC), and DF relaying; the upper bound is also better than that obtained by assuming that the channel state is available at the source, the relay, and the destination. In the case of degraded Gaussian channels, the lower bound meets with the upper bound for some special cases, and, so, the capacity is obtained for these cases. Furthermore, in the Gaussian case, we also extend the results to the case in which the relay operates in a half-duplex mode."
]
} |
quant-ph0604141 | 2949446743 | We show that quantum circuits cannot be made fault-tolerant against a depolarizing noise level of approximately 45%, thereby improving on a previous bound of 50% (due to Razborov). Our precise quantum circuit model enables perfect gates from the Clifford group (CNOT, Hadamard, S, X, Y, Z) and arbitrary additional one-qubit gates that are subject to that much depolarizing noise. We prove that this set of gates cannot be universal for arbitrary (even classical) computation, from which the upper bound on the noise threshold for fault-tolerant quantum computation follows. | Finally, we note that our work is related to, and partly stimulated by, the circle of ideas surrounding measurement-based quantum computation that was largely initiated by @cite_1 @cite_9 . | {
"cite_N": [
"@cite_9",
"@cite_1"
],
"mid": [
"2027707976",
"1988174048"
],
"abstract": [
"The effects of any quantum measurement can be described by a collection of measurement operators l_brace M sub m r_brace acting on the quantum state of the measured system. However, the Hilbert space formalism tends to obscure the relationship between the measurement results and the physical properties of the measured system. In this paper, a characterization of measurement operators in terms of measurement resolution and disturbance is developed. It is then possible to formulate uncertainty relations for the measurement process that are valid for arbitrary input states. The motivation of these concepts is explained from a quantum communication viewpoint. It is shown that the intuitive interpretation of uncertainty as a relation between measurement resolution and disturbance provides a valid description of measurement back action. Possible applications to quantum cryptography, quantum cloning, and teleportation are discussed.",
"The most efficient way of obtaining information about the state of a quantum system is not always a direct measurement. It is sometimes preferable to extend the original Hilbert space of states into a larger space, and then to perform a quantum measurement in the enlarged space. Such an extension is always possible, by virtue of Neumark's theorem. The physical interpretation usually given to that theorem is the introduction of an auxiliary quantum system, prepared in a standard state, and the execution of a quantum measurement on both systems together. However, this widespread interpretation is unacceptable, because the statistical properties of the supposedly standard auxiliary system are inseparably entangled with those of the original, unknown system. A different method of preparing the auxiliary system is proposed, and shown to be physically acceptable."
]
} |
cs0604015 | 2953251814 | Although the Internet AS-level topology has been extensively studied over the past few years, little is known about the details of the AS taxonomy. An AS "node" can represent a wide variety of organizations, e.g., large ISP, or small private business, university, with vastly different network characteristics, external connectivity patterns, network growth tendencies, and other properties that we can hardly neglect while working on veracious Internet representations in simulation environments. In this paper, we introduce a radically new approach based on machine learning techniques to map all the ASes in the Internet into a natural AS taxonomy. We successfully classify 95.3% of ASes with expected accuracy of 78.1%. We release to the community the AS-level topology dataset augmented with: 1) the AS taxonomy information and 2) the set of AS attributes we used to classify ASes. We believe that this dataset will serve as an invaluable addition to further understanding of the structure and evolution of the Internet. | Several works have developed techniques decomposing the AS topology into different levels or tiers based on connectivity properties of BGP-derived AS graphs. Govindan and Reddy @cite_13 propose a classification of ASes into four levels based on their AS degree. Ge et al. @cite_5 classify ASes into seven tiers based on inferred customer-to-provider relationships. Their classification exploits the idea that provider ASes should be in higher tiers than their customers. Subramanian et al. @cite_0 classify ASes into five tiers based on inferred customer-to-provider as well as peer-to-peer relationships. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_13"
],
"mid": [
"2105742585",
"2160565743",
"2151972741"
],
"abstract": [
"The Internet connectivity in the autonomous system (AS) level reflects the commercial relationship between ASes. A connection between two ASes could be of type customer-provider when one AS is a provider of the other AS, or of type peer-peer, if they are peering ASes. This commercial relationship induces a global hierarchical structure which is a key ingredient in the ability to understand the topological structure of the AS connectivity graph. Unfortunately, it is very difficult to collect data regarding the actual type of the relationships between ASes, and in general this information is not part of the collected AS connectivity data. The Type of Relationship (ToR) problem attempts to address this shortcoming, by inferring the type of relationship between connected ASes based on their routing policies. However, the approaches presented so far are local in nature and do not capture the global hierarchical structure. In this work we define a novel way to infer this type of relationship from the collected data, taking into consideration both local policies and global hierarchy constrains. We define the Acyclic Type of Relationship AToR problem that captures this global hierarchy and present an efficient algorithm that allows determining if there is a hierarchical assignment without invalid paths. We then show that the related general optimization problem is NP-complete and present a 2 3 approximation algorithm where the objective function is to minimize the total number of local policy mismatches. We support our approach by extensive experiments and simulation results showing that our algorithms classify the type of relationship between ASes much better than all previous algorithms.",
"The delivery of IP traffic through the Internet depends on the complex interactions between thousands of autonomous systems (AS) that exchange routing information using the border gateway protocol (BGP). This paper investigates the topological structure of the Internet in terms of customer-provider and peer-peer relationships between autonomous systems, as manifested in BGP routing policies. We describe a technique for inferring AS relationships by exploiting partial views of the AS graph available from different vantage points. Next we apply the technique to a collection of ten BGP routing tables to infer the relationships between neighboring autonomous systems. Based on these results, we analyze the hierarchical structure of the Internet and propose a five-level classification of AS. Our characterization differs from previous studies by focusing on the commercial relationships between autonomous systems rather than simply the connectivity between the nodes.",
"The Internet consists of rapidly increasing number of hosts interconnected by constantly evolving networks of links and routers. Interdomain routing in the Internet is coordinated by the Border Gateway Protocol (BGP). The BGP allows each autonomous system (AS) to choose its own administrative policy in selecting routes and propagating reachability information to others. These routing policies are constrained by the contractual commercial agreements between administrative domains. For example, an AS sets its policy so that it does not provide transit services between its providers. Such policies imply that AS relationships are an important aspect of the Internet structure. We propose an augmented AS graph representation that classifies AS relationships into customer-provider, peering, and sibling relationships. We classify the types of routes that can appear in BGP routing tables based on the relationships between the ASs in the path and present heuristic algorithms that infer AS relationships from BGP routing tables. The algorithms are tested on publicly available BGP routing tables. We verify our inference results with AT&T internal information on its relationship with neighboring ASs. As much as 99.1 of our inference results are confirmed by the AT&T internal information. We also verify our inferred sibling relationships with the information acquired from the WHOIS lookup service. More than half of our inferred sibling-to-sibling relationships are confirmed by the WHOIS lookup service. To the best of our knowledge, there has been no publicly available information about AS relationships and this is the first attempt in understanding and inferring AS relationships in the Internet. We show evidence that some routing table entries stem from router misconfigurations."
]
} |
math0603097 | 2949404783 | We use a variational principle to prove an existence and uniqueness theorem for planar weighted Delaunay triangulations (with non-intersecting site-circles) with prescribed combinatorial type and circle intersection angles. Such weighted Delaunay triangulations may be interpreted as images of hyperbolic polyhedra with one vertex on and the remaining vertices beyond the infinite boundary of hyperbolic space. Thus the main theorem states necessary and sufficient conditions for the existence and uniqueness of such polyhedra with prescribed combinatorial type and dihedral angles. More generally, we consider weighted Delaunay triangulations in piecewise flat surfaces, allowing cone singularities with prescribed cone angles in the vertices. The material presented here extends work by Rivin on Delaunay triangulations and ideal polyhedra. | For a comprehensive bibliography on circle packings and circle patterns we refer to Stephenson's monograph @cite_17 . Here, we can only attempt to briefly discuss some of the most important and most closely related results. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2082107757"
],
"abstract": [
"Part I. An Overview of Circle Packing: 1. A circle packing menagerie 2. Circle packings in the wild Part II. Rigidity: Maximal Packings: 3. Preliminaries: topology, combinatorics, and geometry 4. Statement of the fundamental result 5. Bookkeeping and monodromy 6. Proof for combinatorial closed discs 7. Proof for combinatorial spheres 8. Proof for combinatorial open discs 9. Proof for combinatorial surfaces Part III. Flexibility: Analytic Functions: 10. The intuitive landscape 11. Discrete analytic functions 12. Construction tools 13. Discrete analytic functions on the disc 14. Discrete entire functions 15. Discrete rational functions 16. Discrete analytic functions on Riemann surfaces 17. Discrete conformal structure 18. Random walks on circle packings Part IV: 19. Thurston's Conjecture 20. Extending the Rodin Sullivan theorem 21. Approximation of analytic functions 22. Approximation of conformal structures 23. Applications Appendix A. Primer on classical complex analysis Appendix B. The ring lemma Appendix C. Doyle spirals Appendix D. The brooks parameter Appendix E. Schwarz and buckyballs Appendix F. Inversive distance packings Appendix G. Graph embedding Appendix H. Square grid packings Appendix I. Experimenting with circle packings."
]
} |
math0603097 | 2949404783 | We use a variational principle to prove an existence and uniqueness theorem for planar weighted Delaunay triangulations (with non-intersecting site-circles) with prescribed combinatorial type and circle intersection angles. Such weighted Delaunay triangulations may be interpreted as images of hyperbolic polyhedra with one vertex on and the remaining vertices beyond the infinite boundary of hyperbolic space. Thus the main theorem states necessary and sufficient conditions for the existence and uniqueness of such polyhedra with prescribed combinatorial type and dihedral angles. More generally, we consider weighted Delaunay triangulations in piecewise flat surfaces, allowing cone singularities with prescribed cone angles in the vertices. The material presented here extends work by Rivin on Delaunay triangulations and ideal polyhedra. | Recently, Schlenker has treated weighted Delaunay triangulations in piecewise flat and piecewise hyperbolic surfaces using a deformation method @cite_12 . He obtains an existence and uniqueness theorem [Theorem 1.4] schlenker05b with the same scope as Theorem , but the conditions for existence are in terms of angle sums over paths like in Theorem . This seems to be the first time that this type of conditions was obtained for circle patterns with cone singularities. It would be interesting to show directly that the conditions of his theorem are equivalent to the conditions of Theorem . | {
"cite_N": [
"@cite_12"
],
"mid": [
"2623449798"
],
"abstract": [
"An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi–Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail."
]
} |
math0603097 | 2949404783 | We use a variational principle to prove an existence and uniqueness theorem for planar weighted Delaunay triangulations (with non-intersecting site-circles) with prescribed combinatorial type and circle intersection angles. Such weighted Delaunay triangulations may be interpreted as images of hyperbolic polyhedra with one vertex on and the remaining vertices beyond the infinite boundary of hyperbolic space. Thus the main theorem states necessary and sufficient conditions for the existence and uniqueness of such polyhedra with prescribed combinatorial type and dihedral angles. More generally, we consider weighted Delaunay triangulations in piecewise flat surfaces, allowing cone singularities with prescribed cone angles in the vertices. The material presented here extends work by Rivin on Delaunay triangulations and ideal polyhedra. | The research for this article was conducted almost entirely while I enjoyed the hospitality of the , where I participated in the Research in Pairs Program together with Jean-Marc Schlenker, who was working on his closely related paper @cite_12 . I am grateful for the excellent working conditions I experienced in Oberwolfach and for the extremely inspiring and fruitful discussions with Jean-Marc, who was closely involved in the work presented here. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2122575992"
],
"abstract": [
"This book arises from a series of workshops on collaborative learning, that gathered together 20 scholars from the disciplines of psychology, education and computer science. The series was part of a research program entitled 'Learning in Humans and Machines' (LHM), launched by Peter Reimann and Hans Spada, and funded by the European Science Foundation. This program aimed to develop a multidisciplinary dialogue on learning, involving mainly scholars from cognitive psychology, educational science, and artificial intelligence (including machine learning). During the preparation of the program, Agnes Blaye, Claire O'Malley, Michael Baker and I developed a theme on collaborative learning. When the program officially began, 12 members were selected to work on this theme and formed the so-called 'task force 5'. I became the coordinator of the group. This group organised two workshops, in Sitges (Spain, 1994) and Aix-en-Provence (France, 1995). In 1996, the group was enriched with new members to reach its final size. Around 20 members met in the subsequent workshops, at Samoens (France, 1996), Houthalen (Belgium, 1996) and Mannheim (Germany, 1997). Several individuals joined the group for some time but have not written a chapter. I would nevertheless like to acknowledge their contributions to our activities: George Bilchev, Stevan Harnad, Calle Jansson and Claire O'Malley."
]
} |
cs0603115 | 1539159366 | The Graphic Processing Unit (GPU) has evolved into a powerful and flexible processor. The latest graphic processors provide fully programmable vertex and pixel processing units that support vector operations up to single floating-point precision. This computational power is now being used for general-purpose computations. However, some applications require higher precision than single precision. This paper describes the emulation of a 44-bit floating-point number format and its corresponding operations. An implementation is presented along with performance and accuracy results. | * Libraries based on a floating-point representation: The actual trend of CPUs is to have highly optimized floating-point operators. Some libraries, such as the MPFUN @cite_8 , exploit these floating-point operators by using an array of floating-point numbers. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2121344286"
],
"abstract": [
"The aggressive optimization of floating-point computations is an important problem in high-performance computing. Unfortunately, floating-point instruction sets have complicated semantics that often force compilers to preserve programs as written. We present a method that treats floating-point optimization as a stochastic search problem. We demonstrate the ability to generate reduced precision implementations of Intel's handwritten C numeric library which are up to 6 times faster than the original code, and achieve end-to-end speedups of over 30 on a direct numeric simulation and a ray tracer by optimizing kernels that can tolerate a loss of precision while still remaining correct. Because these optimizations are mostly not amenable to formal verification using the current state of the art, we present a stochastic search technique for characterizing maximum error. The technique comes with an asymptotic guarantee and provides strong evidence of correctness."
]
} |