Dataset schema: aid (string, lengths 9-15), mid (string, lengths 7-10), abstract (string, lengths 78-2.56k), related_work (string, lengths 92-1.77k), ref_abstract (dict).
math0304100
2115080784
The Shub-Smale Tau Conjecture is a hypothesis relating the number of integral roots of a polynomial f in one variable and the Straight-Line Program (SLP) complexity of f. A consequence of the truth of this conjecture is that, for the Blum-Shub-Smale model over the complex numbers, P differs from NP. We prove two weak versions of the Tau Conjecture and in so doing show that the Tau Conjecture follows from an even more plausible hypothesis. Our results follow from a new p-adic analogue of earlier work relating real algebraic geometry to additive complexity. For instance, we can show that a nonzero univariate polynomial of additive complexity s can have no more than 15 + s^3(s+1)(7.5)^s s! = O(e^{s log s}) roots in the 2-adic rational numbers Q_2, thus dramatically improving an earlier result of the author. This immediately implies the same bound on the number of ordinary rational roots, whereas the best previous upper bound via earlier techniques from real algebraic geometry was a quantity in Omega((22.6)^{s^2}). This paper presents another step in the author's program of establishing an algorithmic arithmetic version of fewnomial theory.
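To make the quoted bound concrete, here is a small, purely illustrative Python sketch (not part of the paper) that evaluates the stated 2-adic root-count bound 15 + s^3(s+1)(7.5)^s s! for small additive complexities s and contrasts its growth with the earlier quantity (22.6)^(s^2). The constants are taken at face value from the abstract, and the function names are ours.

```python
# Illustrative only: evaluates the root-count bound quoted in the abstract,
# 15 + s^3 (s+1) (7.5)^s s!, for small additive complexities s, and compares
# its growth with the earlier real-algebraic-geometry quantity (22.6)^(s^2).
# The constants are taken verbatim from the abstract; this is not part of the proof.
from math import factorial

def two_adic_root_bound(s: int) -> float:
    """Upper bound on the number of roots in Q_2 of a univariate
    polynomial of additive complexity s (as stated in the abstract)."""
    return 15 + s**3 * (s + 1) * 7.5**s * factorial(s)

def earlier_bound_growth(s: int) -> float:
    """Growth of the earlier bound, Omega((22.6)^(s^2))."""
    return 22.6 ** (s * s)

for s in range(1, 7):
    print(f"s={s}: new bound ~ {two_adic_root_bound(s):.3e}, "
          f"earlier growth ~ {earlier_bound_growth(s):.3e}")
```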
As for earlier approaches to the @math -conjecture, some investigations into @math were initiated by de Melo and Svaiter @cite_8 (in the special case of constant polynomials) and Moreira @cite_14 . Unfortunately, not much more is known than (a) @math is "usually" bounded below by @math (where @math is @math and the maximum ranges over all coefficients @math of @math ) and (b) @math for @math sufficiently large [Thm. 3, gugu].
{ "cite_N": [ "@cite_14", "@cite_8" ], "mid": [ "2591592591", "30524954" ], "abstract": [ "This paper considers the problem of designing maximum distance separable (MDS) codes over small fields with constraints on the support of their generator matrices. For any given @math binary matrix @math , the GM-MDS conjecture, proposed by , states that if @math satisfies the so-called MDS condition, then for any field @math of size @math , there exists an @math MDS code whose generator matrix @math , with entries in @math , fits the matrix @math (i.e., @math is the support matrix of @math ). Despite all the attempts by the coding theory community, this conjecture remains still open in general. It was shown, independently by and , that the GM-MDS conjecture holds if the following conjecture, referred to as the TM-MDS conjecture, holds: if @math satisfies the MDS condition, then the determinant of a transform matrix @math , such that @math fits @math , is not identically zero, where @math is a Vandermonde matrix with distinct parameters. In this work, we first reformulate the TM-MDS conjecture in terms of the Wronskian determinant, and then present an algebraic-combinatorial approach based on polynomial-degree reduction for proving this conjecture. Our proof technique's strength is based primarily on reducing inherent combinatorics in the proof. We demonstrate the strength of our technique by proving the TM-MDS conjecture for the cases where the number of rows ( @math ) of @math is upper bounded by @math . For this class of special cases of @math where the only additional constraint is on @math , only cases with @math were previously proven theoretically, and the previously used proof techniques are not applicable to cases with @math .", "In this paper, we prove tight lower bounds on the smallest degree of a nonzero polynomial in the ideal generated by @math or @math in the polynomial ring @math , @math are coprime, which is called over @math . The immunity of @math is lower bounded by @math , which is achievable when @math is a multiple of @math ; the immunity of @math is exactly @math for every @math and @math . Our result improves the previous bound @math by Green. We observe how immunity over @math is related to @math circuit lower bound. For example, if the immunity of @math over @math is lower bounded by @math , and @math , then @math requires @math circuit of exponential size to compute." ] }
cs0304003
2949728882
Inevitability properties in branching temporal logics are of the syntax "forall eventually phi", where phi is an arbitrary (timed) CTL formula. In the sense that "good things will happen", they are parallel to the "liveness" properties in linear temporal logics. Such inevitability properties in dense-time logics can be analyzed with greatest fixpoint calculation. We present algorithms to model-check inevitability properties both with and without the requirement of non-Zeno computations. We discuss a technique for early decision on greatest fixpoints in the temporal logics, and experiment with the effect of non-Zeno computations on the evaluation of greatest fixpoints. We also discuss the TCTL subclass with only universal path quantifiers, which allows for the safe abstraction analysis of inevitability properties. Finally, we report our implementation and experiments to show the plausibility of our ideas.
In @cite_32 , an efficient symbolic model-checking algorithm for TCTL was proposed. However, the algorithm does not distinguish between Zeno and non-Zeno computations. Instead, the authors proposed to modify TAs with Zeno computations into ones without. In comparison, our greatest fixpoint evaluation algorithm is innately able to quantify over non-Zeno computations.
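For readers unfamiliar with the greatest fixpoint evaluation mentioned here and in the abstract above, the following is a minimal explicit-state Python sketch of the generic scheme: inevitability AF phi is checked via its dual EG(not phi) = nu Z. (not phi) and EX Z. This is only an illustration under the assumption of a finite, untimed state graph; the cited algorithms work symbolically over clock zones and additionally restrict quantification to non-Zeno computations, none of which is modeled here. All names are illustrative.

```python
# A minimal, explicit-state sketch of greatest-fixpoint evaluation, the generic
# scheme the paragraph refers to. Real TCTL checkers run this symbolically over
# clock zones and additionally restrict to non-Zeno runs; none of that machinery
# is reproduced here.
from typing import Dict, Set

def gfp(states: Set[str], step) -> Set[str]:
    """nu Z. step(Z): iterate downward from the full state set until stable."""
    z = set(states)
    while True:
        nz = step(z)
        if nz == z:
            return z
        z = nz

def eg_phi(states: Set[str], succ: Dict[str, Set[str]], phi: Set[str]) -> Set[str]:
    """EG phi = nu Z. phi & EX Z.  Its complement gives AF phi (inevitability)."""
    def step(z: Set[str]) -> Set[str]:
        return {s for s in phi if succ.get(s, set()) & z}
    return gfp(states, step)

# Toy usage: states from which "goal" is NOT inevitable, i.e. EG(not goal).
states = {"a", "b", "goal"}
succ = {"a": {"b"}, "b": {"a", "goal"}, "goal": {"goal"}}
not_goal = states - {"goal"}
print(eg_phi(states, succ, not_goal))   # {'a', 'b'}: runs exist that forever avoid goal
```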
{ "cite_N": [ "@cite_32" ], "mid": [ "2086092403" ], "abstract": [ "This paper presents an inference algorithm that can discover temporal logic properties of a system from data. Our algorithm operates on finite time system trajectories that are labeled according to whether or not they demonstrate some desirable system properties (e.g. \"the car successfully stops before hitting an obstruction\"). A temporal logic formula that can discriminate between the desirable behaviors and the undesirable ones is constructed. The formulae also indicate possible causes for each set of behaviors (e.g. \"If the speed of the car is greater than 15 m s within 0.5s of brake application, the obstruction will be struck\") which can be used to tune designs or to perform on-line monitoring to ensure the desired behavior. We introduce reactive parameter signal temporal logic (rPSTL), a fragment of parameter signal temporal logic (PSTL) that is expressive enough to capture causal, spatial, and temporal relationships in data. We define a partial order over the set of rPSTL formulae that is based on language inclusion. This order enables a directed search over this set, i.e. given a candidate rPSTL formula that does not adequately match the observed data, we can automatically construct a formula that will fit the data at least as well. Two case studies, one involving a cattle herding scenario and one involving a stochastic hybrid gene circuit model, are presented to illustrate our approach." ] }
cs0304003
2949728882
Inevitability properties in branching temporal logics are of the syntax "forall eventually phi", where phi is an arbitrary (timed) CTL formula. In the sense that "good things will happen", they are parallel to the "liveness" properties in linear temporal logics. Such inevitability properties in dense-time logics can be analyzed with greatest fixpoint calculation. We present algorithms to model-check inevitability properties both with and without the requirement of non-Zeno computations. We discuss a technique for early decision on greatest fixpoints in the temporal logics, and experiment with the effect of non-Zeno computations on the evaluation of greatest fixpoints. We also discuss the TCTL subclass with only universal path quantifiers, which allows for the safe abstraction analysis of inevitability properties. Finally, we report our implementation and experiments to show the plausibility of our ideas.
Several verification tools for TAs have been devised and implemented so far @cite_8 @cite_24 @cite_2 @cite_34 @cite_14 @cite_16 @cite_28 @cite_21 @cite_6 @cite_35 @cite_9 . UPPAAL @cite_0 @cite_14 is one of the popular tools with DBM technology. It supports safety (reachability) analysis using forward reasoning techniques. Various state-space abstraction techniques and compact representation techniques have been developed @cite_31 @cite_4 . Recently, Moller has used UPPAAL with abstraction techniques to analyze restricted inevitability properties with no modal-formula nesting @cite_11 . The idea is to augment the model to speed up verification performance. Moller also shows how to extend the idea to analyze TCTL with only universal quantifications. However, no experiment has been reported on the verification of nested modal formulas.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_4", "@cite_8", "@cite_28", "@cite_9", "@cite_21", "@cite_16", "@cite_6", "@cite_24", "@cite_0", "@cite_2", "@cite_31", "@cite_34", "@cite_11" ], "mid": [ "1503128752", "2100570449", "2023949272", "2900861686", "2147650421", "2012816689", "2058033966", "169129071", "2021217869", "2124207853", "2086092403", "1515930456", "2626217303", "2167672803", "1511737804" ], "abstract": [ "We consider verification of safety properties for concurrent real-timed systems modelled as timed Petri nets by performing symbolic forward reachability analysis. We introduce a formalism, called region generators, for representing sets of markings of timed Petri nets. Region generators characterize downward closed sets of regions and provide exact abstractions of sets of reachable states with respect to safety properties. We show that the standard operations needed for performing symbolic reachability analysis are computable for region generators. Since forward reachability analysis is necessarily incomplete, we introduce an acceleration technique to make the procedure terminate more often on practical examples. We have implemented a prototype for analyzing timed Petri nets and used it to verify a parameterized version of Fischer's protocol, Lynch and Shavit's mutual exclusion protocol and a producer-consumer protocol. We also used the tool to extract finite-state abstractions of these protocols.", "This paper develops techniques for studying complexity classes that are not covered by known recursive enumerations of machines. Often, counting classes, probabilistic classes, and intersection classes lack such enumerations. Concentrating on the counting class UP, we show that there are relativizations for which UPA has no complete languages and other relativizations for which PB ≠ UPB ≠ NPB and UPB has complete languages. Among other results we show that P ≠ UP if and only if there exists a set S in P of Boolean formulas with at most one satisfying assignment such that S ∩ SAT is not in P. P ≠ UP ∩ coUP if and only if there exists a set S in P of uniquely satisfiable Boolean formulas such that no polynomial-time machine can compute the solutions for the formulas in S. If UP has complete languages then there exists a set R in P of Boolean formulas with at most one satisfying assignment so that SAT ∩ R is complete for UP. Finally, we indicate the wide applicability of our techniques to counting and probabilistic classes by using them to examine the probabilistic class BPP. There is a relativized world where BPPA has no complete languages. If BPP has complete languages then it has a complete language of the form B ∩ MAJORITY, where B ∈ P and MAJORITY = f | f is true for at least half of all assignments is the canonical PP-complete set.", "Reachability analysis has proved to be one of the most effective methods in verifying correctness of communication protocols based on the state transition model. Consequently, many protocol verification tools have been built based on the method of reachability analysis. Nevertheless, it is also well known that state space explosion is the most severe limitation to the applicability of this method. Although researchers in the field have proposed various strategies to relieve this intricate problem when building the tools, a survey and evaluation of these strategies has not been done in the literature. 
In searching for an appropriate approach to tackling such a problem for a grammar-based validation tool, we have collected and evaluated these relief strategies, and have decided to develop our own from yet another but more systematic approach. The results of our research are now reported in this paper. Essentially, the paper is to serve two purposes: first, to give a survey and evaluation of existing relief strategies; second, to propose a new strategy, called PROVAT (PROtocol VAlidation Testing), which is inspired by the heuristic search techniques in Artificial Intelligence. Preliminary results of incorporating the PROVAT strategy into our validation tool are reviewed in the paper. These results show the empirical evidence of the effectiveness of the PROVAT strategy.", "This paper demonstrates the improved power and electromagnetic (EM) side-channel attack (SCA) resistance of 128-bit Advanced Encryption Standard (AES) engines in 130-nm CMOS using random fast voltage dithering (RFVD) enabled by integrated voltage regulator (IVR) with the bond-wire inductors and an on-chip all-digital clock modulation (ADCM) circuit. RFVD scheme transforms the current signatures with random variations in AES input supply while adding random shifts in the clock edges in the presence of global and local supply noises. The measured power signatures at the supply node of the AES engines show upto 37 @math reduction in peak for higher order test vector leakage assessment (TVLA) metric and upto 692 @math increase in minimum traces required to disclose (MTD) the secret encryption key with correlation power analysis (CPA). Similarly, SCA on the measured EM signatures from the chip demonstrates a reduction of upto 11.3 @math in TVLA peak and upto 37 @math increase in correlation EM analysis (CEMA) MTD.", "We present the first verification of full functional correctness for a range of linked data structure implementations, including mutable lists, trees, graphs, and hash tables. Specifically, we present the use of the Jahob verification system to verify formal specifications, written in classical higher-order logic, that completely capture the desired behavior of the Java data structure implementations (with the exception of properties involving execution time and or memory consumption). Given that the desired correctness properties include intractable constructs such as quantifiers, transitive closure, and lambda abstraction, it is a challenge to successfully prove the generated verification conditions. Our Jahob verification system uses integrated reasoning to split each verification condition into a conjunction of simpler subformulas, then apply a diverse collection of specialized decision procedures, first-order theorem provers, and, in the worst case, interactive theorem provers to prove each subformula. Techniques such as replacing complex subformulas with stronger but simpler alternatives, exploiting structure inherently present in the verification conditions, and, when necessary, inserting verified lemmas and proof hints into the imperative source code make it possible to seamlessly integrate all of the specialized decision procedures and theorem provers into a single powerful integrated reasoning system. 
By appropriately applying multiple proof techniques to discharge different subformulas, this reasoning system can effectively prove the complex and challenging verification conditions that arise in this context.", "Proof, verification and analysis methods for termination all rely on two induction principles: (1) a variant function or induction on data ensuring progress towards the end and (2) some form of induction on the program structure. The abstract interpretation design principle is first illustrated for the design of new forward and backward proof, verification and analysis methods for safety. The safety collecting semantics defining the strongest safety property of programs is first expressed in a constructive fixpoint form. Safety proof and checking verification methods then immediately follow by fixpoint induction. Static analysis of abstract safety properties such as invariance are constructively designed by fixpoint abstraction (or approximation) to (automatically) infer safety properties. So far, no such clear design principle did exist for termination so that the existing approaches are scattered and largely not comparable with each other. For (1), we show that this design principle applies equally well to potential and definite termination. The trace-based termination collecting semantics is given a fixpoint definition. Its abstraction yields a fixpoint definition of the best variant function. By further abstraction of this best variant function, we derive the Floyd Turing termination proof method as well as new static analysis methods to effectively compute approximations of this best variant function. For (2), we introduce a generalization of the syntactic notion of struc- tural induction (as found in Hoare logic) into a semantic structural induction based on the new semantic concept of inductive trace cover covering execution traces by segments, a new basis for formulating program properties. Its abstractions allow for generalized recursive proof, verification and static analysis methods by induction on both program structure, control, and data. Examples of particular instances include Floyd's handling of loop cutpoints as well as nested loops, Burstall's intermittent assertion total correctness proof method, and Podelski-Rybalchenko transition invariants.", "This paper presents a new approach for automatically deriving worst-case resource bounds for C programs. The described technique combines ideas from amortized analysis and abstract interpretation in a unified framework to address four challenges for state-of-the-art techniques: compositionality, user interaction, generation of proof certificates, and scalability. Compositionality is achieved by incorporating the potential method of amortized analysis. It enables the derivation of global whole-program bounds with local derivation rules by naturally tracking size changes of variables in sequenced loops and function calls. The resource consumption of functions is described abstractly and a function call can be analyzed without access to the function body. User interaction is supported with a new mechanism that clearly separates qualitative and quantitative verification. A user can guide the analysis to derive complex non-linear bounds by using auxiliary variables and assertions. The assertions are separately proved using established qualitative techniques such as abstract interpretation or Hoare logic. Proof certificates are automatically generated from the local derivation rules. 
A soundness proof of the derivation system with respect to a formal cost semantics guarantees the validity of the certificates. Scalability is attained by an efficient reduction of bound inference to a linear optimization problem that can be solved by off-the-shelf LP solvers. The analysis framework is implemented in the publicly-available tool C4B. An experimental evaluation demonstrates the advantages of the new technique with a comparison of C4B with existing tools on challenging micro benchmarks and the analysis of more than 2900 lines of C code from the cBench benchmark suite.", "We study the use of formal modeling and verification techniques at an early stage in the development of safety-critical automotive products which are originally described in the domain specific architectural language EAST-ADL2. This architectural language only focuses on the structural definition of functional blocks. However, the behavior inside each functional block is not specified and that limits formal modeling and analysis of systems behaviors as well as efficient verification of safety properties. In this paper, we tackle this problem by proposing one modeling approach, which formally captures the behavioral execution inside each functional block and their interactions, and helps to improve the formal modeling and verification capability of EAST-ADL2: the behavior of each elementary function of EAST-ADL2 is specified in UPPAAL Timed Automata. The formal syntax and semantics are defined in order to specify the behavior model inside EAST-ADL2 and their interactions. A composition of the functional behaviors is considered a network of Timed Automata that enables us to verify behaviors of the entire system using the UPPAAL model checker. The method has been demonstrated by verifying the safety of the Brake-by-wire system design.", "A well known but incorrect piece of functional programming folklore is that ML expressions can be efficiently typed in polynomial time. In probing the truth of that folklore, various researchers, including Wand, Buneman, Kanellakis, and Mitchell, constructed simple counterexamples consisting of typable ML programs having length n , with principal types having O(2 cn ) distinct type variables and length O(2 2cn ). When the types associated with these ML constructions were represented as directed acyclic graphs, their sizes grew as O(2 cn ). The folklore was even more strongly contradicted by the recent result of Kanellakis and Mitchell that simply deciding whether or not an ML expression is typable is PSPACE-hard. We improve the latter result, showing that deciding ML typability is DEXPTIME-hard. As Kanellakis and Mitchell have shown containment in DEXPTIME, the problem is DEXPTIME-complete. The proof of DEXPTIME-hardness is carried out via a generic reduction: it consists of a very straightforward simulation of any deterministic one-tape Turing machine M with input k running in O ( c |k| ) time by a polynomial-sized ML formula P M,k , such that M accepts k iff P M,k is typable. The simulation of the transition function δ of the Turing Machine is realized uniquely through terms in the lambda calculus without the use of the polymorphic let construct. We use let for two purposes only: to generate an exponential amount of blank tape for the Turing Machine simulation to begin, and to compose an exponential number of applications of the ML formula simulating state transition. 
It is purely the expressive power of ML polymorphism to succinctly express function composition which results in a proof of DEXPTIME-hardness. We conjecture that lower bounds on deciding typability for extensions to the typed lambda calculus can be regarded precisely in terms of this expressive capacity for succinct function composition. To further understand this lower bound, we relate it to the problem of proving equality of type variables in a system of type equations generated from an ML expression with let-polymorphism. We show that given an oracle for solving this problem, deciding typability would be in PSPACE, as would be the actual computation of the principal type of the expression, were it indeed typable.", "We present a systematic approach to the automatic generation of platform-independent benchmarks of realistic structure and tailored complexity for evaluating verification tools for reactive systems. The idea is to mimic a systematic constraint-driven software development process by automatically transforming randomly generated temporal-logic-based requirement specifications on the basis of a sequence of property-preserving, randomly generated structural design decisions into executable source code of a chosen target language or platform. Our automated transformation process steps through dedicated representations in terms of Buchi automata, Mealy machines, decision diagram models, and code models. It comprises LTL synthesis, model checking, property-oriented expansion, path condition extraction, theorem proving, SAT solving, and code motion. This setup allows us to address different communities via a growing set of programming languages, tailored sets of programming constructs, different notions of observation, and the full variety of LTL properties--ranging from mere reachability over general safety properties to arbitrary liveness properties. The paper illustrates the corresponding tool chain along accompanying examples, emphasizes the current state of development, and sketches the envisioned potential and impact of our approach.", "This paper presents an inference algorithm that can discover temporal logic properties of a system from data. Our algorithm operates on finite time system trajectories that are labeled according to whether or not they demonstrate some desirable system properties (e.g. \"the car successfully stops before hitting an obstruction\"). A temporal logic formula that can discriminate between the desirable behaviors and the undesirable ones is constructed. The formulae also indicate possible causes for each set of behaviors (e.g. \"If the speed of the car is greater than 15 m s within 0.5s of brake application, the obstruction will be struck\") which can be used to tune designs or to perform on-line monitoring to ensure the desired behavior. We introduce reactive parameter signal temporal logic (rPSTL), a fragment of parameter signal temporal logic (PSTL) that is expressive enough to capture causal, spatial, and temporal relationships in data. We define a partial order over the set of rPSTL formulae that is based on language inclusion. This order enables a directed search over this set, i.e. given a candidate rPSTL formula that does not adequately match the observed data, we can automatically construct a formula that will fit the data at least as well. 
Two case studies, one involving a cattle herding scenario and one involving a stochastic hybrid gene circuit model, are presented to illustrate our approach.", "The problem of deciding the satisfiability of a quantifier-free formula with respect to a background theory, also known as Satisfiability Modulo Theories (SMT), is gaining increasing relevance in verification: representation capabilities beyond propositional logic allow for a natural modeling of real-world problems (e.g., pipeline and RTL circuits verification, proof obligations in software systems). In this paper, we focus on the case where the background theory is the combination T1∪T2 of two simpler theories. Many SMT procedures combine a boolean model enumeration with a decision procedure for T1∪T2, where conjunctions of literals can be decided by an integration schema such as Nelson-Oppen, via a structured exchange of interface formulae (e.g., equalities in the case of convex theories, disjunctions of equalities otherwise). We propose a new approach for SMT(T1∪T2), called Delayed Theory Combination, which does not require a decision procedure for T1∪T2, but only individual decision procedures for T1 and T2, which are directly integrated into the boolean model enumerator. This approach is much simpler and natural, allows each of the solvers to be implemented and optimized without taking into account the others, and it nicely encompasses the case of non-convex theories. We show the effectiveness of the approach by a thorough experimental comparison.", "We present a novel approach to proving the absence of timing channels. The idea is to partition the program's execution traces in such a way that each partition component is checked for timing attack resilience by a time complexity analysis and that per-component resilience implies the resilience of the whole program. We construct a partition by splitting the program traces at secret-independent branches. This ensures that any pair of traces with the same public input has a component containing both traces. Crucially, the per-component checks can be normal safety properties expressed in terms of a single execution. Our approach is thus in contrast to prior approaches, such as self-composition, that aim to reason about multiple (k ≥ 2) executions at once. We formalize the above as an approach called quotient partitioning, generalized to any k-safety property, and prove it to be sound. A key feature of our approach is a demand-driven partitioning strategy that uses a regex-like notion called trails to identify sets of execution traces, particularly those influenced by tainted (or secret) data. We have applied our technique in a prototype implementation tool called Blazer, based on WALA, PPL, and the brics automaton library. We have proved timing-channel freedom of (or synthesized an attack specification for) 24 programs written in Java bytecode, including 6 classic examples from the literature and 6 examples extracted from the DARPA STAC challenge problems.", "We study the problem of model checking software product line (SPL) behaviours against temporal properties. This is more difficult than for single systems because an SPL with n features yields up to 2^n individual systems to verify. As each individual verification suffers from state explosion, it is crucial to propose efficient formalisms and heuristics.
We recently proposed featured transition systems (FTS), a compact representation for SPL behaviour, and defined algorithms for model checking FTS against linear temporal properties. Although they showed to outperform individual system verifications, they still face a state explosion problem as they enumerate and visit system states one by one. In this paper, we tackle this latter problem by using symbolic representations of the state space. This lead us to consider computation tree logic (CTL) which is supported by the industry-strength symbolic model checker NuSMV. We first lay the foundations for symbolic SPL model checking by defining a feature-oriented version of CTL and its dedicated algorithms. We then describe an implementation that adapts the NuSMV language and tool infrastructure. Finally, we propose theoretical and empirical evaluations of our results. The benchmarks show that for certain properties, our algorithm is over a hundred times faster than model checking each system with the standard algorithm.", "Motivated by applications to program verification, we study a decision procedure for satisfiability in an expressive fragment of a theory of arrays, which is parameterized by the theories of the array elements. The decision procedure reduces satisfiability of a formula of the fragment to satisfiability of an equisatisfiable quantifier-free formula in the combined theory of equality with uninterpreted functions (EUF), Presburger arithmetic, and the element theories. This fragment allows a constrained use of universal quantification, so that one quantifier alternation is allowed, with some syntactic restrictions. It allows expressing, for example, that an assertion holds for all elements in a given index range, that two arrays are equal in a given range, or that an array is sorted. We demonstrate its expressiveness through applications to verification of sorting algorithms and parameterized systems. We also prove that satisfiability is undecidable for several natural extensions to the fragment. Finally, we describe our implementation in the πVC verifying compiler." ] }
cs0304003
2949728882
Inevitability properties in branching temporal logics are of the syntax "forall eventually phi", where phi is an arbitrary (timed) CTL formula. In the sense that "good things will happen", they are parallel to the "liveness" properties in linear temporal logics. Such inevitability properties in dense-time logics can be analyzed with greatest fixpoint calculation. We present algorithms to model-check inevitability properties both with and without the requirement of non-Zeno computations. We discuss a technique for early decision on greatest fixpoints in the temporal logics, and experiment with the effect of non-Zeno computations on the evaluation of greatest fixpoints. We also discuss the TCTL subclass with only universal path quantifiers, which allows for the safe abstraction analysis of inevitability properties. Finally, we report our implementation and experiments to show the plausibility of our ideas.
Kronos @cite_8 @cite_9 is a full TCTL model-checker with DBM technology and both forward and backward reasoning capability. Experiments demonstrating how to use Kronos to verify several TCTL bounded inevitability properties are reported in @cite_9 . (Bounded inevitabilities are inevitabilities specified with a deadline.) But no report has been made on how to enhance the performance of general inevitability analysis. In comparison, we have discussed techniques like EDGF (early decision on greatest fixpoints) and abstractions, which handle both bounded and unbounded inevitabilities.
{ "cite_N": [ "@cite_9", "@cite_8" ], "mid": [ "2086092403", "2056358962" ], "abstract": [ "This paper presents an inference algorithm that can discover temporal logic properties of a system from data. Our algorithm operates on finite time system trajectories that are labeled according to whether or not they demonstrate some desirable system properties (e.g. \"the car successfully stops before hitting an obstruction\"). A temporal logic formula that can discriminate between the desirable behaviors and the undesirable ones is constructed. The formulae also indicate possible causes for each set of behaviors (e.g. \"If the speed of the car is greater than 15 m s within 0.5s of brake application, the obstruction will be struck\") which can be used to tune designs or to perform on-line monitoring to ensure the desired behavior. We introduce reactive parameter signal temporal logic (rPSTL), a fragment of parameter signal temporal logic (PSTL) that is expressive enough to capture causal, spatial, and temporal relationships in data. We define a partial order over the set of rPSTL formulae that is based on language inclusion. This order enables a directed search over this set, i.e. given a candidate rPSTL formula that does not adequately match the observed data, we can automatically construct a formula that will fit the data at least as well. Two case studies, one involving a cattle herding scenario and one involving a stochastic hybrid gene circuit model, are presented to illustrate our approach.", "In this paper, we focus on the problem of motion tracking in unknown environments using visual and inertial sensors. We term this estimation task visual-inertial odometry (VIO), in analogy to the well-known visual-odometry problem. We present a detailed study of extended Kalman filter (EKF)-based VIO algorithms, by comparing both their theoretical properties and empirical performance. We show that an EKF formulation where the state vector comprises a sliding window of poses (the multi-state-constraint Kalman filter (MSCKF)) attains better accuracy, consistency, and computational efficiency than the simultaneous localization and mapping (SLAM) formulation of the EKF, in which the state vector contains the current pose and the features seen by the camera. Moreover, we prove that both types of EKF approaches are inconsistent, due to the way in which Jacobians are computed. Specifically, we show that the observability properties of the EKF's linearized system models do not match those of the underlying system, which causes the filters to underestimate the uncertainty in the state estimates. Based on our analysis, we propose a novel, real-time EKF-based VIO algorithm, which achieves consistent estimation by (i) ensuring the correct observability properties of its linearized system model, and (ii) performing online estimation of the camera-to-inertial measurement unit (IMU) calibration parameters. This algorithm, which we term MSCKF 2.0, is shown to achieve accuracy and consistency higher than even an iterative, sliding-window fixed-lag smoother, in both Monte Carlo simulations and real-world testing." ] }
1708.02883
2742168128
Consider a structured matrix factorization model where one factor is restricted to have its columns lying in the unit simplex. This simplex-structured matrix factorization (SSMF) model and the associated factorization techniques have spurred much interest in research topics over different areas, such as hyperspectral unmixing in remote sensing, topic discovery in machine learning, to name a few. In this paper we develop a new theoretical SSMF framework whose idea is to study a maximum volume ellipsoid inscribed in the convex hull of the data points. This maximum volume inscribed ellipsoid (MVIE) idea has not been attempted in prior literature, and we show a sufficient condition under which the MVIE framework guarantees exact recovery of the factors. The sufficient recovery condition we show for MVIE is much more relaxed than that of separable non-negative matrix factorization (or pure-pixel search); coincidentally it is also identical to that of minimum volume enclosing simplex, which is known to be a powerful SSMF framework for non-separable problem instances. We also show that MVIE can be practically implemented by performing facet enumeration and then by solving a convex optimization problem. The potential of the MVIE framework is illustrated by numerical results.
The first framework to be reviewed is pure-pixel search in HU in remote sensing @cite_27 , or separable NMF in machine learning @cite_45 . Both assume that for every @math , there exists an index @math such that [ x_{i_k} = a_k . ] This assumption is called the pure-pixel assumption in HU or the separability assumption in separable NMF. Figure (a) illustrates the geometry of @math under the pure-pixel assumption, where we see that the pure pixels @math are the vertices of the convex hull @math . This suggests that some kind of vertex search can lead to recovery of @math ---the key insight of almost all algorithms in this framework. The beauty of pure-pixel search or separable NMF is that, under the pure-pixel assumption, SSMF can be accomplished either via simple algorithms @cite_10 @cite_35 or via convex optimization @cite_32 @cite_19 @cite_36 @cite_41 @cite_9 . Also, as shown in the aforementioned references, some of these algorithms are supported by theoretical analyses in terms of guarantees on recovery accuracy.
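To illustrate the vertex-search idea described above, here is a rough Python sketch in the spirit of successive-projection-type pure-pixel search. It is a toy illustration only, not a faithful reimplementation of any of the cited algorithms; it assumes noiseless data satisfying the pure-pixel/separability assumption, and all names are ours.

```python
# A rough sketch in the spirit of successive-projection-type pure-pixel search:
# repeatedly pick the data point with the largest residual norm and project the
# data onto the orthogonal complement of the picked points. This illustrates the
# "vertex search" idea only; noise handling is ignored entirely.
import numpy as np

def pure_pixel_search(X: np.ndarray, N: int) -> list:
    """X: (dim, num_pixels) data matrix; N: number of endmembers.
    Returns indices of candidate pure pixels."""
    R = X.astype(float).copy()
    picked = []
    for _ in range(N):
        k = int(np.argmax(np.sum(R**2, axis=0)))   # farthest remaining point
        picked.append(k)
        u = R[:, k:k+1]
        u = u / np.linalg.norm(u)
        R = R - u @ (u.T @ R)                       # project out its direction
    return picked

# Toy usage: X = A @ S with simplex-structured S and pure pixels planted at columns 0,1,2.
rng = np.random.default_rng(0)
A = rng.random((5, 3))
S = rng.dirichlet(np.ones(3), size=50).T
S[:, :3] = np.eye(3)
X = A @ S
print(pure_pixel_search(X, 3))   # ideally recovers columns 0, 1, 2 (in some order)
```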
{ "cite_N": [ "@cite_35", "@cite_36", "@cite_41", "@cite_9", "@cite_32", "@cite_19", "@cite_27", "@cite_45", "@cite_10" ], "mid": [ "2952779762", "2523714292", "2963470893", "2040399008", "2170293972", "2095343758", "2937970997", "2050041778", "2149888534" ], "abstract": [ "The separability assumption (Donoho & Stodden, 2003; , 2012) turns non-negative matrix factorization (NMF) into a tractable problem. Recently, a new class of provably-correct NMF algorithms have emerged under this assumption. In this paper, we reformulate the separable NMF problem as that of finding the extreme rays of the conical hull of a finite set of vectors. From this geometric perspective, we derive new separable NMF algorithms that are highly scalable and empirically noise robust, and have several other favorable properties in relation to existing methods. A parallel implementation of our algorithm demonstrates high scalability on shared- and distributed-memory machines.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). 
To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "This paper considers a recently emerged hyperspectral unmixing formulation based on sparse regression of a self-dictionary multiple measurement vector (SD-MMV) model, wherein the measured hyperspectral pixels are used as the dictionary. Operating under the pure pixel assumption, this SD-MMV formalism is special in that it allows simultaneous identification of the endmember spectral signatures and the number of endmembers. Previous SD-MMV studies mainly focus on convex relaxations. In this study, we explore the alternative of greedy pursuit, which generally provides efficient and simple algorithms. In particular, we design a greedy SD-MMV algorithm using simultaneous orthogonal matching pursuit. Intriguingly, the proposed greedy algorithm is shown to be closely related to some existing pure pixel search algorithms, especially, the successive projection algorithm (SPA). Thus, a link between SD-MMV and pure pixel search is revealed. We then perform exact recovery analyses, and prove that the proposed greedy algorithm is robust to noise-including its identification of the (unknown) number of endmembers-under a sufficiently low noise level. The identification performance of the proposed greedy algorithm is demonstrated through both synthetic and real-data experiments.", "Non-negative matrix factorization (NMF), i.e. V ap WH where both V, W and H are non-negative has become a widely used blind source separation technique due to its part based representation. The NMF decomposition is not in general unique and a part based representation not guaranteed. However, imposing sparseness both improves the uniqueness of the decomposition and favors part based representation. Sparseness in the form of attaining as many zero elements in the solution as possible is appealing from a conceptional point of view and corresponds to minimizing reconstruction error with an L0 norm constraint. In general, solving for a given L0 norm is an NP hard problem thus convex relaxation to regularization by the L1 norm is often considered, i.e., minimizing (1 2||V - WH||F 2 + lambda||H||1).An open problem is to control the degree of sparsity lambda imposed. We here demonstrate that a full regularization path for the L1 norm regularized least squares NMF for fixed W can be calculated at the cost of an ordinary least squares solution based on a modification of the least angle regression and selection (LARS) algorithm forming a non-negativity constrained LARS (NLARS). 
With the full regularization path, the L1 regularization strength lambda that best approximates a given L0 can be directly accessed and in effect used to control the sparsity of H. The MATLAB code for the NLARS algorithm is available for download.", "Linear spectral unmixing aims at estimating the number of pure spectral substances, also called endmembers ,t heir spectral signatures, and their abundance fractions in remotely sensed hyperspectral images. This paper describes a method for unsupervised hyperspectral unmixing called minimum volume simplex analysis (MVSA) and introduces a new computationally efficient implementation. MVSA approaches hyperspectral un- mixing by fitting a minimum volume simplex to the hyperspec- tral data, constraining the abundance fractions to belong to the probability simplex. The resulting optimization problem, which is computationally complex, is solved in this paper by implementing a sequence of quadratically constrained subproblems using the interior point method, which is particularly effective from the computational viewpoint. The proposed implementation (available online: www.lx.it.pt 7ejun DemoMVSA.zip) is shown to exhibit state-of-the-art performance not only in terms of unmixing accu- racy, particularly in nonpure pixel scenarios, but also in terms of computational performance. Our experiments have been con- ducted using both synthetic and real data sets. An important assumption of MVSA is that pure pixels may not be present in the hyperspectral data, thus addressing a common situation in real scenarios which are often dominated by highly mixed pixels. In our experiments, we observe that MVSA yields competitive perfor- mance when compared with other available algorithms that work under the nonpure pixel regime. Our results also demonstrate that MVSA is well suited to problems involving a high number of endmembers (i.e., complex scenes) and also for problems involving a high number of pixels (i.e., large scenes).", "Many state-of-the-art approaches for object recognition reduce the problem to a 0-1 classification task. This allows one to leverage sophisticated machine learning techniques for training classifiers from labeled examples. However, these models are typically trained independently for each class using positive and negative examples cropped from images. At test-time, various post-processing heuristics such as non-maxima suppression (NMS) are required to reconcile multiple detections within and between different classes for each image. Though crucial to good performance on benchmarks, this post-processing is usually defined heuristically. We introduce a unified model for multi-class object recognition that casts the problem as a structured prediction task. Rather than predicting a binary label for each image window independently, our model simultaneously predicts a structured labeling of the entire image (Fig. 1). Our model learns statistics that capture the spatial arrangements of various object classes in real images, both in terms of which arrangements to suppress through NMS and which arrangements to favor through spatial co-occurrence statistics. We formulate parameter estimation in our model as a max-margin learning problem. Given training images with ground-truth object locations, we show how to formulate learning as a convex optimization problem. We employ the cutting plane algorithm of (Mach. Learn. 2009) to efficiently learn a model from thousands of training images. 
We show state-of-the-art results on the PASCAL VOC benchmark that indicate the benefits of learning a global model encapsulating the spatial layout of multiple object classes (a preliminary version of this work appeared in ICCV 2009, , IEEE international conference on computer vision, 2009).", "Hyperspectral unmixing aims at identifying the hidden spectral signatures (or endmembers) and their corresponding proportions (or abundances) from an observed hyperspectral scene. Many existing hyperspectral unmixing algorithms were developed under a commonly used assumption that pure pixels exist. However, the pure-pixel assumption may be seriously violated for highly mixed data. Based on intuitive grounds, Craig reported an unmixing criterion without requiring the pure-pixel assumption, which estimates the endmembers by vertices of a minimum-volume simplex enclosing all the observed pixels. In this paper, we incorporate convex analysis and Craig's criterion to develop a minimum-volume enclosing simplex (MVES) formulation for hyperspectral unmixing. A cyclic minimization algorithm for approximating the MVES problem is developed using linear programs (LPs), which can be practically implemented by readily available LP solvers. We also provide a non-heuristic guarantee of our MVES problem formulation, where the existence of pure pixels is proved to be a sufficient condition for MVES to perfectly identify the true endmembers. Some Monte Carlo simulations and real data experiments are presented to demonstrate the efficacy of the proposed MVES algorithm over several existing hyperspectral unmixing methods.", "We present a practical, stratified autocalibration algorithm with theoretical guarantees of global optimality. Given a projective reconstruction, the first stage of the algorithm upgrades it to affine by estimating the position of the plane at infinity. The plane at infinity is computed by globally minimizing a least squares formulation of the modulus constraints. In the second stage, the algorithm upgrades this affine reconstruction to a metric one by globally minimizing the infinite homography relation to compute the dual image of the absolute conic (DIAC). The positive semidefiniteness of the DIAC is explicitly enforced as part of the optimization process, rather than as a post-processing step. For each stage, we construct and minimize tight convex relaxations of the highly non-convex objective functions in a branch and bound optimization framework. We exploit the problem structure to restrict the search space for the DIAC and the plane at infinity to a small, fixed number of branching dimensions, independent of the number of views. Experimental evidence of the accuracy, speed and scalability of our algorithm is presented on synthetic and real data. MATLAB code for the implementation is made available to the community." ] }
1708.02883
2742168128
Consider a structured matrix factorization model where one factor is restricted to have its columns lying in the unit simplex. This simplex-structured matrix factorization (SSMF) model and the associated factorization techniques have spurred much interest in research topics over different areas, such as hyperspectral unmixing in remote sensing, topic discovery in machine learning, to name a few. In this paper we develop a new theoretical SSMF framework whose idea is to study a maximum volume ellipsoid inscribed in the convex hull of the data points. This maximum volume inscribed ellipsoid (MVIE) idea has not been attempted in prior literature, and we show a sufficient condition under which the MVIE framework guarantees exact recovery of the factors. The sufficient recovery condition we show for MVIE is much more relaxed than that of separable non-negative matrix factorization (or pure-pixel search); coincidentally it is also identical to that of minimum volume enclosing simplex, which is known to be a powerful SSMF framework for non-separable problem instances. We also show that MVIE can be practically implemented by performing facet enumeration and then by solving a convex optimization problem. The potential of the MVIE framework is illustrated by numerical results.
To give insight into how the geometry of the pure-pixel case can be utilized for SSMF, we briefly describe a pure-pixel search framework based on the maximum volume inscribed simplex (MVIS) @cite_4 @cite_47 . The MVIS framework seeks a simplex @math that is inscribed in the data convex hull @math and whose volume is maximum; see Figure for an illustration. Intuitively, under the pure-pixel assumption, the vertices of the MVIS should be @math . In fact, this can be shown to be valid. It should be noted that the corresponding theorem also reveals that the MVIS cannot correctly recover @math for no-pure-pixel or non-separable problem instances. Readers are also referred to @cite_47 for details on how the MVIS problem is handled in practice.
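As a small aid to the discussion, the sketch below merely evaluates the objective that the MVIS framework maximizes, namely the volume of a simplex given its vertices, via the determinant formula. It does not attempt to solve the inscribed-simplex problem itself, which is handled by the cited works; all names are ours.

```python
# The quantity the MVIS framework maximizes: the volume of a simplex with
# vertices b_1, ..., b_N in (N-1)-dimensional space,
#   vol = |det([b_2 - b_1, ..., b_N - b_1])| / (N-1)!.
# This only illustrates the objective; the actual inscribed-simplex optimization
# is not attempted here.
import numpy as np
from math import factorial

def simplex_volume(B: np.ndarray) -> float:
    """B: (N-1, N) matrix whose columns are the N simplex vertices."""
    edges = B[:, 1:] - B[:, :1]
    return abs(np.linalg.det(edges)) / factorial(B.shape[1] - 1)

# The unit triangle in the plane has area 1/2.
print(simplex_volume(np.array([[0.0, 1.0, 0.0],
                               [0.0, 0.0, 1.0]])))
```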
{ "cite_N": [ "@cite_47", "@cite_4" ], "mid": [ "2095343758", "2050041778" ], "abstract": [ "Linear spectral unmixing aims at estimating the number of pure spectral substances, also called endmembers ,t heir spectral signatures, and their abundance fractions in remotely sensed hyperspectral images. This paper describes a method for unsupervised hyperspectral unmixing called minimum volume simplex analysis (MVSA) and introduces a new computationally efficient implementation. MVSA approaches hyperspectral un- mixing by fitting a minimum volume simplex to the hyperspec- tral data, constraining the abundance fractions to belong to the probability simplex. The resulting optimization problem, which is computationally complex, is solved in this paper by implementing a sequence of quadratically constrained subproblems using the interior point method, which is particularly effective from the computational viewpoint. The proposed implementation (available online: www.lx.it.pt 7ejun DemoMVSA.zip) is shown to exhibit state-of-the-art performance not only in terms of unmixing accu- racy, particularly in nonpure pixel scenarios, but also in terms of computational performance. Our experiments have been con- ducted using both synthetic and real data sets. An important assumption of MVSA is that pure pixels may not be present in the hyperspectral data, thus addressing a common situation in real scenarios which are often dominated by highly mixed pixels. In our experiments, we observe that MVSA yields competitive perfor- mance when compared with other available algorithms that work under the nonpure pixel regime. Our results also demonstrate that MVSA is well suited to problems involving a high number of endmembers (i.e., complex scenes) and also for problems involving a high number of pixels (i.e., large scenes).", "Hyperspectral unmixing aims at identifying the hidden spectral signatures (or endmembers) and their corresponding proportions (or abundances) from an observed hyperspectral scene. Many existing hyperspectral unmixing algorithms were developed under a commonly used assumption that pure pixels exist. However, the pure-pixel assumption may be seriously violated for highly mixed data. Based on intuitive grounds, Craig reported an unmixing criterion without requiring the pure-pixel assumption, which estimates the endmembers by vertices of a minimum-volume simplex enclosing all the observed pixels. In this paper, we incorporate convex analysis and Craig's criterion to develop a minimum-volume enclosing simplex (MVES) formulation for hyperspectral unmixing. A cyclic minimization algorithm for approximating the MVES problem is developed using linear programs (LPs), which can be practically implemented by readily available LP solvers. We also provide a non-heuristic guarantee of our MVES problem formulation, where the existence of pure pixels is proved to be a sufficient condition for MVES to perfectly identify the true endmembers. Some Monte Carlo simulations and real data experiments are presented to demonstrate the efficacy of the proposed MVES algorithm over several existing hyperspectral unmixing methods." ] }
1708.02883
2742168128
Consider a structured matrix factorization model where one factor is restricted to have its columns lying in the unit simplex. This simplex-structured matrix factorization (SSMF) model and the associated factorization techniques have spurred much interest in research topics over different areas, such as hyperspectral unmixing in remote sensing, topic discovery in machine learning, to name a few. In this paper we develop a new theoretical SSMF framework whose idea is to study a maximum volume ellipsoid inscribed in the convex hull of the data points. This maximum volume inscribed ellipsoid (MVIE) idea has not been attempted in prior literature, and we show a sufficient condition under which the MVIE framework guarantees exact recovery of the factors. The sufficient recovery condition we show for MVIE is much more relaxed than that of separable non-negative matrix factorization (or pure-pixel search); coincidentally it is also identical to that of minimum volume enclosing simplex, which is known to be a powerful SSMF framework for non-separable problem instances. We also show that MVIE can be practically implemented by performing facet enumeration and then by solving a convex optimization problem. The potential of the MVIE framework is illustrated by numerical results.
While MVES is appealing in its recovery guarantees, the pursuit of SSMF frameworks is arguably not over. The MVES problem is non-convex and NP-hard in general @cite_21 . Our numerical experience is that whether an MVES algorithm converges to a good result can depend on the starting point. Hence, it is interesting to study alternative frameworks that also go beyond the pure-pixel or separability case and bring a new perspective to the no-pure-pixel case---and this is the motivation for our development of the MVIE framework in the next section.
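To make the MVIE computation described above more concrete, the following is a minimal, hedged sketch of the classic maximum-volume-inscribed-ellipsoid convex program for a polytope {x : Ax <= b} (such a polytope could come from facet enumeration of the convex hull of dimension-reduced data). This is the textbook log-det formulation, not necessarily the exact program solved in the paper; the toy polytope, the function name, and the use of CVXPY are assumptions for illustration.

import numpy as np
import cvxpy as cp

def max_volume_inscribed_ellipsoid(A, b):
    # Find the largest ellipsoid {B u + d : ||u||_2 <= 1} inside {x : A x <= b}
    # by maximizing log det(B) subject to the support-function condition
    # ||B a_i||_2 + a_i^T d <= b_i for every facet i.
    n = A.shape[1]
    B = cp.Variable((n, n), PSD=True)  # ellipsoid shape matrix (symmetric PSD)
    d = cp.Variable(n)                 # ellipsoid center
    constraints = [cp.norm(A[i] @ B, 2) + A[i] @ d <= b[i] for i in range(A.shape[0])]
    cp.Problem(cp.Maximize(cp.log_det(B)), constraints).solve()
    return B.value, d.value

# Toy example: the unit simplex in 2-D written as A x <= b.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
B, d = max_volume_inscribed_ellipsoid(A, b)
print("MVIE center:", d)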
{ "cite_N": [ "@cite_21" ], "mid": [ "2050041778" ], "abstract": [ "Hyperspectral unmixing aims at identifying the hidden spectral signatures (or endmembers) and their corresponding proportions (or abundances) from an observed hyperspectral scene. Many existing hyperspectral unmixing algorithms were developed under a commonly used assumption that pure pixels exist. However, the pure-pixel assumption may be seriously violated for highly mixed data. Based on intuitive grounds, Craig reported an unmixing criterion without requiring the pure-pixel assumption, which estimates the endmembers by vertices of a minimum-volume simplex enclosing all the observed pixels. In this paper, we incorporate convex analysis and Craig's criterion to develop a minimum-volume enclosing simplex (MVES) formulation for hyperspectral unmixing. A cyclic minimization algorithm for approximating the MVES problem is developed using linear programs (LPs), which can be practically implemented by readily available LP solvers. We also provide a non-heuristic guarantee of our MVES problem formulation, where the existence of pure pixels is proved to be a sufficient condition for MVES to perfectly identify the true endmembers. Some Monte Carlo simulations and real data experiments are presented to demonstrate the efficacy of the proposed MVES algorithm over several existing hyperspectral unmixing methods." ] }
1708.02884
2951525085
The size of a software artifact influences the software quality and impacts the development process. In industry, when software size exceeds certain thresholds, memory errors accumulate and development tools might not be able to cope anymore, resulting in lengthy program start-up times, failing builds, or memory problems at unpredictable times. Thus, foreseeing critical growth in software modules meets a high demand in industrial practice. Predicting the time when the size grows to the level where maintenance is needed prevents unexpected efforts and helps to spot problematic artifacts before they become critical. Although the number of prediction approaches in the literature is vast, it is unclear how well they fit with prerequisites and expectations from practice. In this paper, we perform an industrial case study at an automotive manufacturer to explore applicability and usability of prediction approaches in practice. In a first step, we collect the most relevant prediction approaches from literature, including both approaches using statistics and machine learning. Furthermore, we elicit expectations towards predictions from practitioners using a survey and stakeholder workshops. At the same time, we measure software size of 48 software artifacts by mining four years of revision history, resulting in 4,547 data points. In the last step, we assess the applicability of state-of-the-art prediction approaches using the collected data by systematically analyzing how well they fulfill the practitioners' expectations. Our main contribution is a comparison of commonly used prediction approaches in a real-world industrial setting while considering stakeholder expectations. We show that the approaches provide significantly different results regarding prediction accuracy and that the statistical approaches fit our data best.
Size measurement is also well studied. Research investigating software size has shown that size can be used to assess productivity @cite_27 and defects @cite_25 in practice. @cite_18 have shown that simple metrics such as lines of code are well suited to assess software complexity and maintainability in a similar environment. Hence, in this study, we consider size metrics to be powerful and established metrics.
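As a purely illustrative aside (not part of the cited studies), the lines-of-code size metric discussed above can be mined with a few lines of Python; the chosen file extensions and the non-blank-line counting convention are assumptions.

from pathlib import Path

def lines_of_code(root, exts=(".c", ".h", ".cpp", ".java", ".py")):
    # Count non-blank lines per source file as a simple size metric.
    sizes = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            with open(path, errors="ignore") as f:
                sizes[str(path)] = sum(1 for line in f if line.strip())
    return sizes

sizes = lines_of_code(".")
print(f"{len(sizes)} source files, {sum(sizes.values())} non-blank lines in total")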
{ "cite_N": [ "@cite_27", "@cite_18", "@cite_25" ], "mid": [ "2134058295", "2127623179", "2160538621" ], "abstract": [ "Context: Many metrics are used in software engineering research as surrogates for maintainability of software systems. Aim: Our aim was to investigate whether such metrics are consistent among themselves and the extent to which they predict maintenance effort at the entire system level. Method: The Maintainability Index, a set of structural measures, two code smells (Feature Envy and God Class) and size were applied to a set of four functionally equivalent systems. The metrics were compared with each other and with the outcome of a study in which six developers were hired to perform three maintenance tasks on the same systems. Results: The metrics were not mutually consistent. Only system size and low cohesion were strongly associated with increased maintenance effort. Conclusion: Apart from size, surrogate maintainability measures may not reflect future maintenance effort. Surrogates need to be evaluated in the contexts for which they will be used. While traditional metrics are used to identify problematic areas in the code, the improvements of the worst areas may, inadvertently, lead to more problems for the entire system. Our results suggest that local improvements should be accompanied by an evaluation at the system level.", "There are lots of different software metrics discovered and used for defect prediction in the literature. Instead of dealing with so many metrics, it would be practical and easy if we could determine the set of metrics that are most important and focus on them more to predict defectiveness. We use Bayesian networks to determine the probabilistic influential relationships among software metrics and defect proneness. In addition to the metrics used in Promise data repository, we define two more metrics, i.e. NOD for the number of developers and LOCQ for the source code quality. We extract these metrics by inspecting the source code repositories of the selected Promise data repository data sets. At the end of our modeling, we learn the marginal defect proneness probability of the whole software system, the set of most effective metrics, and the influential relationships among metrics and defectiveness. Our experiments on nine open source Promise data repository data sets show that response for class (RFC), lines of code (LOC), and lack of coding quality (LOCQ) are the most effective metrics whereas coupling between objects (CBO), weighted method per class (WMC), and lack of cohesion of methods (LCOM) are less effective metrics on defect proneness. Furthermore, number of children (NOC) and depth of inheritance tree (DIT) have very limited effect and are untrustworthy. On the other hand, based on the experiments on Poi, Tomcat, and Xalan data sets, we observe that there is a positive correlation between the number of developers (NOD) and the level of defectiveness. However, further investigation involving a greater number of projects is needed to confirm our findings.", "Much effort has been devoted to the development and empirical validation of object-oriented metrics. The empirical validations performed thus far would suggest that a core set of validated metrics is close to being identified. However, none of these studies allow for the potentially confounding effect of class size. We demonstrate a strong size confounding effect and question the results of previous object-oriented metrics validation studies. 
We first investigated whether there is a confounding effect of class size in validation studies of object-oriented metrics and show that, based on previous work, there is reason to believe that such an effect exists. We then describe a detailed empirical methodology for identifying those effects. Finally, we perform a study on a large C++ telecommunications framework to examine if size is really a confounder. This study considered the Chidamber and Kemerer metrics and a subset of the Lorenz and Kidd metrics. The dependent variable was the incidence of a fault attributable to a field failure (fault-proneness of a class). Our findings indicate that, before controlling for size, the results are very similar to previous studies. The metrics that are expected to be validated are indeed associated with fault-proneness." ] }
1708.02884
2951525085
The size of a software artifact influences the software quality and impacts the development process. In industry, when software size exceeds certain thresholds, memory errors accumulate and development tools might not be able to cope anymore, resulting in lengthy program start-up times, failing builds, or memory problems at unpredictable times. Thus, foreseeing critical growth in software modules meets a high demand in industrial practice. Predicting the time when the size grows to the level where maintenance is needed prevents unexpected efforts and helps to spot problematic artifacts before they become critical. Although the number of prediction approaches in the literature is vast, it is unclear how well they fit with prerequisites and expectations from practice. In this paper, we perform an industrial case study at an automotive manufacturer to explore applicability and usability of prediction approaches in practice. In a first step, we collect the most relevant prediction approaches from literature, including both approaches using statistics and machine learning. Furthermore, we elicit expectations towards predictions from practitioners using a survey and stakeholder workshops. At the same time, we measure software size of 48 software artifacts by mining four years of revision history, resulting in 4,547 data points. In the last step, we assess the applicability of state-of-the-art prediction approaches using the collected data by systematically analyzing how well they fulfill the practitioners' expectations. Our main contribution is a comparison of commonly used prediction approaches in a real-world industrial setting while considering stakeholder expectations. We show that the approaches provide significantly different results regarding prediction accuracy and that the statistical approaches fit our data best.
Regarding the field of mining software repositories, many studies, such as Zimmermann @cite_19 , focus on predicting defects. Other studies have used software repositories to investigate refactoring practices (cf. @cite_5 ). However, to the best of the authors' knowledge, no studies have been reported so far that combine software repository mining and measurement with prediction approaches, including machine learning approaches, evaluated in a realistic industrial context.
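To illustrate the kind of statistical prediction approach the study compares (a hedged sketch, not the authors' pipeline), one can fit a linear trend to a mined size history and extrapolate when a hypothetical maintenance threshold would be crossed; all numbers below are invented placeholders.

import numpy as np

days = np.array([0, 30, 60, 90, 120, 150], dtype=float)      # revision timestamps (days)
loc = np.array([12000, 12900, 13600, 14800, 15500, 16400.0])  # mined artifact size (LOC)

slope, intercept = np.polyfit(days, loc, deg=1)  # least-squares linear growth trend
threshold = 20000.0                              # hypothetical critical size
days_to_threshold = (threshold - intercept) / slope
print(f"growth of about {slope:.1f} LOC/day; threshold reached near day {days_to_threshold:.0f}")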
{ "cite_N": [ "@cite_19", "@cite_5" ], "mid": [ "2170866915", "2899407111" ], "abstract": [ "Software repositories with defect logs are main resource for defect prediction. In recent years, researchers have used the vast amount of data that is contained by software repositories to predict the location of defect in the code that caused problems. In this paper we evaluate the effectiveness of software fault prediction with Naive-Bayes classifiers and J48 classifier by integrating with supervised discretization algorithm developed by Fayyad and Irani. Public datasets from the promise repository have been explored for this purpose. The repository contains software metric data and error data at the function method level. Our experiment shows that integration of discretization method improves the software fault prediction accuracy when integrated with Naive-Bayes and J48 classifiers", "Software repositories contain historical and valuable information about the overall development of software systems. Mining software repositories (MSR) is nowadays considered one of the most interesting growing fields within software engineering. MSR focuses on extracting and analyzing data available in software repositories to uncover interesting, useful, and actionable information about the system. Even though MSR plays an important role in software engineering research, few tools have been created and made public to support developers in extracting information from Git repository. In this paper, we present PyDriller, a Python Framework that eases the process of mining Git. We compare our tool against the state-of-the-art Python Framework GitPython, demonstrating that PyDriller can achieve the same results with, on average, 50 less LOC and significantly lower complexity. URL: https: github.com ishepard pydriller Materials: https: doi.org 10.5281 zenodo.1327363 Pre-print: https: doi.org 10.5281 zenodo.1327411" ] }
1708.02716
2743832495
Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of convolutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur.
In 2012, @cite_11 released a dataset containing 20,000 sketches distributed over 250 object categories. These sketches depict everyday objects, yet humans can correctly identify the sketch category only with an accuracy of 73%. Hand-crafted features share a similar spirit with image classification methods, involving feature extraction followed by classification. Most existing works regard sketches as texture images and describe them with descriptors such as HOG and SIFT @cite_22 @cite_23 . In @cite_11 , SIFT features are extracted from image patches; the method also takes into account that sketches do not have smooth gradients and are much sparser than images. Similarly, @cite_1 leverage Fisher Vectors and spatial pyramid pooling to represent SIFT features. @cite_16 employ multiple-kernel learning (MKL) to learn appropriate weights for different features, which improves performance considerably because the features are complementary to each other.
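The hand-crafted pipeline summarized above (texture descriptors plus a discriminative classifier) can be sketched as follows. This is a simplified stand-in that uses HOG features and a linear SVM on random placeholder images; the cited works use SIFT bag-of-features representations and tuned classifiers, so everything below is an illustrative assumption rather than their exact setup.

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))        # placeholder grayscale "sketch" images
labels = rng.integers(0, 4, size=40)     # placeholder category labels

# One HOG descriptor per image, in the spirit of texture-based sketch features.
features = np.array([
    hog(img, orientations=8, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    for img in images
])

clf = LinearSVC(C=1.0).fit(features, labels)
print("training accuracy:", clf.score(features, labels))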
{ "cite_N": [ "@cite_22", "@cite_1", "@cite_23", "@cite_16", "@cite_11" ], "mid": [ "1972420097", "2097817347", "2128543433", "2154209944", "2952320381" ], "abstract": [ "Humans have used sketching to depict our visual world since prehistoric times. Even today, sketching is possibly the only rendering technique readily available to all humans. This paper is the first large scale exploration of human sketches. We analyze the distribution of non-expert sketches of everyday objects such as 'teapot' or 'car'. We ask humans to sketch objects of a given category and gather 20,000 unique sketches evenly distributed over 250 object categories. With this dataset we perform a perceptual study and find that humans can correctly identify the object category of a sketch 73 of the time. We compare human performance against computational recognition methods. We develop a bag-of-features sketch representation and use multi-class support vector machines, trained on our sketch dataset, to classify sketches. The resulting recognition method is able to identify unknown sketches with 56 accuracy (chance is 0.4 ). Based on the computational model, we demonstrate an interactive sketch recognition system. We release the complete crowd-sourced dataset of sketches to the community.", "We present an approach for classifying images of charts based on the shape and spatial relationships of their primitives. Five categories are considered: bar-charts, curve-plots, pie-charts, scatter-plots and surface-plots. We introduce two novel features to represent the structural information based on (a) region segmentation and (b) curve saliency. The local shape is characterized using the Histograms of Oriented Gradients (HOG) and the Scale Invariant Feature Transform (SIFT) descriptors. Each image is represented by sets of feature vectors of each modality. The similarity between two images is measured by the overlap in the distribution of the features -measured using the Pyramid Match algorithm. A test image is classified based on its similarity with training images from the categories. The approach is tested with a database of images collected from the Internet.", "We present an image retrieval system for the interactive search of photo collections using free-hand sketches depicting shape. We describe Gradient Field HOG (GF-HOG); an adapted form of the HOG descriptor suitable for Sketch Based Image Retrieval (SBIR). We incorporate GF-HOG into a Bag of Visual Words (BoVW) retrieval framework, and demonstrate how this combination may be harnessed both for robust SBIR, and for localizing sketched objects within an image. We evaluate over a large Flickr sourced dataset comprising 33 shape categories, using queries from 10 non-expert sketchers. We compare GF-HOG against state-of-the-art descriptors with common distance measures and language models for image retrieval, and explore how affine deformation of the sketch impacts search performance. GF-HOG is shown to consistently outperform retrieval versus SIFT, multi-resolution HOG, Self Similarity, Shape Context and Structure Tensor. Further, we incorporate semantic keywords into our GF-HOG system to enable the use of annotated sketches for image search. 
A novel graph-based measure of semantic similarity is proposed and two applications explored: semantic sketch based image retrieval and a semantic photo montage.", "Inspired by theories of sparse representation and compressed sensing, this paper presents a simple, novel, yet very powerful approach for texture classification based on random projection, suitable for large texture database applications. At the feature extraction stage, a small set of random features is extracted from local image patches. The random features are embedded into a bag--of-words model to perform texture classification; thus, learning and classification are carried out in a compressed domain. The proposed unconventional random feature extraction is simple, yet by leveraging the sparse nature of texture images, our approach outperforms traditional feature extraction methods which involve careful design and complex steps. We have conducted extensive experiments on each of the CUReT, the Brodatz, and the MSRC databases, comparing the proposed approach to four state-of-the-art texture classification methods: Patch, Patch-MRF, MR8, and LBP. We show that our approach leads to significant improvements in classification accuracy and reductions in feature dimensionality.", "Retrieving 3D models from 2D human sketches has received considerable attention in the areas of graphics, image retrieval, and computer vision. Almost always in state of the art approaches a large amount of \"best views\" are computed for 3D models, with the hope that the query sketch matches one of these 2D projections of 3D models using predefined features. We argue that this two stage approach (view selection -- matching) is pragmatic but also problematic because the \"best views\" are subjective and ambiguous, which makes the matching inputs obscure. This imprecise nature of matching further makes it challenging to choose features manually. Instead of relying on the elusive concept of \"best views\" and the hand-crafted features, we propose to define our views using a minimalism approach and learn features for both sketches and views. Specifically, we drastically reduce the number of views to only two predefined directions for the whole dataset. Then, we learn two Siamese Convolutional Neural Networks (CNNs), one for the views and one for the sketches. The loss function is defined on the within-domain as well as the cross-domain similarities. Our experiments on three benchmark datasets demonstrate that our method is significantly better than state of the art approaches, and outperforms them in all conventional metrics." ] }
1708.02716
2743832495
Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of convolutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur.
Recently, deep neural networks (DNNs) have achieved great success @cite_2 by replacing hand-crafted representations with learned ones. @cite_30 @cite_21 leverage a well-designed convolutional neural network (CNN) architecture for sketch recognition; according to their experimental results, their method surpasses the best result achieved by humans. In @cite_12 @cite_1 , the sequential information of sketches is exploited. @cite_25 explicitly uses the sequential regularities in strokes with the help of a recurrent neural network (RNN).
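For readers unfamiliar with the CNN side referenced above, here is a deliberately tiny, hedged sketch of a CNN classifier for single-channel sketch images. It is not Sketch-a-Net or any cited model; the input size, channel counts, and the 250-way output (matching the number of TU-Berlin categories) are illustrative assumptions.

import torch
import torch.nn as nn

class TinySketchCNN(nn.Module):
    def __init__(self, num_classes=250):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
        )
        self.classifier = nn.Linear(64 * 15 * 15, num_classes)

    def forward(self, x):                     # x: (batch, 1, 128, 128) sketch images
        return self.classifier(self.features(x).flatten(1))

logits = TinySketchCNN()(torch.zeros(2, 1, 128, 128))
print(logits.shape)                           # torch.Size([2, 250])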
{ "cite_N": [ "@cite_30", "@cite_21", "@cite_1", "@cite_2", "@cite_25", "@cite_12" ], "mid": [ "2549858052", "2509155366", "2462457117", "1884191083", "2792156255", "1934184906" ], "abstract": [ "The paper presents a deep Convolutional Neural Network (CNN) framework for free-hand sketch recognition. One of the main challenges in free-hand sketch recognition is to increase the recognition accuracy on sketches drawn by different people. To overcome this problem, we use deep Convolutional Neural Networks (CNNs) that have dominated top results in the field of image recognition. And we use the contours of natural images for training, because sketches drawn by different people may be very different and databases of the sketch images for training are very limited. We propose a CNN training on contours that performs well on sketch recognition over different databases of the sketch images. And we make some adjustments to the contours for training and reach higher recognition accuracy. Experimental results show the effectiveness of the proposed approach.", "Automatically detecting illustrations is needed for the target system.Deep Convolutional Neural Networks have been successful in computer vision tasks.DCNN with fine-tuning outperformed the other models including handcrafted features. Systems for aggregating illustrations require a function for automatically distinguishing illustrations from photographs as they crawl the network to collect images. A previous attempt to implement this functionality by designing basic features that were deemed useful for classification achieved an accuracy of only about 58 . On the other hand, deep neural networks had been successful in computer vision tasks, and convolutional neural networks (CNNs) had performed good at extracting such useful image features automatically. We evaluated alternative methods to implement this classification functionality with focus on deep neural networks. As the result of experiments, the method that fine-tuned deep convolutional neural network (DCNN) acquired 96.8 accuracy, outperforming the other models including the custom CNN models that were trained from scratch. We conclude that DCNN with fine-tuning is the best method for implementing a function for automatically distinguishing illustrations from photographs.", "Most convolutional neural networks (CNNs) lack midlevel layers that model semantic parts of objects. This limits CNN-based methods from reaching their full potential in detecting and utilizing small semantic parts in recognition. Introducing such mid-level layers can facilitate the extraction of part-specific features which can be utilized for better recognition performance. This is particularly important in the domain of fine-grained recognition. In this paper, we propose a new CNN architecture that integrates semantic part detection and abstraction (SPDACNN) for fine-grained classification. The proposed network has two sub-networks: one for detection and one for recognition. The detection sub-network has a novel top-down proposal method to generate small semantic part candidates for detection. The classification sub-network introduces novel part layers that extract features from parts detected by the detection sub-network, and combine them for recognition. As a result, the proposed architecture provides an end-to-end network that performs detection, localization of multiple semantic parts, and whole object recognition within one framework that shares the computation of convolutional filters. 
Our method outperforms state-of-theart methods with a large margin for small parts detection (e.g. our precision of 93.40 vs the best previous precision of 74.00 for detecting the head on CUB-2011). It also compares favorably to the existing state-of-the-art on finegrained classification, e.g. it achieves 85.14 accuracy on CUB-2011.", "Abstract In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we’ve found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster.", "Abstract Deep Neural Network (DNN) has recently achieved outstanding performance in a variety of computer vision tasks, including facial attribute classification. The great success of classifying facial attributes with DNN often relies on a massive amount of labelled data. However, in real-world applications, labelled data are only provided for some commonly used attributes (such as age, gender); whereas, unlabelled data are available for other attributes (such as attraction, hairline). To address the above problem, we propose a novel deep transfer neural network method based on multi-label learning for facial attribute classification, termed FMTNet, which consists of three sub-networks: the Face detection Network (FNet), the Multi-label learning Network (MNet) and the Transfer learning Network (TNet). Firstly, based on the Faster Region-based Convolutional Neural Network (Faster R-CNN), FNet is fine-tuned for face detection. Then, MNet is fine-tuned by FNet to predict multiple attributes with labelled data, where an effective loss weight scheme is developed to explicitly exploit the correlation between facial attributes based on attribute grouping. Finally, based on MNet, TNet is trained by taking advantage of unsupervised domain adaptation for unlabelled facial attribute classification. The three sub-networks are tightly coupled to perform effective facial attribute classification. A distinguishing characteristic of the proposed FMTNet method is that the three sub-networks (FNet, MNet and TNet) are constructed in a similar network structure. 
Extensive experimental results on challenging face datasets demonstrate the effectiveness of our proposed method compared with several state-of-the-art methods.", "In recent years, the convolutional neural network (CNN) has achieved great success in many computer vision tasks. Partially inspired by neuroscience, CNN shares many properties with the visual system of the brain. A prominent difference is that CNN is typically a feed-forward architecture while in the visual system recurrent connections are abundant. Inspired by this fact, we propose a recurrent CNN (RCNN) for object recognition by incorporating recurrent connections into each convolutional layer. Though the input is static, the activities of RCNN units evolve over time so that the activity of each unit is modulated by the activities of its neighboring units. This property enhances the ability of the model to integrate the context information, which is important for object recognition. Like other recurrent neural networks, unfolding the RCNN through time can result in an arbitrarily deep network with a fixed number of parameters. Furthermore, the unfolded network has multiple paths, which can facilitate the learning process. The model is tested on four benchmark object recognition datasets: CIFAR-10, CIFAR-100, MNIST and SVHN. With fewer trainable parameters, RCNN outperforms the state-of-the-art models on all of these datasets. Increasing the number of parameters leads to even better performance. These results demonstrate the advantage of the recurrent structure over purely feed-forward structure for object recognition." ] }
1708.02716
2743832495
Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of convolutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur.
CNNs and RNNs are two branches of neural networks. CNNs have achieved great success in many practical applications @cite_2 @cite_27 . LeNet @cite_7 is a classical CNN architecture that has been used for handwritten digit recognition. Wang @cite_8 use a Siamese network to retrieve 3D models from sketches. However, all of the above methods treat sketches as traditional images and ignore their sparsity.
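The Siamese retrieval idea mentioned for @cite_8 can be sketched as follows: two encoders map a sketch and a rendered view of a 3D model into a shared embedding space trained with a contrastive loss. Weight sharing, the network sizes, and the random inputs are simplifying assumptions here; the cited work uses a more elaborate two-branch design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(16 * 4 * 4, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)   # unit-length embeddings

def contrastive_loss(za, zb, same, margin=0.5):
    # Pull matching sketch/view pairs together, push mismatched pairs beyond the margin.
    d = (za - zb).pow(2).sum(dim=1).sqrt()
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

enc = Encoder()
sketches, views = torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64)
same = torch.randint(0, 2, (8,)).float()
print(float(contrastive_loss(enc(sketches), enc(views), same)))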
{ "cite_N": [ "@cite_8", "@cite_27", "@cite_7", "@cite_2" ], "mid": [ "2949314557", "2798735168", "1922658220", "2774989306" ], "abstract": [ "Convolutional Neural Networks (CNNs) can be shifted across 2D images or 3D videos to segment them. They have a fixed input size and typically perceive only small local contexts of the pixels to be classified as foreground or background. In contrast, Multi-Dimensional Recurrent NNs (MD-RNNs) can perceive the entire spatio-temporal context of each pixel in a few sweeps through all pixels, especially when the RNN is a Long Short-Term Memory (LSTM). Despite these theoretical advantages, however, unlike CNNs, previous MD-LSTM variants were hard to parallelize on GPUs. Here we re-arrange the traditional cuboid order of computations in MD-LSTM in pyramidal fashion. The resulting PyraMiD-LSTM is easy to parallelize, especially for 3D data such as stacks of brain slice images. PyraMiD-LSTM achieved best known pixel-wise brain image segmentation results on MRBrainS13 (and competitive results on EM-ISBI12).", "Due to the spatially variant blur caused by camera shake and object motions under different scene depths, deblurring images captured from dynamic scenes is challenging. Although recent works based on deep neural networks have shown great progress on this problem, their models are usually large and computationally expensive. In this paper, we propose a novel spatially variant neural network to address the problem. The proposed network is composed of three deep convolutional neural networks (CNNs) and a recurrent neural network (RNN). RNN is used as a deconvolution operator performed on feature maps extracted from the input image by one of the CNNs. Another CNN is used to learn the weights for the RNN at every location. As a result, the RNN is spatially variant and could implicitly model the deblurring process with spatially variant kernels. The third CNN is used to reconstruct the final deblurred feature maps into restored image. The whole network is end-to-end trainable. Our analysis shows that the proposed network has a large receptive field even with a small model size. Quantitative and qualitative evaluations on public datasets demonstrate that the proposed method performs favorably against state-of-the-art algorithms in terms of accuracy, speed, and model size.", "In existing convolutional neural networks (CNNs), both convolution and pooling are locally performed for image regions separately, no contextual dependencies between different image regions have been taken into consideration. Such dependencies represent useful spatial structure information in images. Whereas recurrent neural networks (RNNs) are designed for learning contextual dependencies among sequential data by using the recurrent (feedback) connections. In this work, we propose the convolutional recurrent neural network (C-RNN), which learns the spatial dependencies between image regions to enhance the discriminative power of image representation. The C-RNN is trained in an end-to-end manner from raw pixel images. CNN layers are firstly processed to generate middle level features. RNN layer is then learned to encode spatial dependencies. The C-RNN can learn better image representation, especially for images with obvious spatial contextual dependencies. Our method achieves competitive performance on ILSVRC 2012, SUN 397, and MIT indoor.", "Convolutional neural networks (CNNs) have demonstrated their ability object detection of very high resolution remote sensing images. 
However, CNNs have obvious limitations for modeling geometric variations in remote sensing targets. In this paper, we introduced a CNN structure, namely deformable ConvNet, to address geometric modeling in object recognition. By adding offsets to the convolution layers, feature mapping of CNN can be applied to unfixed locations, enhancing CNNs’ visual appearance understanding. In our work, a deformable region-based fully convolutional networks (R-FCN) was constructed by substituting the regular convolution layer with a deformable convolution layer. To efficiently use this deformable convolutional neural network (ConvNet), a training mechanism is developed in our work. We first set the pre-trained R-FCN natural image model as the default network parameters in deformable R-FCN. Then, this deformable ConvNet was fine-tuned on very high resolution (VHR) remote sensing images. To remedy the increase in lines like false region proposals, we developed aspect ratio constrained non maximum suppression (arcNMS). The precision of deformable ConvNet for detecting objects was then improved. An end-to-end approach was then developed by combining deformable R-FCN, a smart fine-tuning strategy and aspect ratio constrained NMS. The developed method was better than a state-of-the-art benchmark in object detection without data augmentation." ] }
1708.02716
2743832495
Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of convolutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur.
In order to exploit an appropriate learning architecture for sketch recognition, @cite_30 @cite_21 develop a network named Sketch-A-Net (SAN), which enlarges the pooling sizes and filter patches to cater to the sparse character of sketches. Different from traditional images, sketches carry an inherent sequential property, and @cite_30 and @cite_21 divide the strokes into several groups according to the stroke order. However, CNNs cannot build connections between sequential strokes.
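The architectural adjustment described above (larger filters and pooling to suit sparse, texture-free strokes) can be illustrated with a toy first stage; the sizes below are assumptions chosen in the spirit of the cited design rather than its published configuration.

import torch
import torch.nn as nn

sketch_stem = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=15, stride=3),  # unusually large filters for sparse strokes
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),       # larger pooling tolerates stroke displacement
)

x = torch.zeros(1, 1, 225, 225)                  # single-channel sketch input, no colour/texture
print(sketch_stem(x).shape)                      # torch.Size([1, 64, 35, 35])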
{ "cite_N": [ "@cite_30", "@cite_21" ], "mid": [ "2493181180", "2743832495" ], "abstract": [ "We propose a deep learning approach to free-hand sketch recognition that achieves state-of-the-art performance, significantly surpassing that of humans. Our superior performance is a result of modelling and exploiting the unique characteristics of free-hand sketches, i.e., consisting of an ordered set of strokes but lacking visual cues such as colour and texture, being highly iconic and abstract, and exhibiting extremely large appearance variations due to different levels of abstraction and deformation. Specifically, our deep neural network, termed Sketch-a-Net has the following novel components: (i) we propose a network architecture designed for sketch rather than natural photo statistics. (ii) Two novel data augmentation strategies are developed which exploit the unique sketch-domain properties to modify and synthesise sketch training data at multiple abstraction levels. Based on this idea we are able to both significantly increase the volume and diversity of sketches for training, and address the challenge of varying levels of sketching detail commonplace in free-hand sketches. (iii) We explore different network ensemble fusion strategies, including a re-purposed joint Bayesian scheme, to further improve recognition performance. We show that state-of-the-art deep networks specifically engineered for photos of natural objects fail to perform well on sketch recognition, regardless whether they are trained using photos or sketches. Furthermore, through visualising the learned filters, we offer useful insights in to where the superior performance of our network comes from.", "Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of constitutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur." ] }
1708.02716
2743832495
Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of convolutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur.
The sequential property is not exclusive to sketches. Many works resort to RNNs to improve performance in speech recognition @cite_18 and text generation @cite_28 . An RNN is specialized for processing input sequences: its hidden units carry information across time steps, delivering the outputs of earlier parts of the sequence to later ones. However, it has a significant limitation known as the vanishing gradient problem: when the input sequence is quite long, it is difficult for an RNN to propagate gradients through the many unrolled layers of the network, so the gradients easily vanish or explode @cite_3 . To overcome this limitation, long short-term memory (LSTM) @cite_3 and the gated recurrent unit (GRU) @cite_4 were proposed. The GRU can be regarded as a light-weight version of the LSTM, and it outperforms the LSTM in certain cases while learning a smaller number of parameters @cite_10 .
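The light-weight claim about GRUs can be checked with a quick parameter count; this is an illustration rather than an experiment from the paper, and the input and hidden sizes are arbitrary assumptions.

import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

input_size, hidden_size = 128, 256
lstm = nn.LSTM(input_size, hidden_size, batch_first=True)  # four gated weight blocks
gru = nn.GRU(input_size, hidden_size, batch_first=True)    # three gated weight blocks
print("LSTM parameters:", n_params(lstm))
print("GRU parameters: ", n_params(gru))                   # noticeably fewer than the LSTM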
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_28", "@cite_3", "@cite_10" ], "mid": [ "2791366550", "2227316993", "2953188482", "2319453305", "2785979806" ], "abstract": [ "Recurrent neural networks (RNNs) have been widely used for processing sequential data. However, RNNs are commonly difficult to train due to the well-known gradient vanishing and exploding problems and hard to learn long-term patterns. Long short-term memory (LSTM) and gated recurrent unit (GRU) were developed to address these problems, but the use of hyperbolic tangent and the sigmoid action functions results in gradient decay over layers. Consequently, construction of an efficiently trainable deep network is challenging. In addition, all the neurons in an RNN layer are entangled together and their behaviour is hard to interpret. To address these problems, a new type of RNN, referred to as independently recurrent neural network (IndRNN), is proposed in this paper, where neurons in the same layer are independent of each other and they are connected across layers. We have shown that an IndRNN can be easily regulated to prevent the gradient exploding and vanishing problems while allowing the network to learn long-term dependencies. Moreover, an IndRNN can work with non-saturated activation functions such as relu (rectified linear unit) and be still trained robustly. Multiple IndRNNs can be stacked to construct a network that is deeper than the existing RNNs. Experimental results have shown that the proposed IndRNN is able to process very long sequences (over 5000 time steps), can be used to construct very deep networks (21 layers used in the experiment) and still be trained robustly. Better performances have been achieved on various tasks by using IndRNNs compared with the traditional RNN and LSTM.", "Recurrent Neural Network (RNN) and one of its specific architectures, Long Short-Term Memory (LSTM), have been widely used for sequence labeling. In this paper, we first enhance LSTM-based sequence labeling to explicitly model label dependencies. Then we propose another enhancement to incorporate the global information spanning over the whole input sequence. The latter proposed method, encoder-labeler LSTM, first encodes the whole input sequence into a fixed length vector with the encoder LSTM, and then uses this encoded vector as the initial state of another LSTM for sequence labeling. Combining these methods, we can predict the label sequence with considering label dependencies and information of whole input sequence. In the experiments of a slot filling task, which is an essential component of natural language understanding, with using the standard ATIS corpus, we achieved the state-of-the-art F1-score of 95.66 .", "Recurrent neural networks (RNNs) have shown clear superiority in sequence modeling, particularly the ones with gated units, such as long short-term memory (LSTM) and gated recurrent unit (GRU). However, the dynamic properties behind the remarkable performance remain unclear in many applications, e.g., automatic speech recognition (ASR). This paper employs visualization techniques to study the behavior of LSTM and GRU when performing speech recognition tasks. Our experiments show some interesting patterns in the gated memory, and some of them have inspired simple yet effective modifications on the network structure. We report two of such modifications: (1) lazy cell update in LSTM, and (2) shortcut connections for residual learning. 
Both modifications lead to more comprehensible and powerful networks.", "Recurrent neural networks (RNN) have been very successful in handling sequence data. However, understanding RNN and finding the best practices for RNN learning is a difficult task, partly because there are many competing and complex hidden units, such as the long short-term memory (LSTM) and the gated recurrent unit (GRU). We propose a gated unit for RNN, named as minimal gated unit (MGU), since it only contains one gate, which is a minimal design among all gated hidden units. The design of MGU benefits from evaluation results on LSTM and GRU in the literature. Experiments on various sequence data show that MGU has comparable accuracy with GRU, but has a simpler structure, fewer parameters, and faster training. Hence, MGU is suitable in RNN's applications. Its simple architecture also means that it is easier to evaluate and tune, and in principle it is easier to study MGU's properties theoretically and empirically.", "The Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) has been demonstrated successful in handwritten text recognition of Western and Arabic scripts. It is totally segmentation free and can be trained directly from text line images. However, the application of LSTM-RNNs (including Multi-Dimensional LSTM-RNN (MDLSTM-RNN)) to Chinese text recognition has shown limited success, even when training them with large datasets and using pre-training on datasets of other languages. In this paper, we propose a handwritten Chinese text recognition method by using Separable MDLSTMRNN (SMDLSTM-RNN) modules, which extract contextual information in various directions, and consume much less computation efforts and resources compared with the traditional MDLSTMRNN. Experimental results on the ICDAR-2013 competition dataset show that the proposed method performs significantly better than the previous LSTM-based methods, and can compete with the state-of-the-art systems." ] }
1708.02716
2743832495
Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of convolutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur.
Sarvadevabhatla @cite_25 take the strokes in drawing order as a sequence and feed their features to a GRU. In this way, long-term sequential and structural regularities of strokes can be exploited, and as a result they achieve the best recognition performance. However, stroke order varies severely within the same category, which makes the network's predictions fluctuate.
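A hedged sketch of the stroke-sequence idea summarized above: per-stroke feature vectors are fed, in drawing order, to a GRU whose final hidden state is classified. The feature dimension, the 250-way output, and the random inputs are placeholders, and this is not the cited model itself.

import torch
import torch.nn as nn

class StrokeSequenceClassifier(nn.Module):
    def __init__(self, feat_dim=64, hidden=128, num_classes=250):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, stroke_feats):          # (batch, num_strokes, feat_dim), in drawing order
        _, h_last = self.gru(stroke_feats)    # final hidden state: (1, batch, hidden)
        return self.fc(h_last.squeeze(0))

logits = StrokeSequenceClassifier()(torch.rand(4, 12, 64))  # 4 sketches, 12 strokes each
print(logits.shape)                                          # torch.Size([4, 250])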
{ "cite_N": [ "@cite_25" ], "mid": [ "2743832495" ], "abstract": [ "Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of constitutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur." ] }
1708.02760
2743858606
The ability to ask questions is a powerful tool to gather information in order to learn about the world and resolve ambiguities. In this paper, we explore a novel problem of generating discriminative questions to help disambiguate visual instances. Our work can be seen as a complement and new extension to the rich research studies on image captioning and question answering. We introduce the first large-scale dataset with over 10,000 carefully annotated images-question tuples to facilitate benchmarking. In particular, each tuple consists of a pair of images and 4.6 discriminative questions (as positive samples) and 5.9 non-discriminative questions (as negative samples) on average. In addition, we present an effective method for visual discriminative question generation. The method can be trained in a weakly supervised manner without discriminative images-question tuples but just existing visual question answering datasets. Promising results are shown against representative baselines through quantitative evaluations and user studies.
The goal of image captioning is to automatically generate natural language descriptions of images @cite_33 . The CNN-LSTM framework has been commonly adopted and shows good performance @cite_25 @cite_17 @cite_0 @cite_44 @cite_18 . Xu et al. @cite_55 introduce an attention mechanism to exploit spatial information from the image context. Krishna et al. @cite_19 incorporate object detection @cite_6 to generate descriptions for dense regions. Jia et al. @cite_35 extract semantic information from images as an extra guide for caption generation. Krause et al. @cite_60 use a hierarchical RNN to generate entire paragraphs that describe images, which is more descriptive than a single-sentence caption. In contrast to these studies, we are interested in generating a question rather than a caption to distinguish two objects in images.
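For orientation, the CNN-LSTM captioning framework surveyed above can be reduced to the following hedged skeleton: a CNN encodes the image into a vector that initializes an LSTM decoder over word embeddings. Vocabulary size, dimensions, and inputs are placeholders, and the attention, dense-region, and hierarchical components of the cited works are omitted.

import torch
import torch.nn as nn

class TinyCaptioner(nn.Module):
    def __init__(self, vocab=1000, embed=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                       # stand-in image encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, hidden),
        )
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, images, captions):
        h0 = self.cnn(images).unsqueeze(0)              # image feature initializes the decoder
        states, _ = self.lstm(self.embed(captions), (h0, torch.zeros_like(h0)))
        return self.out(states)                         # per-step word logits

logits = TinyCaptioner()(torch.rand(2, 3, 64, 64), torch.randint(0, 1000, (2, 7)))
print(logits.shape)                                     # torch.Size([2, 7, 1000])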
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_33", "@cite_60", "@cite_55", "@cite_6", "@cite_0", "@cite_44", "@cite_19", "@cite_25", "@cite_17" ], "mid": [ "2885822952", "2951159095", "2556388456", "2552161745", "2950477994", "2963758027", "2743573407", "2952782394", "2963170456", "2799042952", "1714639292" ], "abstract": [ "Image captioning, which aims to automatically generate a sentence description for an image, has attracted much research attention in cognitive computing. The task is rather challenging, since it requires cognitively combining the techniques from both computer vision and natural language processing domains. Existing CNN-RNN framework-based methods suffer from two main problems: in the training phase, all the words of captions are treated equally without considering the importance of different words; in the caption generation phase, the semantic objects or scenes might be misrecognized. In our paper, we propose a method based on the encoder-decoder framework, named Reference based Long Short Term Memory (R-LSTM), aiming to lead the model to generate a more descriptive sentence for the given image by introducing reference information. Specifically, we assign different weights to the words according to the correlation between words and images during the training phase. We additionally maximize the consensus score between the captions generated by the captioning model and the reference information from the neighboring images of the target image, which can reduce the misrecognition problem. We have conducted extensive experiments and comparisons on the benchmark datasets MS COCO and Flickr30k. The results show that the proposed approach can outperform the state-of-the-art approaches on all metrics, especially achieving a 10.37 improvement in terms of CIDEr on MS COCO. By analyzing the quality of the generated captions, we come to a conclusion that through the introduction of reference information, our model can learn the key information of images and generate more trivial and relevant words for images.", "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and or 3-D Convolutional Neural Networks (CNN) to encode video content and Recurrent Neural Networks (RNN) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)---a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8 and 74.0 in terms of BLEU@4 and CIDEr-D. Superior results when compared to state-of-the-art methods are also reported on M-VAD and MPII-MD.", "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. 
Most recent progress in this problem has been achieved through employing 2-D and or 3-D Convolutional Neural Networks (CNNs) to encode video content and Recurrent Neural Networks (RNNs) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)—a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8 and 74.0 in terms of BLEU@4 and CIDEr-D. Superior results are also reported on M-VAD and MPII-MD when compared to state-of-the-art methods.", "Automatically describing an image with a natural language has been an emerging challenge in both fields of computer vision and natural language processing. In this paper, we present Long Short-Term Memory with Attributes (LSTM-A) - a novel architecture that integrates attributes into the successful Convolutional Neural Networks (CNNs) plus Recurrent Neural Networks (RNNs) image captioning framework, by training them in an end-to-end manner. Particularly, the learning of attributes is strengthened by integrating inter-attribute correlations into Multiple Instance Learning (MIL). To incorporate attributes into captioning, we construct variants of architectures by feeding image representations and attributes into RNNs in different ways to explore the mutual but also fuzzy relationship between them. Extensive experiments are conducted on COCO image captioning dataset and our framework shows clear improvements when compared to state-of-the-art deep models. More remarkably, we obtain METEOR CIDEr-D of 25.5 100.2 on testing data of widely used and publicly available splits in [10] when extracting image representations by GoogleNet and achieve superior performance on COCO captioning Leaderboard.", "Automatically describing an image with a natural language has been an emerging challenge in both fields of computer vision and natural language processing. In this paper, we present Long Short-Term Memory with Attributes (LSTM-A) - a novel architecture that integrates attributes into the successful Convolutional Neural Networks (CNNs) plus Recurrent Neural Networks (RNNs) image captioning framework, by training them in an end-to-end manner. To incorporate attributes, we construct variants of architectures by feeding image representations and attributes into RNNs in different ways to explore the mutual but also fuzzy relationship between them. Extensive experiments are conducted on COCO image captioning dataset and our framework achieves superior results when compared to state-of-the-art deep models. 
Most remarkably, we obtain METEOR CIDEr-D of 25.2 98.6 on testing data of widely used and publicly available splits in (Karpathy & Fei-Fei, 2015) when extracting image representations by GoogleNet and achieve to date top-1 performance on COCO captioning Leaderboard.", "We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external regions proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state of the art approaches in both generation and retrieval settings.", "Image captioning often requires a large set of training image-sentence pairs. In practice, however, acquiring sufficient training pairs is always expensive, making the recent captioning models limited in their ability to describe objects outside of training corpora (i.e., novel objects). In this paper, we present Long Short-Term Memory with Copying Mechanism (LSTM-C) --- a new architecture that incorporates copying into the Convolutional Neural Networks (CNN) plus Recurrent Neural Networks (RNN) image captioning framework, for describing novel objects in captions. Specifically, freely available object recognition datasets are leveraged to develop classifiers for novel objects. Our LSTM-C then nicely integrates the standard word-by-word sentence generation by a decoder RNN with copying mechanism which may instead select words from novel objects at proper places in the output sentence. Extensive experiments are conducted on both MSCOCO image captioning and ImageNet datasets, demonstrating the ability of our proposed LSTM-C architecture to describe novel objects. Furthermore, superior results are reported when compared to state-of-the-art deep models.", "Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that then generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-of-the-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. 
However, the gains we see in BLEU do not translate to human judgments.", "The aim of image captioning is to generate captions by machine to describe image contents. Despite many efforts, generating discriminative captions for images remains non-trivial. Most traditional approaches imitate the language structure patterns, thus tend to fall into a stereotype of replicating frequent phrases or sentences and neglect unique aspects of each image. In this work, we propose an image captioning framework with a self-retrieval module as training guidance, which encourages generating discriminative captions. It brings unique advantages: (1) the self-retrieval guidance can act as a metric and an evaluator of caption discriminativeness to assure the quality of generated captions. (2) The correspondence between generated captions and images are naturally incorporated in the generation process without human annotations, and hence our approach could utilize a large amount of unlabeled images to boost captioning performance with no additional annotations. We demonstrate the effectiveness of the proposed retrieval-guided method on COCO and Flickr30k captioning datasets, and show its superior captioning performance with more discriminative captions.", "Automatically describing a video with natural language is regarded as a fundamental challenge in computer vision. The problem nevertheless is not trivial especially when a video contains multiple events to be worthy of mention, which often happens in real videos. A valid question is how to temporally localize and then describe events, which is known as \"dense video captioning.\" In this paper, we present a novel framework for dense video captioning that unifies the localization of temporal event proposals and sentence generation of each proposal, by jointly training them in an end-to-end manner. To combine these two worlds, we integrate a new design, namely descriptiveness regression, into a single shot detection structure to infer the descriptive complexity of each detected proposal via sentence generation. This in turn adjusts the temporal locations of each event proposal. Our model differs from existing dense video captioning methods since we propose a joint and global optimization of detection and captioning, and the framework uniquely capitalizes on an attribute-augmented video captioning architecture. Extensive experiments are conducted on ActivityNet Captions dataset and our framework shows clear improvements when compared to the state-of-the-art techniques. More remarkably, we obtain a new record: METEOR of 12.96 on ActivityNet Captions official test set.", "Generating natural language descriptions for in-the-wild videos is a challenging task. Most state-of-the-art methods for solving this problem borrow existing deep convolutional neural network (CNN) architectures (AlexNet, GoogLeNet) to extract a visual representation of the input video. However, these deep CNN architectures are designed for single-label centered-positioned object classification. While they generate strong semantic features, they have no inherent structure allowing them to detect multiple objects of different sizes and locations in the frame. Our paper tries to solve this problem by integrating the base CNN into several fully convolutional neural networks (FCNs) to form a multi-scale network that handles multiple receptive field sizes in the original image. 
FCNs, previously applied to image segmentation, can generate class heat-maps efficiently compared to sliding window mechanisms, and can easily handle multiple scales. To further handle the ambiguity over multiple objects and locations, we incorporate the Multiple Instance Learning mechanism (MIL) to consider objects in different positions and at different scales simultaneously. We integrate our multi-scale multi-instance architecture with a sequence-to-sequence recurrent neural network to generate sentence descriptions based on the visual representation. Ours is the first end-to-end trainable architecture that is capable of multi-scale region processing. Evaluation on a Youtube video dataset shows the advantage of our approach compared to the original single-scale whole frame CNN model. Our flexible and efficient architecture can potentially be extended to support other video processing tasks." ] }
1708.02760
2743858606
The ability to ask questions is a powerful tool to gather information in order to learn about the world and resolve ambiguities. In this paper, we explore a novel problem of generating discriminative questions to help disambiguate visual instances. Our work can be seen as a complement and new extension to the rich research studies on image captioning and question answering. We introduce the first large-scale dataset with over 10,000 carefully annotated images-question tuples to facilitate benchmarking. In particular, each tuple consists of a pair of images and 4.6 discriminative questions (as positive samples) and 5.9 non-discriminative questions (as negative samples) on average. In addition, we present an effective method for visual discriminative question generation. The method can be trained in a weakly supervised manner without discriminative images-question tuples but just existing visual question answering datasets. Promising results are shown against representative baselines through quantitative evaluations and user studies.
A closely related task to VDQG is REG, where the model is required to generate unambiguous object descriptions. Referring expressions have been studied in Natural Language Processing (NLP) @cite_43 @cite_22 @cite_52 . Kazemzadeh et al. @cite_27 introduce the first large-scale dataset for REG in real-world scenes. They use images from the ImageCLEF dataset @cite_49 , and collect referring expression annotations by developing a ReferIt game. The authors of @cite_16 @cite_14 build two larger REG datasets by applying similar approaches on top of MS COCO @cite_28 . The CNN-LSTM model has been shown to be effective in both generation @cite_16 @cite_14 and comprehension @cite_29 @cite_59 of referring expressions. Mao et al. @cite_16 introduce a discriminative loss function based on Maximum Mutual Information. Yu et al. @cite_14 study the use of context in the REG task. Yu et al. @cite_41 propose a speaker-listener-reinforcer framework for REG that is end-to-end trainable by reinforcement learning.
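The discriminative objective of @cite_16 can be illustrated with a hedged sketch: the exact Maximum Mutual Information formulation is not reproduced here; the margin-based variant below simply assumes the REG model exposes per-region log-likelihoods of an expression, and all constants are illustrative.

```python
# Hedged sketch of an MMI-style discriminative objective for referring
# expressions: make the expression likely for the target region and, by a
# margin, less likely for distractor regions. This is an illustrative
# variant, not the exact loss of the cited work.
import torch

def mmi_margin_loss(logp_target, logp_distractors, margin=1.0):
    # logp_target: (B,) log p(expression | target region)
    # logp_distractors: (B, K) log p(expression | distractor regions)
    gen_loss = -logp_target.mean()                               # generation term
    gap = margin - (logp_target.unsqueeze(1) - logp_distractors)
    disc_loss = torch.clamp(gap, min=0.0).mean()                 # hinge on the margin
    return gen_loss + disc_loss
```

In practice the distractor log-likelihoods would come from scoring the same expression against other regions in the image, which is the intuition the cited MMI objective formalizes.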
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_28", "@cite_41", "@cite_29", "@cite_52", "@cite_43", "@cite_27", "@cite_49", "@cite_59", "@cite_16" ], "mid": [ "2556388456", "2951159095", "2885822952", "2122514299", "2585635281", "2339652278", "2949257576", "2950401034", "2962764817", "2951423096", "2174492417" ], "abstract": [ "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and or 3-D Convolutional Neural Networks (CNNs) to encode video content and Recurrent Neural Networks (RNNs) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)—a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8 and 74.0 in terms of BLEU@4 and CIDEr-D. Superior results are also reported on M-VAD and MPII-MD when compared to state-of-the-art methods.", "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and or 3-D Convolutional Neural Networks (CNN) to encode video content and Recurrent Neural Networks (RNN) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)---a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8 and 74.0 in terms of BLEU@4 and CIDEr-D. Superior results when compared to state-of-the-art methods are also reported on M-VAD and MPII-MD.", "Image captioning, which aims to automatically generate a sentence description for an image, has attracted much research attention in cognitive computing. The task is rather challenging, since it requires cognitively combining the techniques from both computer vision and natural language processing domains. 
Existing CNN-RNN framework-based methods suffer from two main problems: in the training phase, all the words of captions are treated equally without considering the importance of different words; in the caption generation phase, the semantic objects or scenes might be misrecognized. In our paper, we propose a method based on the encoder-decoder framework, named Reference based Long Short Term Memory (R-LSTM), aiming to lead the model to generate a more descriptive sentence for the given image by introducing reference information. Specifically, we assign different weights to the words according to the correlation between words and images during the training phase. We additionally maximize the consensus score between the captions generated by the captioning model and the reference information from the neighboring images of the target image, which can reduce the misrecognition problem. We have conducted extensive experiments and comparisons on the benchmark datasets MS COCO and Flickr30k. The results show that the proposed approach can outperform the state-of-the-art approaches on all metrics, especially achieving a 10.37 improvement in terms of CIDEr on MS COCO. By analyzing the quality of the generated captions, we come to a conclusion that through the introduction of reference information, our model can learn the key information of images and generate more trivial and relevant words for images.", "Most previous work on trainable language generation has focused on two paradigms: (a) using a generation decisions of an existing generator. Both approaches rely on the existence of a handcrafted generation component, which is likely to limit their scalability to new domains. The first contribution of this article is to present Bagel, a fully data-driven generation method that treats the language generation task as a search for the most likely sequence of semantic concepts and realization phrases, according to Factored Language Models (FLMs). As domain utterances are not readily available for most natural language generation tasks, a large creative effort is required to produce the data necessary to represent human linguistic variation for nontrivial domains. This article is based on the assumption that learning to produce paraphrases can be facilitated by collecting data from a large sample of untrained annotators using crowdsourcing—rather than a few domain experts—by relying on a coarse meaning representation. A second contribution of this article is to use crowdsourced data to show how dialogue naturalness can be improved by learning to vary the output utterances generated for a given semantic input. Two data-driven methods for generating paraphrases in dialogue are presented: (a) by sampling from the n-best list of realizations produced by Bagel's FLM reranker; and (b) by learning a structured perceptron predicting whether candidate realizations are valid paraphrases. We train Bagel on a set of 1,956 utterances produced by 137 annotators, which covers 10 types of dialogue acts and 128 semantic concepts in a tourist information system for Cambridge. An automated evaluation shows that Bagel outperforms utterance class LM baselines on this domain. A human evaluation of 600 resynthesized dialogue extracts shows that Bagel's FLM output produces utterances comparable to a handcrafted baseline, whereas the perceptron classifier performs worse. Interestingly, human judges find the system sampling from the n-best list to be more natural than a system always returning the first-best utterance. 
The judges are also more willing to interact with the n-best system in the future. These results suggest that capturing the large variation found in human language using data-driven methods is beneficial for dialogue interaction.", "The main contribution of this paper is a simple semisupervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market- 1501, CUHK03 and DukeMTMC-reID, we obtain +4.37 , +1.6 and +2.46 improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6 improvement over a strong baseline. The code is available at https: github.com layumi Person-reID_GAN.", "This work presents an end-to-end trainable deep bidirectional LSTM (Long-Short Term Memory) model for image captioning. Our model builds on a deep convolutional neural network (CNN) and two separate LSTM networks. It is capable of learning long term visual-language interactions by making use of history and future context information at high level semantic space. Two novel deep bidirectional variant models, in which we increase the depth of nonlinearity transition in different way, are proposed to learn hierarchical visual-language embeddings. Data augmentation techniques such as multi-crop, multi-scale and vertical mirror are proposed to prevent overfitting in training deep models. We visualize the evolution of bidirectional LSTM internal states over time and qualitatively analyze how our models \"translate\" image to sentence. Our proposed models are evaluated on caption generation and image-sentence retrieval tasks with three benchmark datasets: Flickr8K, Flickr30K and MSCOCO datasets. We demonstrate that bidirectional LSTM models achieve highly competitive performance to the state-of-the-art results on caption generation even without integrating additional mechanism (e.g. object detection, attention model etc.) and significantly outperform recent methods on retrieval task", "The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. 
We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37 , +1.6 and +2.46 improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6 improvement over a strong baseline. The code is available at this https URL", "We introduce an inference technique to produce discriminative context-aware image captions (captions that describe differences between images or visual concepts) using only generic context-agnostic training data (captions that describe a concept or an image in isolation). For example, given images and captions of \"siamese cat\" and \"tiger cat\", we generate language that describes the \"siamese cat\" in a way that distinguishes it from \"tiger cat\". Our key novelty is that we show how to do joint inference over a language model that is context-agnostic and a listener which distinguishes closely-related concepts. We first apply our technique to a justification task, namely to describe why an image contains a particular fine-grained category as opposed to another closely-related category of the CUB-200-2011 dataset. We then study discriminative image captioning to generate language that uniquely refers to one of two semantically-similar images in the COCO dataset. Evaluations with discriminative ground truth for justification and human studies for discriminative image captioning reveal that our approach outperforms baseline generative and speaker-listener approaches for discrimination.", "We focus on grounding (i.e., localizing or linking) referring expressions in images, e.g., \"largest elephant standing behind baby elephant\". This is a general yet challenging vision-language task since it does not only require the localization of objects, but also the multimodal comprehension of context - visual attributes (e.g., \"largest\", \"baby\") and relationships (e.g., \"behind\") that help to distinguish the referent from other objects, especially those of the same category. Due to the exponential complexity involved in modeling the context associated with multiple image regions, existing work oversimplifies this task to pairwise region modeling by multiple instance learning. In this paper, we propose a variational Bayesian method, called Variational Context, to solve the problem of complex context modeling in referring expression grounding. Our model exploits the reciprocal relation between the referent and context, i.e., either of them influences estimation of the posterior distribution of the other, and thereby the search space of context can be greatly reduced. We also extend the model to unsupervised setting where no annotation for the referent is available. Extensive experiments on various benchmarks show consistent improvement over state-of-the-art methods in both supervised and unsupervised settings. The code is available at https: github.com yuleiniu vc .", "A new type of End-to-End system for text-dependent speaker verification is presented in this paper. 
Previously, using the phonetically discriminative speaker discriminative DNNs as feature extractors for speaker verification has shown promising results. The extracted frame-level (DNN bottleneck, posterior or d-vector) features are equally weighted and aggregated to compute an utterance-level speaker representation (d-vector or i-vector). In this work we use speaker discriminative CNNs to extract the noise-robust frame-level features. These features are smartly combined to form an utterance-level speaker vector through an attention mechanism. The proposed attention model takes the speaker discriminative information and the phonetic information to learn the weights. The whole system, including the CNN and attention model, is joint optimized using an end-to-end criterion. The training algorithm imitates exactly the evaluation process --- directly mapping a test utterance and a few target speaker utterances into a single verification score. The algorithm can automatically select the most similar impostor for each target speaker to train the network. We demonstrated the effectiveness of the proposed end-to-end system on Windows @math \"Hey Cortana\" speaker verification task.", "We propose a novel attention based deep learning architecture for visual question answering task (VQA). Given an image and an image related natural language question, VQA generates the natural language answer for the question. Generating the correct answers requires the model's attention to focus on the regions corresponding to the question, because different questions inquire about the attributes of different image regions. We introduce an attention based configurable convolutional neural network (ABC-CNN) to learn such question-guided attention. ABC-CNN determines an attention map for an image-question pair by convolving the image feature map with configurable convolutional kernels derived from the question's semantics. We evaluate the ABC-CNN architecture on three benchmark VQA datasets: Toronto COCO-QA, DAQUAR, and VQA dataset. ABC-CNN model achieves significant improvements over state-of-the-art methods on these datasets. The question-guided attention generated by ABC-CNN is also shown to reflect the regions that are highly relevant to the questions." ] }
1708.02861
2745029067
Due to the difficulty of coordination in multi-cell random access, it is a practical challenge how to achieve the optimal throughput with decentralized transmission. In this paper, we propose a decentralized multi-cell-aware opportunistic random access (MA-ORA) protocol that achieves the optimal throughput scaling in an ultra-dense @math -cell random access network with one access point (AP) and @math users in each cell, which is suited for machine-type communications. Unlike opportunistic scheduling for cellular multiple access where users are selected by base stations, under our MA-ORA protocol, each user opportunistically transmits with a predefined physical layer data rate in a decentralized manner if the desired signal power to the serving AP is sufficiently large and the generating interference leakage power to the other APs is sufficiently small (i.e., two threshold conditions are fulfilled). As a main result, it is proved that the aggregate throughput scales as @math in a high signal-to-noise ratio (SNR) regime if @math scales faster than @math for small constants @math . Our analytical result is validated by computer simulations. In addition, numerical evaluation confirms that under a practical setting, the proposed MA-ORA protocol outperforms conventional opportunistic random access protocols in terms of throughput.
There are extensive studies on interference management in cellular networks with multiple base stations @cite_31 @cite_44 . While finding the optimal strategy with respect to the Shannon-theoretic capacity of multiuser cellular networks has remained elusive, interference alignment (IA) was recently proposed to fundamentally resolve the interference problem when there are multiple communication pairs @cite_41 . It was shown that IA can asymptotically achieve the optimal degrees of freedom, which are equal to @math , in the @math -user interference channel with time-varying coefficients. Subsequent work showed that IA-based interference management schemes are applicable to various communication scenarios @cite_35 @cite_18 @cite_43 , including interfering multiple access networks @cite_42 @cite_22 @cite_48 . Beyond multiple access scenarios in which collisions can be avoided, it is also of significant importance to manage interference in random access networks. For multi-cell random access networks, several studies managed interference by performing IA @cite_34 @cite_7 @cite_6 or successive interference cancellation (SIC) @cite_40 @cite_36 . In @cite_11 @cite_13 , decentralized power allocation approaches for interference mitigation were introduced for random access with multi-packet reception and SIC capabilities at the receiver.
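The decentralized two-threshold rule described in the abstract above (transmit only when the desired signal power to the serving AP is large and the leakage power to the other APs is small) admits a compact sketch; the threshold values, unit transmit power, and Rayleigh-fading example below are illustrative assumptions, not the authors' parameters.

```python
# Sketch of the two-threshold opportunistic transmission rule: a user
# transmits only if (i) its signal power to the serving AP exceeds one
# threshold and (ii) the interference leakage it would cause to every other
# AP stays below another threshold. All constants are illustrative.
import numpy as np

def transmit_decision(gain_to_serving_ap, gains_to_other_aps,
                      tx_power=1.0, signal_threshold=2.0, leakage_threshold=0.1):
    desired_power = tx_power * gain_to_serving_ap
    leakage_power = tx_power * np.asarray(gains_to_other_aps)
    return bool(desired_power >= signal_threshold and np.all(leakage_power <= leakage_threshold))

# Example: one user in a 3-cell network (one serving AP, two other APs),
# with exponentially distributed channel gains (Rayleigh fading powers).
rng = np.random.default_rng(0)
decide = transmit_decision(gain_to_serving_ap=rng.exponential(),
                           gains_to_other_aps=rng.exponential(size=2))
print("transmit" if decide else "stay silent")
```

Because each user only needs its own channel gains to the APs, such a rule can be evaluated without any coordination among cells, which is the point of the decentralized protocol sketched in the abstract.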
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_22", "@cite_7", "@cite_41", "@cite_48", "@cite_36", "@cite_42", "@cite_6", "@cite_44", "@cite_43", "@cite_40", "@cite_31", "@cite_34", "@cite_13", "@cite_11" ], "mid": [ "2068771370", "2097395379", "1889296357", "1981109871", "2725094597", "2170510856", "2952966314", "2066452435", "2292788438", "2066106876", "2540490069", "2051577697", "2055294446", "2138070037", "1501348191", "2102157096" ], "abstract": [ "This paper considers the optimum single cell power control maximizing the aggregate (uplink) communication rate of the cell when there are peak power constraints at mobile users, and a low-complexity data decoder (without successive decoding) at the base station. It is shown that the optimum power allocation is binary, which means that links are either “on” or “off.” By exploiting further structure of the optimum binary power allocation, a simple polynomial-time algorithm for finding the optimum transmission power allocation is proposed, together with a reduced complexity near-optimal heuristic algorithm. Sufficient conditions under which channel-state aware time division multiple access (TDMA) maximizes the aggregate communication rate are established. In a numerical study, we compare and contrast the performance achieved by the optimum binary power-control policy with other suboptimum policies and the throughput capacity achievable via successive decoding. It is observed that two dominant modes of communication arise, wideband or TDMA, and that successive decoding achieves better sum-rates only under near perfect interference cancellation efficiency. In this paper, we exploit the theory of majorization to obtain the aforementioned results. In the final part of this paper, we do so to solve power-control problems in the areas of femtocells and cognitive radio and find that, again, optimal solutions have a binary (or almost binary) character.", "We investigate spatial interference statistics for multigigabit outdoor mesh networks operating in the unlicensed 60-GHz \"millimeter (mm) wave\" band. The links in such networks are highly directional: Because of the small carrier wavelength (an order of magnitude smaller than those for existing cellular and wireless local area networks), narrow beams are essential for overcoming higher path loss and can be implemented using compact electronically steerable antenna arrays. Directionality drastically reduces interference, but it also leads to \"deafness,\" making implicit coordination using carrier sense infeasible. In this paper, we make a quantitative case for rethinking medium access control (MAC) design in such settings. Unlike existing MAC protocols for omnidirectional networks, where the focus is on interference management, we contend that MAC design for 60-GHz mesh networks can essentially ignore interference and must focus instead on the challenge of scheduling half-duplex transmissions with deaf neighbors. Our main contribution is an analytical framework for estimating the collision probability in such networks as a function of the antenna patterns and the density of simultaneously transmitting nodes. The numerical results from our interference analysis show that highly directional links can indeed be modeled as pseudowired, in that the collision probability is small even with a significant density of transmitters. 
Furthermore, simulation of a rudimentary directional slotted Aloha protocol shows that packet losses due to failed coordination are an order of magnitude higher than those due to collisions, confirming our analytical results and highlighting the need for more sophisticated coordination mechanisms.", "In this paper, we study resource allocation for multiuser multiple-input–single-output secondary communication systems with multiple system design objectives. We consider cognitive radio (CR) networks where the secondary receivers are able to harvest energy from the radio frequency when they are idle. The secondary system provides simultaneous wireless power and secure information transfer to the secondary receivers. We propose a multiobjective optimization framework for the design of a Pareto-optimal resource allocation algorithm based on the weighted Tchebycheff approach. In particular, the algorithm design incorporates three important system design objectives: total transmit power minimization, energy harvesting efficiency maximization, and interference-power-leakage-to-transmit-power ratio minimization. The proposed framework takes into account a quality-of-service (QoS) requirement regarding communication secrecy in the secondary system and the imperfection of the channel state information (CSI) of potential eavesdroppers (idle secondary receivers and primary receivers) at the secondary transmitter. The proposed framework includes total harvested power maximization and interference power leakage minimization as special cases. The adopted multiobjective optimization problem is nonconvex and is recast as a convex optimization problem via semidefinite programming (SDP) relaxation. It is shown that the global optimal solution of the original problem can be constructed by exploiting both the primal and the dual optimal solutions of the SDP-relaxed problem. Moreover, two suboptimal resource allocation schemes for the case when the solution of the dual problem is unavailable for constructing the optimal solution are proposed. Numerical results not only demonstrate the close-to-optimal performance of the proposed suboptimal schemes but unveil an interesting tradeoff among the considered conflicting system design objectives as well.", "This paper proposes a decentralized interference control scheme for an OFDMA (Orthogonal Frequency Division Multiple Access) cellular scenario where multiple macrocells and femtocells are deployed. In particular, we model the distributed resource allocation problem by means of a potential game, which is demonstrated to converge to a unique and pure Nash equilibrum. In this game, the femto base stations are the players, their actions are the resource blocks and the corresponding power levels to be allocated for downlink transmission, and the utility is designed to guarantee coexistence with the macro system, guaranteeing a fair tradeoff between femto and macro systems' performances. To do that, the utility function does not only consider the capacity of the femtocells, but also the different sources of inter-system interference: macrocell to femtocell, femtocell to femtocell, and femtocell to macrocell. The game is solved by applying a better response dynamic, which first selects the best resource blocks allocation policy for the femtousers, and then selects the most appropriate power levels for each resource block, through the solution of the corresponding convex optimization problem. 
Simulation results show that the introduction of femtocells in the scenario increases the total system capacity, and that the considered utility function provides an improvement in macro system performance of up to 40 , while reducing the femto capacity only up to 7 , with respect to the well known iterative waterfilling game.", "The Interfering Broadcast Channel (IBC) applies to the downlink of (cellular and or heterogeneous) multi-cell networks, which are limited by multi-user (MU) interference. The interference alignment (IA) concept has shown that interference does not need to be inevitable. In particular spatial IA in the MIMO IBC allows for low latency transmission. However, IA requires perfect and typically global Channel State Information at the Transmitter(s) (CSIT), whose acquisition does not scale well with network size. Also, the design of transmitters (Txs) and receivers (Rxs) is coupled and hence needs to be centralized (cloud) or duplicated (distributed approach). CSIT, which is crucial in MU systems, is always imperfect in practice. We consider the joint optimal exploitation of mean (channel estimates) and covariance Gaussian partial CSIT. Indeed, in a Massive MIMO (MaMIMO) setting (esp. when combined with mmWave) the channel covariances may exhibit low rank and zero-forcing might be possible by just exploiting the covariance subspaces. But the question is the optimization of beamformers for the expected weighted sum rate (EWSR) at finite SNR. We propose explicit beamforming solutions and indicate that existing large system analysis can be extended to handle optimized beamformers with the more general partial CSIT considered here.", "In this paper we introduce a novel linear precoding technique. The approach used for the design of the precoding matrix is general and the resulting algorithm can address several optimization criteria with an arbitrary number of antennas at the user terminals. We have achieved this by designing the precoding matrices in two steps. In the first step we minimize the overlap of the row spaces spanned by the effective channel matrices of different users using a new cost function. In the next step, we optimize the system performance with respect to specific optimization criteria assuming a set of parallel single- user MIMO channels. By combining the closed form solution with Tomlinson-Harashima precoding we reach the maximum sum-rate capacity when the total number of antennas at the user terminals is less or equal to the number of antennas at the base station. By iterating the closed form solution with appropriate power loading we are able to extract the full diversity in the system and reach the maximum sum-rate capacity in case of high multi-user interference. Joint processing over a group of multi-user MIMO channels in different frequency and time slots yields maximum diversity regardless of the level of multi-user interference.", "A generalization of the Gaussian dirty-paper problem to a multiple access setup is considered. There are two additive interference signals, one known to each transmitter but none to the receiver. The rates achievable using Costa's strategies (i.e. by a random binning scheme induced by Costa's auxiliary random variables) vanish in the limit when the interference signals are strong. In contrast, it is shown that lattice strategies (\"lattice precoding\") can achieve positive rates independent of the interferences, and in fact in some cases - which depend on the noise variance and power constraints - they are optimal. 
In particular, lattice strategies are optimal in the limit of high SNR. It is also shown that the gap between the achievable rate region and the capacity region is at most 0.167 bit. Thus, the dirty MAC is another instance of a network setup, like the Korner-Marton modulo-two sum problem, where linear coding is potentially better than random binning. Lattice transmission schemes and conditions for optimality for the asymmetric case, where there is only one interference which is known to one of the users (who serves as a \"helper\" to the other user), and for the \"common interference\" case are also derived. In the former case the gap between the helper achievable rate and its capacity is at most 0.085 bit.", "Interference plays a crucial role for performance degradation in communication networks nowadays. An appealing approach to interference avoidance is the Interference Cancellation (IC) methodology. Particularly, the Successive IC (SIC) method represents the most effective IC-based reception technique in terms of Bit-Error-Rate (BER) performance and, thus, yielding to the overall system robustness. Moreover, SIC in conjunction with Orthogonal Frequency Division Multiplexing (OFDM), in the context of SIC-OFDM, is shown to approach the Shannon capacity when single-antenna infrastructures are applied while this capacity limit can be further extended with the aid of multiple antennas. Recently, SIC-based reception has studied for Orthogonal Frequency and Code Division Multiplexing or (spread-OFDM systems), namely OFCDM. Such systems provide extremely high error resilience and robustness, especially in multi-user environments. In this paper, we present a comprehensive survey on the performance of SIC for single- and multiple-antenna OFDM and spread OFDM (OFCDM) systems. Thereby, we focus on all the possible OFDM formats that have been developed so far. We study the performance of SIC by examining closely two major aspects, namely the BER performance and the computational complexity of the reception process, thus striving for the provision and optimization of SIC. Our main objective is to point out the state-of-the-art on research activity for SIC-OF(C)DM systems, applied on a variety of well-known network implementations, such as cellular, ad hoc and infrastructure-based platforms. Furthermore, we introduce a Performance-Complexity Tradeoff (PCT) in order to indicate the contribution of the approaches studied in this paper. Finally, we provide analytical performance comparison tables regarding to the surveyed techniques with respect to the PCT level.", "In this paper, we propose a distributed multi-hop interference avoidance algorithm, namely, IAA to avoid co-channel interference inside a wireless body area network (WBAN). Our proposal adopts carrier sense multiple access with collision avoidance (CSMA CA) between sources and relays and a flexible time division multiple access (FTDMA) between relays and coordinator. The proposed scheme enables low interfering nodes to transmit their messages using base channel. Depending on suitable situations, high interfering nodes double their contention windows (CW) and probably use switched orthogonal channel. Simulation results show that proposed scheme has far better minimum SINR (12dB improvement) and longer energy lifetime than other schemes (power control and opportunistic relaying). 
Additionally, we validate our proposal in a theoretical analysis and also propose a probabilistic approach to prove the outage probability can be effectively reduced to the minimal.", "A new interference management strategy is proposed to enhance the overall capacity of cellular networks (CNs) and device-to-device (D2D) systems. We consider M out of K cellular user equipments (CUEs) and one D2D pair exploiting the same resources in the uplink (UL) period under the assumption of M multiple antennas at the base station (BS). First, we use the conventional mechanism which limits the maximum transmit power of the D2D transmitter so as not to generate harmful interference from D2D systems to CNs. Second, we propose a δD-interference limited area (ILA) control scheme to manage interference from CNs to D2D systems. The method does not allow the coexistence (i.e., use of the same resources) of CUEs and a D2D pair if the CUEs are located in the δD-ILA defined as the area in which the interference to signal ratio (ISR) at the D2D receiver is greater than the predetermined threshold, δD. Next, we analyze the coverage of the δD-ILA and derive the lower bound of the ergodic capacity as a closed form. Numerical results show that the δD-ILA based D2D gain is much greater than the conventional D2D gain, whereas the capacity loss to the CNs caused by using the δD-ILA is negligibly small.", "In this paper, we study resource allocation for a multicarrier-based cognitive radio (CR) network. More specifically, we investigate the problem of secondary users’ energy-efficiency (EE) maximization problem under secondary total power and primary interference constraints. First, assuming cooperation among the secondary base stations (BSs), a centralized approach is considered to solve the EE optimization problem for the CR network where the primary and secondary users are using either orthogonal frequency-division multiplexing (OFDM) or filter bank based multicarrier (FBMC) modulations. We propose an alternating-based approach to solve the joint power-subcarrier allocation problem. More importantly, in the first place, subcarriers are allocated using a heuristic method for a given feasible power allocation. Then, we conservatively approximate the nonconvex power control problem and propose a joint successive convex approximation-Dinkelbach algorithm (SCADA) to efficiently obtain a solution to the nonconvex power control problem. The proposed algorithm is shown to converge to a solution that coincides with the stationary point of the original nonconvex power control subproblem. Moreover, we propose a dual decomposition-based decentralized version of the SCADA. Second, under the assumption of no cooperation among the secondary BSs, we propose a fully distributed power control algorithm from the perspective of game theory. The proposed algorithm is shown to converge to a Nash-equilibrium (NE) point. Moreover, we propose a sufficient condition that guarantees the uniqueness of the achieved NE. Extensive simulation analyses are further provided to highlight the advantages and demonstrate the efficiency of our proposed schemes.", "We study in this paper the network spectral efficiency of a multiple-input multiple-output (MIMO) ad hoc network with K simultaneous communicating transmitter-receiver pairs. 
Assuming that each transmitter is equipped with t antennas and each receiver with r antennas and each receiver implements single-user detection, we show that in the absence of channel state information (CSI) at the transmitters, the asymptotic network spectral efficiency is limited by r nats s Hz as Krarrinfin and is independent of t and the transmit power. With CSI corresponding to the intended receiver available at the transmitter, we demonstrate that the asymptotic spectral efficiency is at least t+r+2radictr nats s Hz. Asymptotically optimum signaling is also derived under the same CSI assumption, i.e., each transmitter knows the channel corresponding to its desired receiver only. Further capacity improvement is possible with stronger CSI assumption; we demonstrate this using a heuristic interference suppression transmit beamforming approach. The conventional orthogonal transmission approach is also analyzed. In particular, we show that with idealized medium access control, the channelized transmission has unbounded asymptotic spectral efficiency under the constant per-user power constraint. The impact of different power constraints on the asymptotic spectral efficiency is also carefully examined. Finally, numerical examples are given that confirm our analysis", "In this paper, we analyze the feasibility of linear interference alignment (IA) for multi-input-multi-output (MIMO) interference broadcast channel (MIMO-IBC) with constant coefficients. We pose and prove the necessary conditions of linear IA feasibility for general MIMO-IBC. Except for the proper condition, we find another necessary condition to ensure a kind of irreducible interference to be eliminated. We then prove the necessary and sufficient conditions for a special class of MIMO-IBC, where the numbers of antennas are divisible by the number of data streams per user. Since finding an invertible Jacobian matrix is crucial for the sufficiency proof, we first analyze the impact of sparse structure and repeated structure of the Jacobian matrix. Considering that for the MIMO-IBC the sub-matrices of the Jacobian matrix corresponding to the transmit and receive matrices have different repeated structure, we find an invertible Jacobian matrix by constructing the two sub-matrices separately. We show that for the MIMO-IBC where each user has one desired data stream, a proper system is feasible. For symmetric MIMO-IBC, we provide proper but infeasible region of antenna configurations by analyzing the difference between the necessary conditions and the sufficient conditions of linear IA feasibility.", "Interference is a major performance bottleneck in Heterogeneous Network (HetNet) due to its multi-tier structure. We propose almost blank resource block (ABRB) for interference control in HetNet. When an ABRB is scheduled in a macro BS, a resource block (RB) with blank payload is transmitted and this eliminates the interference from this macro BS to the pico BSs. We study a two timescale hierarchical radio resource management (RRM) scheme for HetNet with dynamic ABRB control. The long term controls, such as dynamic ABRB, are adaptive to the large scale fading at a RRM server for co-Tier and cross-Tier interference control. The short term control (user scheduling) is adaptive to the local channel state information at each BS to exploit the multi-user diversity. The two timescale optimization problem is challenging due to the exponentially large solution space. 
We exploit the sparsity in the interference graph of the HetNet topology and derive structural properties for the optimal ABRB control. Based on that, we propose a two timescale alternative optimization solution for user scheduling and ABRB control. The solution has low complexity and is asymptotically optimal at high SNR. Simulations show that the proposed solution has significant gain over various baselines.", "At present, operators address the explosive growth of mobile data demand by densification of the cellular network so as to reduce the transmitter-receiver distance and to achieve higher spectral efficiency. Due to such network densification and the intense proliferation of wireless devices, modern wireless networks are interference-limited, which motivates the use of interference mitigation and coordination techniques. In this work, we develop a statistical framework to evaluate the performance of multi-tier heterogeneous networks with successive interference cancellation (SIC) capabilities, accounting for the computational complexity of the cancellation scheme and relevant network related parameters such as random location of the access points (APs) and mobile users, and the characteristics of the wireless propagation channel. We explicitly model the consecutive events of canceling interferers and we derive the success probability to cancel the n-th strongest signal and to decode the signal of interest after n cancellations. When users are connected to the AP which provides the maximum average received signal power, the analysis indicates that the performance gains of SIC diminish quickly with n and the benefits are modest for realistic values of the signal-to-interference ratio (SIR). We extend the statistical model to include several association policies where distinct gains of SIC are expected: (i) minimum load association, (ii) maxi- mum instantaneous SIR association, and (iii) range expansion. Numerical results show the effectiveness of SIC for the considered association policies. This work deepens the understanding of SIC by defining the achievable gains for different association policies in multi-tier heterogeneous networks.", "Most cellular radio systems provide for the use of transmitter power control to reduce cochannel interference for a given channel allocation. Efficient interference management aims at achieving acceptable carrier-to-interference ratios in all active communication links in the system. Such schemes for the control of cochannel interference are investigated. The effect of adjacent channel interference is neglected. As a performance measure, the interference (outage) probability is used, i.e., the probability that a randomly chosen link is subject to excessive interference. In order to derive upper performance bounds for transmitter power control schemes, algorithms that are optimum in the sense that the interference probability is minimized are suggested. Numerical results indicate that these upper bounds exceed the performance of conventional systems by an order of magnitude regarding interference suppression and by a factor of 3 to 4 regarding the system capacity. The structure of the optimum algorithm shows that efficient power control and dynamic channel assignment algorithms are closely related. >" ] }
1708.03020
2963492344
We consider a non-stationary sequential stochastic optimization problem, in which the underlying cost functions change over time under a variation budget constraint. We propose an @math -variation functional to quantify the change, which yields less variation for dynamic function sequences whose changes are constrained to short time periods or small subsets of input domain. Under the @math -variation constraint, we derive both upper and matching lower regret bounds for smooth and strongly convex function sequences, which generalize previous results in (2015). Furthermore, we provide an upper bound for general convex function sequences with noisy gradient feedback, which matches the optimal rate as @math . Our results reveal some surprising phenomena under this general variation functional, such as the curse of dimensionality of the function domain. The key technical novelties in our analysis include affinity lemmas that characterize the distance of the minimizers of two convex functions with bounded Lp difference, and a cubic spline based construction that attains matching lower bounds.
Bandit convex optimization is a combination of stochastic optimization and online convex optimization, where the stationary benchmark in hindsight of a sequence of arbitrary convex functions @math is used to evaluate regrets. At each time @math , only the function evaluation at the queried point @math (or its noisy version) is revealed to the learning algorithm. Despite their similarity to stochastic and online convex optimization, convex bandits are considerably harder due to the lack of first-order information and the arbitrary change of functions. @cite_4 proposed a novel finite-difference gradient estimator, which was adapted by @cite_6 to an ellipsoidal gradient estimator that achieves @math regret for constrained smooth and strongly convex bandit problems. For the non-smooth and non-strongly convex bandit problem, the recent work of @cite_10 attains @math regret with an explicit algorithm whose regret and running time both depend polynomially on dimension @math .
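As a hedged, minimal sketch of the kind of randomized finite-difference gradient estimator discussed above (an illustrative two-point spherical estimator, not the specific construction of any cited paper; all names are hypothetical):

```python
import numpy as np

def two_point_gradient_estimate(f, x, delta=1e-2, rng=None):
    """Estimate grad f(x) from only two zeroth-order (bandit) queries.

    f     : callable returning a (possibly noisy) scalar cost
    x     : query point, shape (d,)
    delta : exploration radius / smoothing parameter
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # random direction, uniform on the unit sphere
    # Two-point finite difference along u; in expectation this approximates
    # the gradient of a delta-smoothed version of f.
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

# Toy usage: bandit-style gradient descent on a simple quadratic loss.
if __name__ == "__main__":
    f = lambda x: float(np.sum((x - 1.0) ** 2))
    x = np.zeros(3)
    for t in range(1, 501):
        x -= (0.5 / np.sqrt(t)) * two_point_gradient_estimate(f, x)
    print(x)  # should approach the minimizer (1, 1, 1)
```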
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_6" ], "mid": [ "153281708", "2473549844", "2284345772" ], "abstract": [ "Bandit convex optimization is a special case of online convex optimization with partial information. In this setting, a player attempts to minimize a sequence of adversarially generated convex loss functions, while only observing the value of each function at a single point. In some cases, the minimax regret of these problems is known to be strictly worse than the minimax regret in the corresponding full information setting. We introduce the multi-point bandit setting, in which the player can query each loss function at multiple points. When the player is allowed to query each function at two points, we prove regret bounds that closely resemble bounds for the full information case. This suggests that knowing the value of each loss function at two points is almost as useful as knowing the value of each function everywhere. When the player is allowed to query each function at d+1 points (d being the dimension of the space), we prove regret bounds that are exactly equivalent to full information bounds for smooth functions.", "We consider the adversarial convex bandit problem and we build the first @math -time algorithm with @math -regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves @math -regret, and we show that a simple variant of this algorithm can be run in @math -time per step at the cost of an additional @math factor in the regret. These results improve upon the @math -regret and @math -time result of the first two authors, and the @math -regret and @math -time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve @math -regret, and moreover that this regret is unimprovable (the current best lower bound being @math and it is achieved with linear functions). For the simpler situation of zeroth order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order @math .", "The study of online convex optimization in the bandit setting was initiated by Kleinberg (2004) and (2005). Such a setting models a decision maker that has to make decisions in the face of adversarially chosen convex loss functions. Moreover, the only information the decision maker receives are the losses. The identities of the loss functions themselves are not revealed. In this setting, we reduce the gap between the best known lower and upper bounds for the class of smooth convex functions, i.e. convex functions with a Lipschitz continuous gradient. Building upon existing work on selfconcordant regularizers and one-point gradient estimation, we give the first algorithm whose expected regret is O(T ), ignoring constant and logarithmic factors." ] }
1906.00097
2947328144
As deep learning applications continue to become more diverse, an interesting question arises: Can general problem solving arise from jointly learning several such diverse tasks? To approach this question, deep multi-task learning is extended in this paper to the setting where there is no obvious overlap between task architectures. The idea is that any set of (architecture,task) pairs can be decomposed into a set of potentially related subproblems, whose sharing is optimized by an efficient stochastic algorithm. The approach is first validated in a classic synthetic multi-task learning benchmark, and then applied to sharing across disparate architectures for vision, NLP, and genomics tasks. It discovers regularities across these domains, encodes them into sharable modules, and combines these modules systematically to improve performance in the individual tasks. The results confirm that sharing learned functionality across diverse domains and architectures is indeed beneficial, thus establishing a key ingredient for general problem solving in the future.
Parameters in two architectures are aligned if they have some learnable tensor in common. An alignment across architectures indicates how tasks are related, and how strongly. The goal of DMTL is to improve performance across tasks through joint training of aligned architectures, exploiting inter-task regularities. In recent years, DMTL has been applied within areas such as vision @cite_62 @cite_2 @cite_52 @cite_5 @cite_45 @cite_26 , natural language @cite_46 @cite_36 @cite_60 @cite_67 @cite_0 , speech @cite_43 @cite_65 @cite_63 , and reinforcement learning @cite_61 @cite_23 @cite_25 . The rest of this section reviews existing DMTL methods, showing that none of these methods satisfy both conditions (1) and (2).
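As a hedged, minimal sketch of what such alignment can look like in practice (illustrative PyTorch-style code with hypothetical module names and sizes, not the implementation of any cited work), two task networks with otherwise different architectures are aligned at a layer when they hold the same learnable tensor:

```python
import torch.nn as nn

# One learnable module whose weight tensor will be held in common.
shared_layer = nn.Linear(64, 64)

class VisionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
        self.shared = shared_layer            # same tensor object as in TextNet
        self.head = nn.Linear(64, 10)

    def forward(self, x):
        return self.head(self.shared(self.encoder(x)))

class TextNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(300, 64), nn.Tanh())
        self.shared = shared_layer            # alignment: a learnable tensor in common
        self.head = nn.Linear(64, 5)

    def forward(self, x):
        return self.head(self.shared(self.encoder(x)))

# Gradients from both tasks accumulate in the single shared weight tensor.
assert VisionNet().shared.weight is TextNet().shared.weight
```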
{ "cite_N": [ "@cite_61", "@cite_67", "@cite_62", "@cite_26", "@cite_60", "@cite_36", "@cite_65", "@cite_52", "@cite_0", "@cite_43", "@cite_45", "@cite_63", "@cite_2", "@cite_23", "@cite_5", "@cite_46", "@cite_25" ], "mid": [ "2900964459", "2966182616", "2891303672", "2737590896", "2963704251", "2884886306", "2006462346", "2896060389", "2479919622", "2583200575", "2410931070", "2118099552", "2121495183", "2734075827", "2174358748", "2766910785", "2186054958" ], "abstract": [ "Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)--(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15 average error reductions over common approaches to MTL.", "Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)–(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15 average error reductions over common approaches to MTL.", "Multi-Task Learning (MTL) is appealing for deep learning regularization. In this paper, we tackle a specific MTL context denoted as primary MTL, where the ultimate goal is to improve the performance of a given primary task by leveraging several other auxiliary tasks. Our main methodological contribution is to introduce ROCK, a new generic multi-modal fusion block for deep learning tailored to the primary MTL context. ROCK architecture is based on a residual connection, which makes forward prediction explicitly impacted by the intermediate auxiliary representations. The auxiliary predictor's architecture is also specifically designed to our primary MTL context, by incorporating intensive pooling operators for maximizing complementarity of intermediate representations. Extensive experiments on NYUv2 dataset (object detection with scene classification, depth prediction, and surface normal estimation as auxiliary tasks) validate the relevance of the approach and its superiority to flat MTL approaches. 
Our method outperforms state-of-the-art object detection models on NYUv2 by a large margin, and is also able to handle large-scale heterogeneous inputs (real and synthetic images) and missing annotation modalities.", "This paper proposes a data-driven approach for image alignment. Our main contribution is a novel network architecture that combines the strengths of convolutional neural networks (CNNs) and the Lucas-Kanade algorithm. The main component of this architecture is a Lucas-Kanade layer that performs the inverse compositional algorithm on convolutional feature maps. To train our network, we develop a cascaded feature learning method that incorporates the coarse-to-fine strategy into the training process. This method learns a pyramid representation of convolutional features in a cascaded manner and yields a cascaded network that performs coarse-to-fine alignment on the feature pyramids. We apply our model to the task of homography estimation, and perform training and evaluation on a large labeled dataset generated from the MS-COCO dataset. Experimental results show that the proposed approach significantly outperforms the other methods.", "Existing deep multitask learning (MTL) approaches align layers shared between tasks in a parallel ordering. Such an organization significantly constricts the types of shared structure that can be learned. The necessity of parallel ordering for deep MTL is first tested by comparing it with permuted ordering of shared layers. The results indicate that a flexible ordering can enable more effective sharing, thus motivating the development of a soft ordering approach, which learns how shared layers are applied in different ways for different tasks. Deep MTL with soft ordering outperforms parallel ordering methods across a series of domains. These results suggest that the power of deep MTL comes from learning highly general building blocks that can be assembled to meet the demands of each task.", "Multi-task learning has been widely adopted in many computer vision tasks to improve overall computation efficiency or boost the performance of individual tasks, under the assumption that those tasks are correlated and complementary to each other. However, the relationships between the tasks are complicated in practice, especially when the number of involved tasks scales up. When two tasks are of weak relevance, they may compete or even distract each other during joint training of shared parameters, and as a consequence undermine the learning of all the tasks. This will raise destructive interference which decreases learning efficiency of shared parameters and lead to low quality loss local optimum w.r.t. shared parameters. To address the this problem, we propose a general modulation module, which can be inserted into any convolutional neural network architecture, to encourage the coupling and feature sharing of relevant tasks while disentangling the learning of irrelevant tasks with minor parameters addition. Equipped with this module, gradient directions from different tasks can be enforced to be consistent for those shared parameters, which benefits multi-task joint training. The module is end-to-end learnable without ad-hoc design for specific tasks, and can naturally handle many tasks at the same time. 
We apply our approach on two retrieval tasks, face retrieval on the CelebA dataset [12] and product retrieval on the UT-Zappos50K dataset [34, 35], and demonstrate its advantage over other multi-task learning methods in both accuracy and storage efficiency.", "As one of the key technologies for future wireless communication systems, user cooperation has a great potential in improving the system performance. In this paper, the diversity benefit of user cooperation is evaluated using diversity-multiplexing tradeoff (DMT) as the performance metric. Ways of exploiting the diversity benefit are devised. We investigate two typical channel models, namely, the multiple-access channel (MAC) and the interference channel (IFC). For the MAC, all the terminals are equipped with multiple antennas and the K(K ≥ 2) sources cooperate in a full-duplex mode. We determine the optimal DMT of the channel and propose a compressed-and-forward (CF) based cooperation scheme to achieve the optimal DMT. For the IFC, we consider a two-user channel with single-antenna terminals and full-duplex source cooperation. A DMT upper bound and two DMT lower bounds are derived. The upper bound is derived based on a channel capacity upper bound, and the lower bounds are derived using decode-and-forward (DF) based cooperation schemes. In most channel scenarios, the bounds are found to be tight. This implies that under certain channel scenarios, the optimal DMT of the channel can be achieved by DF-based cooperation schemes.", "The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then out-performed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English to French and English to German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.", "In this paper we aim to determine the location and orientation of a ground-level query image by matching to a reference database of overhead (e.g. satellite) images. For this task we collect a new dataset with one million pairs of street view and overhead images sampled from eleven U.S. cities. We explore several deep CNN architectures for cross-domain matching – Classification, Hybrid, Siamese, and Triplet networks. Classification and Hybrid architectures are accurate but slow since they allow only partial feature precomputation. We propose a new loss function which significantly improves the accuracy of Siamese and Triplet embedding networks while maintaining their applicability to large-scale retrieval tasks like image geolocalization. This image matching task is challenging not just because of the dramatic viewpoint difference between ground-level and overhead imagery but because the orientation (i.e. 
azimuth) of the street views is unknown making correspondence even more difficult. We examine several mechanisms to match in spite of this – training for rotation invariance, sampling possible rotations at query time, and explicitly predicting relative rotation of ground and overhead images with our deep networks. It turns out that explicit orientation supervision also improves location prediction accuracy. Our best performing architectures are roughly 2.5 times as accurate as the commonly used Siamese network baseline.", "Many real-world problems, such as web image analysis, document categorization and product recommendation, often exhibit dual-heterogeneity: heterogeneous features obtained in multiple views, and multiple tasks might be related to each other through one or more shared views. To address these Multi-Task Multi-View (MTMV) problems, we propose a tensor-based framework for learning the predictive multilinear structure from the full-order feature interactions within the heterogeneous data. The usage of tensor structure is to strengthen and capture the complex relationships between multiple tasks with multiple views. We further develop efficient multilinear factorization machines (MFMs) that can learn the task-specific feature map and the task-view shared multilinear structures, without physically building the tensor. In the proposed method, a joint factorization is applied to the full-order interactions such that the consensus representation can be learned. In this manner, it can deal with the partially incomplete data without difficulty as the learning procedure does not simply rely on any particular view. Furthermore, the complexity of MFMs is linear in the number of parameters, which makes MFMs suitable to large-scale real-world problems. Extensive experiments on four real-world datasets demonstrate that the proposed method significantly outperforms several state-of-the-art methods in a wide variety of MTMV problems.", "A new deep multitask learning based metric learning method is proposed. An appropriate architecture is proposed for offline signature verification. The knowledge of other signers' samples is transferred for better accuracy. The proposed method outperforms SVM and Discriminative Deep Metric Learning. It outperforms the accuracy of other published reports. This paper presents a novel classification method, Deep Multitask Metric Learning (DMML), for offline signature verification. Unlike existing methods that, to verify questioned signatures of an individual, merely consider the training samples of that class, DMML uses the knowledge from the similarities and dissimilarities between the genuine and forged samples of other classes too. To this end, using the idea of multitask and transfer learning, DMML trains a distance metric for each class together with other classes simultaneously. DMML has a structure with a shared layer acting as a writer-independent approach, that is followed by separated layers which learn writer-dependent factors. We compare the proposed method against SVM, writer-dependent and writer-independent Discriminative Deep Metric Learning method on four offline signature datasets (UTSig, MCYT-75, GPDSsynthetic, and GPDS960GraySignatures) using Histogram of Oriented Gradients (HOG) and Discrete Radon Transform (DRT) features. 
Results of our experiments show that DMML achieves better performance compared to other methods in verifying genuine signatures, skilled and random forgeries.", "Multi-task learning (MTL) learns multiple related tasks simultaneously to improve generalization performance. Alternating structure optimization (ASO) is a popular MTL method that learns a shared low-dimensional predictive structure on hypothesis spaces from multiple related tasks. It has been applied successfully in many real world applications. As an alternative MTL approach, clustered multi-task learning (CMTL) assumes that multiple tasks follow a clustered structure, i.e., tasks are partitioned into a set of groups where tasks in the same group are similar to each other, and that such a clustered structure is unknown a priori. The objectives in ASO and CMTL differ in how multiple tasks are related. Interestingly, we show in this paper the equivalence relationship between ASO and CMTL, providing significant new insights into ASO and CMTL as well as their inherent relationship. The CMTL formulation is non-convex, and we adopt a convex relaxation to the CMTL formulation. We further establish the equivalence relationship between the proposed convex relaxation of CMTL and an existing convex relaxation of ASO, and show that the proposed convex CMTL formulation is significantly more efficient especially for high-dimensional data. In addition, we present three algorithms for solving the convex CMTL formulation. We report experimental results on benchmark datasets to demonstrate the efficiency of the proposed algorithms.", "The alignment problem---establishing links between corresponding phrases in two related sentences---is as important in natural language inference (NLI) as it is in machine translation (MT). But the tools and techniques of MT alignment do not readily transfer to NLI, where one cannot assume semantic equivalence, and for which large volumes of bitext are lacking. We present a new NLI aligner, the MANLI system, designed to address these challenges. It uses a phrase-based alignment representation, exploits external lexical resources, and capitalizes on a new set of supervised training data. We compare the performance of MANLI to existing NLI and MT aligners on an NLI alignment task over the well-known Recognizing Textual Entailment data. We show that MANLI significantly outperforms existing aligners, achieving gains of 6.2 in F1 over a representative NLI aligner and 10.5 over GIZA++.", "Recently, several methods based on generative adversarial network (GAN) have been proposed for the task of aligning cross-domain images or learning a joint distribution of cross-domain images. One of the methods is to use conditional GAN for alignment. However, previous attempts of adopting conditional GAN do not perform as well as other methods. In this work we present an approach for improving the capability of the methods which are based on conditional GAN. We evaluate the proposed method on numerous tasks and the experimental results show that it is able to align the cross-domain images successfully in absence of paired samples. Furthermore, we also propose another model which conditions on multiple information such as domain information and label information. Conditioning on domain information and label information, we are able to conduct label propagation from the source domain to the target domain. 
A 2-step alternating training algorithm is proposed to learn this model.", "Deep networks are increasingly being applied to problems involving image synthesis, e.g., generating images from textual descriptions and reconstructing an input image from a compact representation. Supervised training of image-synthesis networks typically uses a pixel-wise loss (PL) to indicate the mismatch between a generated image and its corresponding target image. We propose instead to use a loss function that is better calibrated to human perceptual judgments of image quality: the multiscale structural-similarity score (MS-SSIM). Because MS-SSIM is differentiable, it is easily incorporated into gradient-descent learning. We compare the consequences of using MS-SSIM versus PL loss on training deterministic and stochastic autoencoders. For three different architectures, we collected human judgments of the quality of image reconstructions. Observers reliably prefer images synthesized by MS-SSIM-optimized models over those synthesized by PL-optimized models, for two distinct PL measures ( @math and @math distances). We also explore the effect of training objective on image encoding and analyze conditions under which perceptually-optimized representations yield better performance on image classification. Finally, we demonstrate the superiority of perceptually-optimized networks for super-resolution imaging. Just as computer vision has advanced through the use of convolutional architectures that mimic the structure of the mammalian visual system, we argue that significant additional advances can be made in modeling images through the use of training objectives that are well aligned to characteristics of human perception.", "It is known that the inconsistent distribution and representation of different modalities, such as image and text, cause the heterogeneity gap that makes it challenging to correlate such heterogeneous data. Generative adversarial networks (GANs) have shown its strong ability of modeling data distribution and learning discriminative representation, existing GANs-based works mainly focus on generative problem to generate new data. We have different goal, aim to correlate heterogeneous data, by utilizing the power of GANs to model cross-modal joint distribution. Thus, we propose Cross-modal GANs to learn discriminative common representation for bridging heterogeneity gap. The main contributions are: (1) Cross-modal GANs architecture is proposed to model joint distribution over data of different modalities. The inter-modality and intra-modality correlation can be explored simultaneously in generative and discriminative models. Both of them beat each other to promote cross-modal correlation learning. (2) Cross-modal convolutional autoencoders with weight-sharing constraint are proposed to form generative model. They can not only exploit cross-modal correlation for learning common representation, but also preserve reconstruction information for capturing semantic consistency within each modality. (3) Cross-modal adversarial mechanism is proposed, which utilizes two kinds of discriminative models to simultaneously conduct intra-modality and inter-modality discrimination. They can mutually boost to make common representation more discriminative by adversarial training process. To the best of our knowledge, our proposed CM-GANs approach is the first to utilize GANs to perform cross-modal common representation learning. 
Experiments are conducted to verify the performance of our proposed approach on cross-modal retrieval paradigm, compared with 10 methods on 3 cross-modal datasets.", "In multi-task learning (MTL), multiple tasks are learnt jointly. A major assumption for this paradigm is that all those tasks are indeed related so that the joint training is appropriate and beneficial. In this paper, we study the problem of multi-task learning of shared feature representations among tasks, while simultaneously determining \"with whom\" each task should share. We formulate the problem as a mixed integer programming and provide an alternating minimization technique to solve the optimization problem of jointly identifying grouping structures and parameters. The algorithm mono-tonically decreases the objective function and converges to a local optimum. Compared to the standard MTL paradigm where all tasks are in a single group, our algorithm improves its performance with statistical significance for three out of the four datasets we have studied. We also demonstrate its advantage over other task grouping techniques investigated in literature." ] }
1906.00097
2947328144
As deep learning applications continue to become more diverse, an interesting question arises: Can general problem solving arise from jointly learning several such diverse tasks? To approach this question, deep multi-task learning is extended in this paper to the setting where there is no obvious overlap between task architectures. The idea is that any set of (architecture,task) pairs can be decomposed into a set of potentially related subproblems, whose sharing is optimized by an efficient stochastic algorithm. The approach is first validated in a classic synthetic multi-task learning benchmark, and then applied to sharing across disparate architectures for vision, NLP, and genomics tasks. It discovers regularities across these domains, encodes them into sharable modules, and combines these modules systematically to improve performance in the individual tasks. The results confirm that sharing learned functionality across diverse domains and architectures is indeed beneficial, thus establishing a key ingredient for general problem solving in the future.
The classical approach to DMTL considers a joint model across tasks in which some aligned layers are shared completely across tasks, and the remaining layers remain task-specific @cite_1 . In practice, the most common approach is to share all layers except for the final classification layers @cite_61 @cite_22 @cite_50 @cite_43 @cite_39 @cite_13 @cite_47 @cite_3 @cite_24 . A more flexible approach is to not share parameters exactly across shared layers, but to factorize layer parameters into shared and task-specific factors @cite_31 @cite_14 @cite_27 @cite_54 @cite_7 @cite_45 . Such approaches work for any set of architectures that have a known set of aligned layers. However, these methods only apply when such alignment is known. That is, they do not meet condition (2).
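A hedged, minimal sketch of the most common setup described above (share everything except the final classification layers), assuming two hypothetical classification tasks and illustrative layer sizes (PyTorch-style, not a specific cited implementation):

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Classic deep MTL baseline: a fully shared trunk plus one head per task."""

    def __init__(self, in_dim=128, hidden=256, task_classes=(10, 3)):
        super().__init__()
        self.trunk = nn.Sequential(                  # shared completely across tasks
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(                  # task-specific output layers
            [nn.Linear(hidden, c) for c in task_classes]
        )

    def forward(self, x, task_id):
        return self.heads[task_id](self.trunk(x))

# Joint training step: losses from both tasks back-propagate into the shared trunk.
model = HardSharingMTL()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x0, y0 = torch.randn(32, 128), torch.randint(0, 10, (32,))
x1, y1 = torch.randn(32, 128), torch.randint(0, 3, (32,))
loss = nn.functional.cross_entropy(model(x0, 0), y0) \
     + nn.functional.cross_entropy(model(x1, 1), y1)
opt.zero_grad(); loss.backward(); opt.step()
```

A factorized variant of the same idea would instead express each aligned layer's weights as a combination of shared and task-specific factors rather than sharing the trunk verbatim.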
{ "cite_N": [ "@cite_61", "@cite_31", "@cite_14", "@cite_22", "@cite_7", "@cite_54", "@cite_1", "@cite_3", "@cite_39", "@cite_24", "@cite_43", "@cite_27", "@cite_45", "@cite_50", "@cite_47", "@cite_13" ], "mid": [ "2963704251", "2407277018", "2900964459", "2966182616", "2186054958", "2131479143", "2583200575", "2491457278", "2153086202", "2737590896", "2949201716", "2885775891", "2099857870", "2032612424", "1942758450", "2006462346" ], "abstract": [ "Existing deep multitask learning (MTL) approaches align layers shared between tasks in a parallel ordering. Such an organization significantly constricts the types of shared structure that can be learned. The necessity of parallel ordering for deep MTL is first tested by comparing it with permuted ordering of shared layers. The results indicate that a flexible ordering can enable more effective sharing, thus motivating the development of a soft ordering approach, which learns how shared layers are applied in different ways for different tasks. Deep MTL with soft ordering outperforms parallel ordering methods across a series of domains. These results suggest that the power of deep MTL comes from learning highly general building blocks that can be assembled to meet the demands of each task.", "Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices.", "Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)--(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15 average error reductions over common approaches to MTL.", "Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. 
In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)–(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15 average error reductions over common approaches to MTL.", "In multi-task learning (MTL), multiple tasks are learnt jointly. A major assumption for this paradigm is that all those tasks are indeed related so that the joint training is appropriate and beneficial. In this paper, we study the problem of multi-task learning of shared feature representations among tasks, while simultaneously determining \"with whom\" each task should share. We formulate the problem as a mixed integer programming and provide an alternating minimization technique to solve the optimization problem of jointly identifying grouping structures and parameters. The algorithm mono-tonically decreases the objective function and converges to a local optimum. Compared to the standard MTL paradigm where all tasks are in a single group, our algorithm improves its performance with statistical significance for three out of the four datasets we have studied. We also demonstrate its advantage over other task grouping techniques investigated in literature.", "Consider the problem of learning logistic-regression models for multiple classification tasks, where the training data set for each task is not drawn from the same statistical distribution. In such a multi-task learning (MTL) scenario, it is necessary to identify groups of similar tasks that should be learned jointly. Relying on a Dirichlet process (DP) based statistical model to learn the extent of similarity between classification tasks, we develop computationally efficient algorithms for two different forms of the MTL problem. First, we consider a symmetric multi-task learning (SMTL) situation in which classifiers for multiple tasks are learned jointly using a variational Bayesian (VB) algorithm. Second, we consider an asymmetric multi-task learning (AMTL) formulation in which the posterior density function from the SMTL model parameters (from previous tasks) is used as a prior for a new task: this approach has the significant advantage of not requiring storage and use of all previous data from prior tasks. The AMTL formulation is solved with a simple Markov Chain Monte Carlo (MCMC) construction. Experimental results on two real life MTL problems indicate that the proposed algorithms: (a) automatically identify subgroups of related tasks whose training data appear to be drawn from similar distributions; and (b) are more accurate than simpler approaches such as single-task learning, pooling of data across all tasks, and simplified approximations to DP.", "Many real-world problems, such as web image analysis, document categorization and product recommendation, often exhibit dual-heterogeneity: heterogeneous features obtained in multiple views, and multiple tasks might be related to each other through one or more shared views. To address these Multi-Task Multi-View (MTMV) problems, we propose a tensor-based framework for learning the predictive multilinear structure from the full-order feature interactions within the heterogeneous data. The usage of tensor structure is to strengthen and capture the complex relationships between multiple tasks with multiple views. 
We further develop efficient multilinear factorization machines (MFMs) that can learn the task-specific feature map and the task-view shared multilinear structures, without physically building the tensor. In the proposed method, a joint factorization is applied to the full-order interactions such that the consensus representation can be learned. In this manner, it can deal with the partially incomplete data without difficulty as the learning procedure does not simply rely on any particular view. Furthermore, the complexity of MFMs is linear in the number of parameters, which makes MFMs suitable to large-scale real-world problems. Extensive experiments on four real-world datasets demonstrate that the proposed method significantly outperforms several state-of-the-art methods in a wide variety of MTMV problems.", "This paper presents a new approach for facial attribute classification using a multi-task learning approach. Unlike other approaches that uses hand engineered features, our model learns a shared feature representation that is wellsuited for multiple attribute classification. Learning a joint feature representation enables interaction between different tasks. For learning this shared feature representation we use a Restricted Boltzmann Machine (RBM) based model, enhanced with a factored multi-task component to become Multi-Task Restricted Boltzmann Machine (MT-RBM). Our approach operates directly on faces and facial landmark points to learn a joint feature representation over all the available attributes. We use an iterative learning approach consisting of a bottom-up top-down pass to learn the shared representation of our multi-task model and at inference we use a bottom-up pass to predict the different tasks. Our approach is not restricted to any type of attributes, however, for this paper we focus only on facial attributes. We evaluate our approach on three publicly available datasets, the Celebrity Faces (CelebA), the Multi-task Facial Landmarks (MTFL), and the ChaLearn challenge dataset. We show superior classification performance improvement over the state-of-the-art.", "When we have several related tasks, solving them simultaneously is shown to be more effective than solving them individually. This approach is called multi-task learning (MTL) and has been studied extensively. Existing approaches to MTL often treat all the tasks as uniformly related to each other and the relatedness of the tasks is controlled globally. For this reason, the existing methods can lead to undesired solutions when some tasks are not highly related to each other, and some pairs of related tasks can have significantly different solutions. In this paper, we propose a novel MTL algorithm that can overcome these problems. Our method makes use of a task network, which describes the relation structure among tasks. This allows us to deal with intricate relation structures in a systematic way. Furthermore, we control the relatedness of the tasks locally, so all pairs of related tasks are guaranteed to have similar solutions. We apply the above idea to support vector machines (SVMs) and show that the optimization problem can be cast as a second order cone program, which is convex and can be solved efficiently. The usefulness of our approach is demonstrated through simulations with protein super-family classification and ordinal regression problems.", "This paper proposes a data-driven approach for image alignment. 
Our main contribution is a novel network architecture that combines the strengths of convolutional neural networks (CNNs) and the Lucas-Kanade algorithm. The main component of this architecture is a Lucas-Kanade layer that performs the inverse compositional algorithm on convolutional feature maps. To train our network, we develop a cascaded feature learning method that incorporates the coarse-to-fine strategy into the training process. This method learns a pyramid representation of convolutional features in a cascaded manner and yields a cascaded network that performs coarse-to-fine alignment on the feature pyramids. We apply our model to the task of homography estimation, and perform training and evaluation on a large labeled dataset generated from the MS-COCO dataset. Experimental results show that the proposed approach significantly outperforms the other methods.", "In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learning that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis tasks. The coefficients of the linear combination are sparse in nature and the overlap in the sparsity patterns of two tasks controls the amount of sharing across these. Our model is based on the assumption that task parameters within a group lie in a low dimensional subspace but allows the tasks in different groups to overlap with each other in one or more bases. Experimental results on four datasets show that our approach outperforms competing methods.", "We present an approach named JSFusion (Joint Sequence Fusion) that can measure semantic similarity between any pairs of multimodal sequence data (e.g. a video clip and a language sentence). Our multimodal matching network consists of two key components. First, the Joint Semantic Tensor composes a dense pairwise representation of two sequence data into a 3D tensor. Then, the Convolutional Hierarchical Decoder computes their similarity score by discovering hidden hierarchical matches between the two sequence modalities. Both modules leverage hierarchical attention mechanisms that learn to promote well-matched representation patterns while prune out misaligned ones in a bottom-up manner. Although the JSFusion is a universal model to be applicable to any multimodal sequence data, this work focuses on video-language tasks including multimodal retrieval and video QA. We evaluate the JSFusion model in three retrieval and VQA tasks in LSMDC, for which our model achieves the best performance reported so far. We also perform multiple-choice and movie retrieval tasks for the MSR-VTT dataset, on which our approach outperforms many state-of-the-art methods.", "We consider a general multiple-antenna network with multiple sources, multiple destinations, and multiple relays in terms of the diversity-multiplexing tradeoff (DMT). We examine several subcases of this most general problem taking into account the processing capability of the relays (half-duplex or full-duplex), and the network geometry (clustered or nonclustered). We first study the multiple-antenna relay channel with a full-duplex relay to understand the effect of increased degrees of freedom in the direct link. We find DMT upper bounds and investigate the achievable performance of decode-and-forward (DF), and compress-and-forward (CF) protocols. 
Our results suggest that while DF is DMT optimal when all terminals have one antenna each, it may not maintain its good performance when the degrees of freedom in the direct link are increased, whereas CF continues to perform optimally. We also study the multiple-antenna relay channel with a half-duplex relay. We show that the half-duplex DMT behavior can significantly be different from the full-duplex case. We find that CF is DMT optimal for half-duplex relaying as well, and is the first protocol known to achieve the half-duplex relay DMT. We next study the multiple-access relay channel (MARC) DMT. Finally, we investigate a system with a single source-destination pair and multiple relays, each node with a single antenna, and show that even under the ideal assumption of full-duplex relays and a clustered network, this virtual multiple-input multiple-output (MIMO) system can never fully mimic a real MIMO DMT. For cooperative systems with multiple sources and multiple destinations the same limitation remains in effect.", "Multi-task learning (MTL) aims to improve the performance of multiple related tasks by exploiting the intrinsic relationships among them. Recently, multi-task feature learning algorithms have received increasing attention and they have been successfully applied to many applications involving high dimensional data. However, they assume that all tasks share a common set of features, which is too restrictive and may not hold in real-world applications, since outlier tasks often exist. In this paper, we propose a Robust Multi-Task Feature Learning algorithm (rMTFL) which simultaneously captures a common set of features among relevant tasks and identifies outlier tasks. Specifically, we decompose the weight (model) matrix for all tasks into two components. We impose the well-known group Lasso penalty on row groups of the first component for capturing the shared features among relevant tasks. To simultaneously identify the outlier tasks, we impose the same group Lasso penalty but on column groups of the second component. We propose to employ the accelerated gradient descent to efficiently solve the optimization problem in rMTFL, and show that the proposed algorithm is scalable to large-size problems. In addition, we provide a detailed theoretical analysis on the proposed rMTFL formulation. Specifically, we present a theoretical bound to measure how well our proposed rMTFL approximates the true evaluation, and provide bounds to measure the error between the estimated weights of rMTFL and the underlying true weights. Moreover, by assuming that the underlying true weights are above the noise level, we present a sound theoretical result to show how to obtain the underlying true shared features and outlier tasks (sparsity patterns). Empirical studies on both synthetic and real-world data demonstrate that our proposed rMTFL is capable of simultaneously capturing shared features among tasks and identifying outlier tasks.", "In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learning that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis tasks. The coefficients of the linear combination are sparse in nature and the overlap in the sparsity patterns of two tasks controls the amount of sharing across these. 
Our model is based on the assumption that task parameters within a group lie in a low dimensional subspace but allows the tasks in different groups to overlap with each other in one or more bases. Experimental results on four datasets show that our approach outperforms competing methods.", "As one of the key technologies for future wireless communication systems, user cooperation has a great potential in improving the system performance. In this paper, the diversity benefit of user cooperation is evaluated using diversity-multiplexing tradeoff (DMT) as the performance metric. Ways of exploiting the diversity benefit are devised. We investigate two typical channel models, namely, the multiple-access channel (MAC) and the interference channel (IFC). For the MAC, all the terminals are equipped with multiple antennas and the K(K ≥ 2) sources cooperate in a full-duplex mode. We determine the optimal DMT of the channel and propose a compressed-and-forward (CF) based cooperation scheme to achieve the optimal DMT. For the IFC, we consider a two-user channel with single-antenna terminals and full-duplex source cooperation. A DMT upper bound and two DMT lower bounds are derived. The upper bound is derived based on a channel capacity upper bound, and the lower bounds are derived using decode-and-forward (DF) based cooperation schemes. In most channel scenarios, the bounds are found to be tight. This implies that under certain channel scenarios, the optimal DMT of the channel can be achieved by DF-based cooperation schemes." ] }
1906.00097
2947328144
As deep learning applications continue to become more diverse, an interesting question arises: Can general problem solving arise from jointly learning several such diverse tasks? To approach this question, deep multi-task learning is extended in this paper to the setting where there is no obvious overlap between task architectures. The idea is that any set of (architecture,task) pairs can be decomposed into a set of potentially related subproblems, whose sharing is optimized by an efficient stochastic algorithm. The approach is first validated in a classic synthetic multi-task learning benchmark, and then applied to sharing across disparate architectures for vision, NLP, and genomics tasks. It discovers regularities across these domains, encodes them into sharable modules, and combines these modules systematically to improve performance in the individual tasks. The results confirm that sharing learned functionality across diverse domains and architectures is indeed beneficial, thus establishing a key ingredient for general problem solving in the future.
One approach to overcome the alignment problem is to design an entirely new architecture that integrates information from different tasks and is maximally shared across tasks @cite_62 @cite_21 @cite_53 . Such an approach can even be used to share knowledge across disparate modalities @cite_53 . However, by disregarding task-specific architectures, this approach does not meet condition (1). Related approaches attempt to learn how to assemble a set of shared modules in different ways to solve different tasks, whether by gradient descent @cite_16 , reinforcement learning @cite_28 , or evolutionary architecture search @cite_8 . These methods also construct new architectures, so they do not meet condition (1); however, they have shown that including a small number of location-specific parameters is crucial to sharing functionality across diverse locations.
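A hedged sketch of the module-assembly idea mentioned above (a pool of shared modules combined per task by a small number of task-specific mixture weights learned by gradient descent); this is an illustrative soft-combination scheme with hypothetical names, not a reproduction of any one cited method:

```python
import torch
import torch.nn as nn

class SoftModuleAssembly(nn.Module):
    """Shared modules, recombined per task through learned mixture weights."""

    def __init__(self, n_modules=4, width=64, n_tasks=3, depth=2):
        super().__init__()
        self.module_pool = nn.ModuleList(
            [nn.Sequential(nn.Linear(width, width), nn.ReLU())
             for _ in range(n_modules)]
        )
        # The only task-specific parameters: one logit per (task, depth level, module).
        self.mix = nn.Parameter(torch.zeros(n_tasks, depth, n_modules))
        self.depth = depth

    def forward(self, x, task_id):
        for level in range(self.depth):
            w = torch.softmax(self.mix[task_id, level], dim=0)
            # Soft combination of every shared module's output at this level.
            x = sum(w[m] * self.module_pool[m](x)
                    for m in range(len(self.module_pool)))
        return x

# Usage: the same shared modules serve different tasks through different mixtures.
features = SoftModuleAssembly()(torch.randn(8, 64), task_id=1)
```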
{ "cite_N": [ "@cite_62", "@cite_8", "@cite_28", "@cite_53", "@cite_21", "@cite_16" ], "mid": [ "2122308921", "2068206659", "2951104886", "2479919622", "2114113040", "2737590896" ], "abstract": [ "An architecture is described for designing systems that acquire and ma nipulate large amounts of unsystematized, or so-called commonsense, knowledge. Its aim is to exploit to the full those aspects of computational learning that are known to offer powerful solutions in the acquisition and maintenance of robust knowledge bases. The architecture makes explicit the requirements on the basic computational tasks that are to be performed and is designed to make this computationally tractable even for very large databases. The main claims are that (i) the basic learning and deduction tasks are provably tractable and (ii) tractable learning offers viable approaches to a range of issues that have been previously identified as problematic for artificial intelligence systems that are programmed. Among the issues that learning offers to resolve are robustness to inconsistencies, robustness to incomplete information and resolving among alternatives. Attribute-efficient learning algorithms, which allow learning from few examples in large dimensional systems, are fundamental to the approach. Underpinning the overall architecture is a new principled approach to manipulating relations in learning systems. This approach, of independently quantified arguments, allows propositional learning algorithms to be applied systematically to learning relational concepts in polynomial time and in modular fashion.", "This paper advocates a new architecture for textual inference in which finding a good alignment is separated from evaluating entailment. Current approaches to semantic inference in question answering and textual entailment have approximated the entailment problem as that of computing the best alignment of the hypothesis to the text, using a locally decomposable matching score. We argue that there are significant weaknesses in this approach, including flawed assumptions of monotonicity and locality. Instead we propose a pipelined approach where alignment is followed by a classification step, in which we extract features representing high-level characteristics of the entailment problem, and pass the resulting feature vector to a statistical classifier trained on development data. We report results on data from the 2005 Pascal RTE Challenge which surpass previously reported results for alignment-based systems.", "This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.", "In this paper we aim to determine the location and orientation of a ground-level query image by matching to a reference database of overhead (e.g. 
satellite) images. For this task we collect a new dataset with one million pairs of street view and overhead images sampled from eleven U.S. cities. We explore several deep CNN architectures for cross-domain matching – Classification, Hybrid, Siamese, and Triplet networks. Classification and Hybrid architectures are accurate but slow since they allow only partial feature precomputation. We propose a new loss function which significantly improves the accuracy of Siamese and Triplet embedding networks while maintaining their applicability to large-scale retrieval tasks like image geolocalization. This image matching task is challenging not just because of the dramatic viewpoint difference between ground-level and overhead imagery but because the orientation (i.e. azimuth) of the street views is unknown making correspondence even more difficult. We examine several mechanisms to match in spite of this – training for rotation invariance, sampling possible rotations at query time, and explicitly predicting relative rotation of ground and overhead images with our deep networks. It turns out that explicit orientation supervision also improves location prediction accuracy. Our best performing architectures are roughly 2.5 times as accurate as the commonly used Siamese network baseline.", "This paper studies the problem of simultaneously aligning a batch of linearly correlated images despite gross corruption (such as occlusion). Our method seeks an optimal set of image domain transformations such that the matrix of transformed images can be decomposed as the sum of a sparse matrix of errors and a low-rank matrix of recovered aligned images. We reduce this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of l1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence. We verify the efficacy of the proposed robust alignment algorithm with extensive experiments with both controlled and uncontrolled real data, demonstrating higher accuracy and efficiency than existing methods over a wide range of realistic misalignments and corruptions.", "This paper proposes a data-driven approach for image alignment. Our main contribution is a novel network architecture that combines the strengths of convolutional neural networks (CNNs) and the Lucas-Kanade algorithm. The main component of this architecture is a Lucas-Kanade layer that performs the inverse compositional algorithm on convolutional feature maps. To train our network, we develop a cascaded feature learning method that incorporates the coarse-to-fine strategy into the training process. This method learns a pyramid representation of convolutional features in a cascaded manner and yields a cascaded network that performs coarse-to-fine alignment on the feature pyramids. We apply our model to the task of homography estimation, and perform training and evaluation on a large labeled dataset generated from the MS-COCO dataset. Experimental results show that the proposed approach significantly outperforms the other methods." ] }
1906.00110
2947039732
The Price of Anarchy (PoA) is a well-established game-theoretic concept to shed light on coordination issues arising in open distributed systems. Leaving agents to selfishly optimize comes with the risk of ending up in sub-optimal states (in terms of performance and/or costs), compared to a centralized system design. However, the PoA relies on strong assumptions about agents' rationality (e.g., resources and information) and interactions, whereas in many distributed systems agents interact locally with bounded resources. They do so repeatedly over time (in contrast to "one-shot games"), and their strategies may evolve. Using a more realistic evolutionary game model, this paper introduces a realized evolutionary Price of Anarchy (ePoA). The ePoA allows an exploration of equilibrium selection in dynamic distributed systems with multiple equilibria, based on local interactions of simple memoryless agents. Considering a fundamental game related to virus propagation on networks, we present analytical bounds on the ePoA in basic network topologies and for different strategy update dynamics. In particular, deriving stationary distributions of the stochastic evolutionary process, we find that the Nash equilibria are not always the most abundant states, and that different processes can feature significant off-equilibrium behavior, leading to a significantly higher ePoA compared to the PoA studied traditionally in the literature.
While in many textbook examples, Nash equilibria are typically "bad" and highlight a failure of cooperation (e.g., Prisoner's Dilemma, tragedy of the commons, etc.), research shows that in many application domains, even the worst game-theoretic equilibria are often fairly close to the optimal outcome @cite_18 . However, these examples also have in common that users have information about the game. While the study of information games also has a long tradition @cite_0 , much less is known today about the price of anarchy in games with partial information based on local interactions. In general, it is believed that similar extensions are challenging @cite_18 . An interesting line of recent work initiated the study of Bayes-Nash equilibria in games of incomplete information (e.g., @cite_36 ), and in particular the Bayes-Nash Price of Anarchy @cite_18 .
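To make the quantity under discussion explicit, recall the standard definition for a cost-minimization game (textbook material, not a formula taken from the works cited above):
\[
\mathrm{PoA} \;=\; \frac{\max_{s \in \mathcal{E}} \, \mathrm{cost}(s)}{\mathrm{cost}(s^{*})},
\]
where $\mathcal{E}$ is the set of Nash equilibria and $s^{*}$ is a socially optimal outcome. The Bayes-Nash Price of Anarchy mentioned above is obtained by letting $\mathcal{E}$ range over the Bayes-Nash equilibria of the incomplete-information game and taking expectations over the players' type distribution in both numerator and denominator.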
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_36" ], "mid": [ "2133243407", "2294025081", "2072113937" ], "abstract": [ "The price of anarchy (POA) is a worst-case measure of the inefficiency of selfish behavior, defined as the ratio of the objective function value of a worst Nash equilibrium of a game and that of an optimal outcome. This measure implicitly assumes that players successfully reach some Nash equilibrium. This drawback motivates the search for inefficiency bounds that apply more generally to weaker notions of equilibria, such as mixed Nash and correlated equilibria; or to sequences of outcomes generated by natural experimentation strategies, such as successive best responses or simultaneous regret-minimization. We prove a general and fundamental connection between the price of anarchy and its seemingly stronger relatives in classes of games with a sum objective. First, we identify a \"canonical sufficient condition\" for an upper bound of the POA for pure Nash equilibria, which we call a smoothness argument. Second, we show that every bound derived via a smoothness argument extends automatically, with no quantitative degradation in the bound, to mixed Nash equilibria, correlated equilibria, and the average objective function value of regret-minimizing players (or \"price of total anarchy\"). Smoothness arguments also have automatic implications for the inefficiency of approximate and Bayesian-Nash equilibria and, under mild additional assumptions, for bicriteria bounds and for polynomial-length best-response sequences. We also identify classes of games --- most notably, congestion games with cost functions restricted to an arbitrary fixed set --- that are tight, in the sense that smoothness arguments are guaranteed to produce an optimal worst-case upper bound on the POA, even for the smallest set of interest (pure Nash equilibria). Byproducts of our proof of this result include the first tight bounds on the POA in congestion games with non-polynomial cost functions, and the first structural characterization of atomic congestion games that are universal worst-case examples for the POA.", "The price of anarchy, defined as the ratio of the worst-case objective function value of a Nash equilibrium of a game and that of an optimal outcome, quantifies the inefficiency of selfish behavior. Remarkably good bounds on this measure are known for a wide range of application domains. However, such bounds are meaningful only if a game's participants successfully reach a Nash equilibrium. This drawback motivates inefficiency bounds that apply more generally to weaker notions of equilibria, such as mixed Nash equilibria and correlated equilibria, and to sequences of outcomes generated by natural experimentation strategies, such as successive best responses and simultaneous regret-minimization. We establish a general and fundamental connection between the price of anarchy and its seemingly more general relatives. First, we identify a “canonical sufficient condition” for an upper bound on the price of anarchy of pure Nash equilibria, which we call a smoothness argument. Second, we prove an “extension theorem”: every bound on the price of anarchy that is derived via a smoothness argument extends automatically, with no quantitative degradation in the bound, to mixed Nash equilibria, correlated equilibria, and the average objective function value of every outcome sequence generated by no-regret learners. 
Smoothness arguments also have automatic implications for the inefficiency of approximate equilibria, for bicriteria bounds, and, under additional assumptions, for polynomial-length best-response sequences. Third, we prove that in congestion games, smoothness arguments are “complete” in a proof-theoretic sense: despite their automatic generality, they are guaranteed to produce optimal worst-case upper bounds on the price of anarchy.", "In the traffic assignment problem, first proposed by Wardrop in 1952, commuters select the shortest available path to travel from their origins to their destinations. We study a generalization of this problem in which competitors, who may control a nonnegligible fraction of the total flow, ship goods across a network. This type of games, usually referred to as atomic games, readily applies to situations in which the competing freight companies have market power. Other applications include intelligent transportation systems, competition among telecommunication network service providers, and scheduling with flexible machines. Our goal is to determine to what extent these systems can benefit from some form of coordination or regulation. We measure the quality of the outcome of the game without centralized control by computing the worst-case inefficiency of Nash equilibria. The main conclusion is that although self-interested competitors will not achieve a fully efficient solution from the system's point of view, the loss is not too severe. We show how to compute several bounds for the worst-case inefficiency that depend on the characteristics of cost functions and on the market structure in the game. In addition, building upon the work of Catoni and Pallotino, we show examples in which market aggregation (or collusion) adversely impacts the aggregated competitors, even though their market power increases. For example, Nash equilibria of atomic network games may be less efficient than the corresponding Wardrop equilibria. When competitors are completely symmetric, we provide a characterization of the Nash equilibrium using a potential function, and prove that this counterintuitive phenomenon does not arise. Finally, we study a pricing mechanism that elicits more coordination from the players by reducing the worst-case inefficiency of Nash equilibria." ] }
1906.00110
2947039732
The Price of Anarchy (PoA) is a well-established game-theoretic concept to shed light on coordination issues arising in open distributed systems. Leaving agents to selfishly optimize comes with the risk of ending up in sub-optimal states (in terms of performance and/or costs), compared to a centralized system design. However, the PoA relies on strong assumptions about agents' rationality (e.g., resources and information) and interactions, whereas in many distributed systems agents interact locally with bounded resources. They do so repeatedly over time (in contrast to "one-shot games"), and their strategies may evolve. Using a more realistic evolutionary game model, this paper introduces a realized evolutionary Price of Anarchy (ePoA). The ePoA allows an exploration of equilibrium selection in dynamic distributed systems with multiple equilibria, based on local interactions of simple memoryless agents. Considering a fundamental game related to virus propagation on networks, we present analytical bounds on the ePoA in basic network topologies and for different strategy update dynamics. In particular, deriving stationary distributions of the stochastic evolutionary process, we find that the Nash equilibria are not always the most abundant states, and that different processes can feature significant off-equilibrium behavior, leading to a significantly higher ePoA compared to the PoA studied traditionally in the literature.
More specifically, in this paper we revisited the virus inoculation game of @cite_32 . Traditional virus propagation models studied virus infection in terms of birth and death rates of viruses @cite_2 @cite_16 as well as through local interactions on networks @cite_6 @cite_3 @cite_43 (e.g., the Internet @cite_11 @cite_1 ). An interesting study is due to Montanari and Saberi @cite_15 , who compare game-theoretic and epidemic models, investigating differences in how viruses, new technologies, and new political or social beliefs spread.
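For concreteness, the birth-and-death-rate view mentioned above can be illustrated with the classical mean-field SIS model (a standard textbook formulation, not one taken from the cited papers). Writing $s$ and $i$ for the susceptible and infected fractions of the population,
\[
\frac{di}{dt} \;=\; \beta\, s\, i \;-\; \gamma\, i, \qquad s + i = 1,
\]
where $\beta$ is the infection (birth) rate and $\gamma$ the curing (death) rate; the infection persists in the long run only if $\beta/\gamma > 1$. The network-based models cited above refine this threshold by taking the local interaction structure (e.g., the spectral radius of the contact graph) into account.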
{ "cite_N": [ "@cite_1", "@cite_32", "@cite_3", "@cite_6", "@cite_43", "@cite_2", "@cite_15", "@cite_16", "@cite_11" ], "mid": [ "2124486211", "2161728228", "2030539428", "2108050790", "2398538542", "2110919848", "1987511126", "2171031021", "2086627840" ], "abstract": [ "The strong analogy between biological viruses and their computational counterparts has motivated the authors to adapt the techniques of mathematical epidemiology to the study of computer virus propagation. In order to allow for the most general patterns of program sharing, a standard epidemiological model is extended by placing it on a directed graph and a combination of analysis and simulation is used to study its behavior. The conditions under which epidemics are likely to occur are determined, and, in cases where they do, the dynamics of the expected number of infected individuals are examined as a function of time. It is concluded that an imperfect defense against computer viruses can still be highly effective in preventing their widespread proliferation, provided that the infection rate does not exceed a well-defined critical epidemic threshold. >", "Many models for the spread of infectious diseases in populations have been analyzed mathematically and applied to specific diseases. Threshold theorems involving the basic reproduction number @math , the contact number @math , and the replacement number @math are reviewed for the classic SIR epidemic and endemic models. Similar results with new expressions for @math are obtained for MSEIR and SEIR endemic models with either continuous age or age groups. Values of @math and @math are estimated for various diseases including measles in Niger and pertussis in the United States. Previous models with age structure, heterogeneity, and spatial structure are surveyed.", "The study of social networks, and in particular the spread of disease on networks, has attracted considerable recent attention in the physics community. In this paper, we show that a large class of standard epidemiological models, the so-called susceptible infective removed (SIR) models can be solved exactly on a wide variety of networks. In addition to the standard but unrealistic case of fixed infectiveness time and fixed and uncorrelated probability of transmission between all pairs of individuals, we solve cases in which times and probabilities are nonuniform and correlated. We also consider one simple case of an epidemic in a structured population, that of a sexually transmitted disease in a population divided into men and women. We confirm the correctness of our exact solutions with numerical simulations of SIR epidemics on networks.", "To understand the current extent of the computer virus problem and predict its future course, the authors have conducted a statistical analysis of computer virus incidents in a large, stable sample population of PCs and developed new epidemiological models of computer virus spread. Only a small fraction of all known viruses have appeared in real incidents, partly because many viruses are below the theoretical epidemic threshold. The observed sub-exponential rate of viral spread can be explained by models of localized software exchange. A surprisingly small fraction of machines in well-protected business environments are infected. This may be explained by a model in which, once a machine is found to be infected, neighboring machines are checked for viruses. This kill signal idea could be implemented in networks to greatly reduce the threat of viral spread. 
A similar principle has been incorporated into a cost-effective anti-virus policy for organizations which works quite well in practice. >", "The spread of epidemics and malware is commonly modeled by diffusion processes on networks. Protective interventions such as vaccinations or installing anti-virus software are used to contain their spread. Typically, each node in the network has to decide its own strategy of securing itself, and its benefit depends on which other nodes are secure, making this a natural game-theoretic setting. There has been a lot of work on network security game models, but most of the focus has been either on simplified epidemic models or homogeneous network structure. We develop a new formulation for an epidemic containment game, which relies on the characterization of the SIS model in terms of the spectral radius of the network. We show in this model that pure Nash equilibria (NE) always exist, and can be found by a best response strategy. We analyze the complexity of finding NE, and derive rigorous bounds on their costs and the Price of Anarchy or PoA (the ratio of the cost of the worst NE to the optimum social cost) in general graphs as well as in random graph models. In particular, for arbitrary power-law graphs with exponent β > 2, we show that the PoA is bounded by O(T2(β -1))), where T = γ α is the ratio of the recovery rate to the transmission rate in the SIS model. We prove that this bound is tight up to a constant factor for the Chung-Lu random power-law graph model. We study the characteristics of Nash equilibria empirically in different real communication and infrastructure networks, and find that our analytical results can help explain some of the empirical observations.", "The spread of diseases in human and other populations can be described in terms of networks, where individuals are represented by nodes and modes of contact by edges. Similar models can be applied to the spread of viruses on the Internet. In their Perspective, Lloyd and May discuss the similarities and differences between the dynamics of computer viruses and infections of human and other populations.", "The effect of virus spreading in a telecommunication network, where a certain curing strategy is deployed, can be captured by epidemic models. In the N-intertwined model proposed and studied in [1], [2], the probability of each node to be infected depends on the curing and infection rate of its neighbors. In this paper, we consider the case where all infection rates are equal and different values of curing rates can be deployed within a given budget, in order to minimize the overall infection of the network. We investigate this difficult optimization together with a related problem where the curing budget must be minimized within a given level of network infection. Some properties of these problems are derived and several solution algorithms are proposed. These algorithms are compared on two real world network instances, while Erdos-Renyi graphs and some special graphs such as the cycle, the star, the wheel and the complete bipartite graph are also addressed.", "Suppose we have two competing ideas products viruses, that propagate over a social or other network. Suppose that they are strong virulent enough, so that each, if left alone, could lead to an epidemic. What will happen when both operate on the network? Earlier models assume that there is perfect competition: if a user buys product 'A' (or gets infected with virus 'X'), she will never buy product 'B' (or virus 'Y'). 
This is not always true: for example, a user could install and use both Firefox and Google Chrome as browsers. Similarly, one type of flu may give partial immunity against some other similar disease. In the case of full competition, it is known that 'winner takes all,' that is the weaker virus product will become extinct. In the case of no competition, both viruses survive, ignoring each other. What happens in-between these two extremes? We show that there is a phase transition: if the competition is harsher than a critical level, then 'winner takes all;' otherwise, the weaker virus survives. These are the contributions of this paper (a) the problem definition, which is novel even in epidemiology literature (b) the phase-transition result and (c) experiments on real data, illustrating the suitability of our results.", "The population dynamics underlying the diffusion of ideas hold many qualitative similarities to those involved in the spread of infections. In spite of much suggestive evidence this analogy is hardly ever quantified in useful ways. The standard benefit of modeling epidemics is the ability to estimate quantitatively population average parameters, such as interpersonal contact rates, incubation times, duration of infectious periods, etc. In most cases such quantities generalize naturally to the spread of ideas and provide a simple means of quantifying sociological and behavioral patterns. Here we apply several paradigmatic models of epidemics to empirical data on the advent and spread of Feynman diagrams through the theoretical physics communities of the USA, Japan, and the USSR in the period immediately after World War II. This test case has the advantage of having been studied historically in great detail, which allows validation of our results. We estimate the effectiveness of adoption of the idea in the three communities and find values for parameters reflecting both intentional social organization and long lifetimes for the idea. These features are probably general characteristics of the spread of ideas, but not of common epidemics." ] }
1906.00184
2947908969
Image-to-image translation models have shown remarkable ability in transferring images among different domains. Most existing work follows the setting that the source and target domains remain the same at training and inference time, which does not generalize to scenarios where an image must be translated from one unseen domain to another unseen domain. In this work, we propose the Unsupervised Zero-Shot Image-to-image Translation (UZSIT) problem, which aims to learn a model that can transfer translation knowledge from seen domains to unseen domains. Accordingly, we propose a framework called ZstGAN: by introducing an adversarial training scheme, ZstGAN learns to model each domain with a domain-specific feature distribution that is semantically consistent across the vision and attribute modalities. Then the domain-invariant features are disentangled with a shared encoder for image generation. We carry out extensive experiments on the CUB and FLO datasets, and the results demonstrate the effectiveness of the proposed method on the UZSIT task. Moreover, ZstGAN shows significant accuracy improvements over state-of-the-art zero-shot learning methods on CUB and FLO.
Image generation has been widely investigated in recent years. Most works focus on modeling the natural image distribution. The Generative Adversarial Network (GAN) @cite_10 was first proposed to generate images from random variables via a two-player minimax game: a generator G tries to create fake but plausible images, while a discriminator D is trained to distinguish real images from fake ones. To address the stability issues of GAN, Wasserstein-GAN (WGAN) @cite_26 was proposed to optimize an approximation of the Wasserstein distance. To alleviate the vanishing and exploding gradient problems of WGAN, @cite_33 proposed WGAN-GP, which uses a gradient penalty instead of weight clipping to enforce the Lipschitz constraint in WGAN. @cite_16 also proposed LSGAN and showed that optimizing the least-squares cost function amounts to minimizing a Pearson @math divergence. In this paper, we build on WGAN-GP @cite_33 to generate domain-specific features and translated images.
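To make the gradient-penalty idea concrete, here is a minimal PyTorch sketch of the WGAN-GP critic penalty (our own illustration, not the implementation of @cite_33; the `critic` module and the `real`/`fake` batches and their shapes are assumptions):

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP term: push the critic's gradient norm towards 1 on random
    interpolates between real and generated samples (soft Lipschitz constraint)."""
    batch_size = real.size(0)
    # One interpolation coefficient per sample, broadcast over the remaining dims.
    eps = torch.rand(batch_size, *([1] * (real.dim() - 1)), device=real.device)
    interpolates = (eps * real.detach() + (1.0 - eps) * fake.detach()).requires_grad_(True)
    scores = critic(interpolates)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interpolates,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True)[0]
    grad_norm = grads.reshape(batch_size, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

# Typical critic loss for one update step (hypothetical critic/real/fake tensors):
# d_loss = critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)
```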
{ "cite_N": [ "@cite_26", "@cite_10", "@cite_33", "@cite_16" ], "mid": [ "2787223504", "2687693326", "2963981733", "2962900302" ], "abstract": [ "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the \"Frechet Inception Distance\" (FID) which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP) outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark.", "Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. 
We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the Frechet Inception Distance'' (FID) which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP) outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark.", "We propose in this paper a novel approach to tackle the problem of mode collapse encountered in generative adversarial network (GAN). Our idea is intuitive but proven to be very effective, especially in addressing some key limitations of GAN. In essence, it combines the Kullback-Leibler (KL) and reverse KL divergences into a unified objective function, thus it exploits the complementary statistical properties from these divergences to effectively diversify the estimated density in capturing multi-modes. We term our method dual discriminator generative adversarial nets (D2GAN) which, unlike GAN, has two discriminators; and together with a generator, it also has the analogy of a minimax game, wherein a discriminator rewards high scores for samples from data distribution whilst another discriminator, conversely, favoring data from the generator, and the generator produces data to fool both two discriminators. We develop theoretical analysis to show that, given the maximal discriminators, optimizing the generator of D2GAN reduces to minimizing both KL and reverse KL divergences between data distribution and the distribution induced from the data generated by the generator, hence effectively avoiding the mode collapsing problem. We conduct extensive experiments on synthetic and real-world large-scale datasets (MNIST, CIFAR-10, STL-10, ImageNet), where we have made our best effort to compare our D2GAN with the latest state-of-the-art GAN's variants in comprehensive qualitative and quantitative evaluations. The experimental results demonstrate the competitive and superior performance of our approach in generating good quality and diverse samples over baselines, and the capability of our method to scale up to ImageNet database." ] }
1906.00184
2947908969
Image-to-image translation models have shown remarkable ability in transferring images among different domains. Most existing work follows the setting that the source and target domains remain the same at training and inference time, which does not generalize to scenarios where an image must be translated from one unseen domain to another unseen domain. In this work, we propose the Unsupervised Zero-Shot Image-to-image Translation (UZSIT) problem, which aims to learn a model that can transfer translation knowledge from seen domains to unseen domains. Accordingly, we propose a framework called ZstGAN: by introducing an adversarial training scheme, ZstGAN learns to model each domain with a domain-specific feature distribution that is semantically consistent across the vision and attribute modalities. Then the domain-invariant features are disentangled with a shared encoder for image generation. We carry out extensive experiments on the CUB and FLO datasets, and the results demonstrate the effectiveness of the proposed method on the UZSIT task. Moreover, ZstGAN shows significant accuracy improvements over state-of-the-art zero-shot learning methods on CUB and FLO.
Zero-Shot Learning (ZSL) was first introduced by @cite_0 , where the training and test classes are disjoint for object recognition. Traditional methods for ZSL are based on learning an embedding from the visual space to the semantic space. At test time, the semantic vector of an unseen sample is extracted and the most likely class is predicted by a nearest-neighbor method @cite_5 @cite_4 @cite_19 . Recent works on ZSL have widely explored the idea of generative models. @cite_25 presented a deep generative model for ZSL based on the VAE @cite_28 . Owing to the rapid development of GANs, other approaches used GANs to synthesize visual representations for the seen and unseen classes @cite_15 @cite_24 . However, the generated images usually lack sufficient quality to train a classifier for both the seen and unseen classes. Hence, @cite_20 @cite_12 used GANs to synthesize CNN features rather than image pixels, conditioned on class-level semantic information. On the other hand, considering that ZSL involves a domain shift problem, @cite_13 @cite_27 presented Generalized ZSL (GZSL), which includes both seen and unseen classes at test time.
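As a concrete illustration of the embedding-plus-nearest-neighbor recipe described above, here is a minimal NumPy sketch (our own illustration with assumed shapes; `embed` stands for whatever visual-to-semantic mapping has been learned on the seen classes):

```python
import numpy as np

def zsl_predict(visual_features, class_semantics, embed):
    """Classic ZSL inference: project visual features into the semantic space,
    then assign each sample to the nearest class prototype by cosine similarity.
    visual_features: (n_samples, d_vis); class_semantics: (n_classes, d_sem)."""
    z = embed(visual_features)                                    # (n_samples, d_sem)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    protos = class_semantics / np.linalg.norm(class_semantics, axis=1, keepdims=True)
    return (z @ protos.T).argmax(axis=1)                          # predicted class indices

# Toy usage with a (hypothetical) linear mapping W learned on the seen classes:
# W = np.random.randn(2048, 312)
# predictions = zsl_predict(X_unseen, S_unseen, lambda v: v @ W)
```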
{ "cite_N": [ "@cite_12", "@cite_4", "@cite_28", "@cite_0", "@cite_19", "@cite_24", "@cite_27", "@cite_5", "@cite_15", "@cite_13", "@cite_25", "@cite_20" ], "mid": [ "2949533609", "2516449915", "2951478085", "2963960318", "2771119646", "2209594346", "2741634395", "2949823873", "2520613337", "2611345819", "2400717490", "2609698233" ], "abstract": [ "Zero-shot learning (ZSL) aims to recognize unseen object classes without any training samples, which can be regarded as a form of transfer learning from seen classes to unseen ones. This is made possible by learning a projection between a feature space and a semantic space (e.g. attribute space). Key to ZSL is thus to learn a projection function that is robust against the often large domain gap between the seen and unseen classes. In this paper, we propose a novel ZSL model termed domain-invariant projection learning (DIPL). Our model has two novel components: (1) A domain-invariant feature self-reconstruction task is introduced to the seen unseen class data, resulting in a simple linear formulation that casts ZSL into a min-min optimization problem. Solving the problem is non-trivial, and a novel iterative algorithm is formulated as the solver, with rigorous theoretic algorithm analysis provided. (2) To further align the two domains via the learned projection, shared semantic structure among seen and unseen classes is explored via forming superclasses in the semantic space. Extensive experiments show that our model outperforms the state-of-the-art alternatives by significant margins.", "With the sustaining bloom of multimedia data, Zero-shot Learning (ZSL) techniques have attracted much attention in recent years for its ability to train learning models that can handle “unseen” categories. Existing ZSL algorithms mainly take advantages of attribute-based semantic space and only focus on static image data. Besides, most ZSL studies merely consider the semantic embedded labels and fail to address domain shift problem. In this paper, we purpose a deep two-output model for video ZSL and action recognition tasks by computing both spatial and temporal features from video contents through distinct Convolutional Neural Networks (CNNs) and training a Multi-layer Perceptron (MLP) upon extracted features to map videos to semantic embedding word vectors. Moreover, we introduce a domain adaptation strategy named “ConSSEV” — by combining outputs from two distinct output layers of our MLP to improve the results of zero-shot learning. Our experiments on UCF101 dataset demonstrate the purposed model has more advantages associated with more complex video embedding schemes, and outperforms the state-of-the-art zero-shot learning techniques.", "Zero-shot learning (ZSL) aims to recognize objects of novel classes without any training samples of specific classes, which is achieved by exploiting the semantic information and auxiliary datasets. Recently most ZSL approaches focus on learning visual-semantic embeddings to transfer knowledge from the auxiliary datasets to the novel classes. However, few works study whether the semantic information is discriminative or not for the recognition task. To tackle such problem, we propose a coupled dictionary learning approach to align the visual-semantic structures using the class prototypes, where the discriminative information lying in the visual space is utilized to improve the less discriminative semantic space. 
Then, zero-shot recognition can be performed in different spaces by the simple nearest neighbor approach using the learned class prototypes. Extensive experiments on four benchmark datasets show the effectiveness of the proposed approach.", "Suffering from the extreme training data imbalance between seen and unseen classes, most of existing state-of-the-art approaches fail to achieve satisfactory results for the challenging generalized zero-shot learning task. To circumvent the need for labeled examples of unseen classes, we propose a novel generative adversarial network (GAN) that synthesizes CNN features conditioned on class-level semantic information, offering a shortcut directly from a semantic descriptor of a class to a class-conditional feature distribution. Our proposed approach, pairing a Wasserstein GAN with a classification loss, is able to generate sufficiently discriminative CNN features to train softmax classifiers or any multimodal embedding method. Our experimental results demonstrate a significant boost in accuracy over the state of the art on five challenging datasets - CUB, FLO, SUN, AWA and ImageNet - in both the zero-shot learning and generalized zero-shot learning settings.", "Suffering from the extreme training data imbalance between seen and unseen classes, most of existing state-of-the-art approaches fail to achieve satisfactory results for the challenging generalized zero-shot learning task. To circumvent the need for labeled examples of unseen classes, we propose a novel generative adversarial network (GAN) that synthesizes CNN features conditioned on class-level semantic information, offering a shortcut directly from a semantic descriptor of a class to a class-conditional feature distribution. Our proposed approach, pairing a Wasserstein GAN with a classification loss, is able to generate sufficiently discriminative CNN features to train softmax classifiers or any multimodal embedding method. Our experimental results demonstrate a significant boost in accuracy over the state of the art on five challenging datasets -- CUB, FLO, SUN, AWA and ImageNet -- in both the zero-shot learning and generalized zero-shot learning settings.", "Zero-shot learning (ZSL) can be considered as a special case of transfer learning where the source and target domains have different tasks label spaces and the target domain is unlabelled, providing little guidance for the knowledge transfer. A ZSL method typically assumes that the two domains share a common semantic representation space, where a visual feature vector extracted from an image video can be projected embedded using a projection function. Existing approaches learn the projection function from the source domain and apply it without adaptation to the target domain. They are thus based on naive knowledge transfer and the learned projections are prone to the domain shift problem. In this paper a novel ZSL method is proposed based on unsupervised domain adaptation. Specifically, we formulate a novel regularised sparse coding framework which uses the target domain class labels' projections in the semantic space to regularise the learned target domain projection thus effectively overcoming the projection domain shift problem. Extensive experiments on four object and action recognition benchmark datasets show that the proposed ZSL method significantly outperforms the state-of-the-arts.", "Zero-shot learning (ZSL) is to construct recognition models for unseen target classes that have no labeled samples for training. 
It utilizes the class attributes or semantic vectors as side information and transfers supervision information from related source classes with abundant labeled samples. Existing ZSL approaches adopt an intermediary embedding space to measure the similarity between a sample and the attributes of a target class to perform zero-shot classification. However, this way may suffer from the information loss caused by the embedding process and the similarity measure cannot fully make use of the data distribution. In this paper, we propose a novel approach which turns the ZSL problem into a conventional supervised learning problem by synthesizing samples for the unseen classes. Firstly, the probability distribution of an unseen class is estimated by using the knowledge from seen classes and the class attributes. Secondly, the samples are synthesized based on the distribution for the unseen class. Finally, we can train any supervised classifiers based on the synthesized samples. Extensive experiments on benchmarks demonstrate the superiority of the proposed approach to the state-of-the-art ZSL approaches.", "We investigate the problem of generalized zero-shot learning (GZSL). GZSL relaxes the unrealistic assumption in conventional ZSL that test data belong only to unseen novel classes. In GZSL, test data might also come from seen classes and the labeling space is the union of both types of classes. We show empirically that a straightforward application of the classifiers provided by existing ZSL approaches does not perform well in the setting of GZSL. Motivated by this, we propose a surprisingly simple but effective method to adapt ZSL approaches for GZSL. The main idea is to introduce a calibration factor to calibrate the classifiers for both seen and unseen classes so as to balance two conflicting forces: recognizing data from seen classes and those from unseen ones. We develop a new performance metric called the Area Under Seen-Unseen accuracy Curve to characterize this tradeoff. We demonstrate the utility of this metric by analyzing existing ZSL approaches applied to the generalized setting. Extensive empirical studies reveal strengths and weaknesses of those approaches on three well-studied benchmark datasets, including the large-scale ImageNet Full 2011 with 21,000 unseen categories. We complement our comparative studies in learning methods by further establishing an upper-bound on the performance limit of GZSL. There, our idea is to use class-representative visual features as the idealized semantic embeddings. We show that there is a large gap between the performance of existing approaches and the performance limit, suggesting that improving the quality of class semantic embeddings is vital to improving zero-shot learning.", "Zero-Shot Learning (ZSL) promises to scale visual recognition by bypassing the conventional model training requirement of annotated examples for every category. This is achieved by establishing a mapping connecting low-level features and a semantic description of the label space, referred as visual-semantic mapping, on auxiliary data. Re-using the learned mapping to project target videos into an embedding space thus allows novel-classes to be recognised by nearest neighbour inference. However, existing ZSL methods suffer from auxiliary-target domain shift intrinsically induced by assuming the same mapping for the disjoint auxiliary and target classes. This compromises the generalisation accuracy of ZSL recognition on the target data. 
In this work, we improve the ability of ZSL to generalise across this domain shift in both model- and data-centric ways by formulating a visual-semantic mapping with better generalisation properties and a dynamic data re-weighting method to prioritise auxiliary data that are relevant to the target classes. Specifically: (1) We introduce a multi-task visual-semantic mapping to improve generalisation by constraining the semantic mapping parameters to lie on a low-dimensional manifold, (2) We explore prioritised data augmentation by expanding the pool of auxiliary data with additional instances weighted by relevance to the target domain. The proposed new model is applied to the challenging zero-shot action recognition problem to demonstrate its advantages over existing ZSL models.", "We propose a novel approach for unsupervised zero-shot learning (ZSL) of classes based on their names. Most existing unsupervised ZSL methods aim to learn a model for directly comparing image features and class names. However, this proves to be a difficult task due to dominance of non-visual semantics in underlying vector-space embeddings of class names. To address this issue, we discriminatively learn a word representation such that the similarities between class and combination of attribute names fall in line with the visual similarity. Contrary to the traditional zero-shot learning approaches that are built upon attribute presence, our approach bypasses the laborious attribute-class relation annotations for unseen classes. In addition, our proposed approach renders text-only training possible, hence, the training can be augmented without the need to collect additional image data. The experimental results show that our method yields state-of-the-art results for unsupervised ZSL in three benchmark datasets.", "We investigate the problem of generalized zero-shot learning (GZSL). GZSL relaxes the unrealistic assumption in conventional zero-shot learning (ZSL) that test data belong only to unseen novel classes. In GZSL, test data might also come from seen classes and the labeling space is the union of both types of classes. We show empirically that a straightforward application of classifiers provided by existing ZSL approaches does not perform well in the setting of GZSL. Motivated by this, we propose a surprisingly simple but effective method to adapt ZSL approaches for GZSL. The main idea is to introduce a calibration factor to calibrate the classifiers for both seen and unseen classes so as to balance two conflicting forces: recognizing data from seen classes and those from unseen ones. We develop a new performance metric called the Area Under Seen-Unseen accuracy Curve to characterize this trade-off. We demonstrate the utility of this metric by analyzing existing ZSL approaches applied to the generalized setting. Extensive empirical studies reveal strengths and weaknesses of those approaches on three well-studied benchmark datasets, including the large-scale ImageNet with more than 20,000 unseen categories. We complement our comparative studies in learning methods by further establishing an upper bound on the performance limit of GZSL. In particular, our idea is to use class-representative visual features as the idealized semantic embeddings. 
We show that there is a large gap between the performance of existing approaches and the performance limit, suggesting that improving the quality of class semantic embeddings is vital to improving ZSL.", "By transferring knowledge from the abundant labeled samples of known source classes, zero-shot learning (ZSL) makes it possible to train recognition models for novel target classes that have no labeled samples. Conventional ZSL approaches usually adopt a two-step recognition strategy, in which the test sample is projected into an intermediary space in the first step, and then the recognition is carried out by considering the similarity between the sample and target classes in the intermediary space. Due to this redundant intermediate transformation, information loss is unavoidable, thus degrading the performance of overall system. Rather than adopting this two-step strategy, in this paper, we propose a novel one-step recognition framework that is able to perform recognition in the original feature space by using directly trained classifiers. To address the lack of labeled samples for training supervised classifiers for the target classes, we propose to transfer samples from source classes with pseudo labels assigned, in which the transferred samples are selected based on their transferability and diversity. Moreover, to account for the unreliability of pseudo labels of transferred samples, we modify the standard support vector machine formulation such that the unreliable positive samples can be recognized and suppressed in the training phase. The entire framework is fairly general with the possibility of further extensions to several common ZSL settings. Extensive experiments on four benchmark data sets demonstrate the superiority of the proposed framework, compared with the state-of-the-art approaches, in various settings." ] }
1906.00117
2947794021
Recently, a method [7] was proposed to generate contrastive explanations for differentiable models such as deep neural networks, where one has complete access to the model. In this work, we propose a method, Model Agnostic Contrastive Explanations Method (MACEM), to generate contrastive explanations for any classification model where one is able to query the class probabilities for a desired input. This allows us to generate contrastive explanations not only for neural networks, but also for models such as random forests, boosted trees and even arbitrary ensembles that are still amongst the state-of-the-art when learning on structured data [13]. Moreover, to obtain meaningful explanations we propose a principled approach to handling real and categorical features, leading to novel formulations for computing pertinent positives and negatives that form the essence of a contrastive explanation. A detailed treatment of the different data types of this nature was not performed in the previous work, which assumed all features to be positive real-valued with zero being indicative of the least interesting value. We part with this strong implicit assumption and generalize these methods so as to be applicable across a much wider range of problem settings. We quantitatively and qualitatively validate our approach over 5 public datasets covering diverse domains.
Trust and transparency of AI systems have received a lot of attention recently @cite_35 . Explainability is considered to be one of the cornerstones for building trustworthy systems and has been a particular focus in the research community @cite_16 @cite_37 . Researchers are trying to build better-performing interpretable models @cite_5 @cite_14 @cite_9 @cite_32 @cite_36 @cite_24 as well as improved methods to understand black-box models such as deep neural networks @cite_2 @cite_19 @cite_13 .
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_14", "@cite_36", "@cite_9", "@cite_32", "@cite_24", "@cite_19", "@cite_2", "@cite_5", "@cite_16", "@cite_13" ], "mid": [ "2762968537", "2808925008", "2769542353", "2803532212", "2963847595", "2788362053", "2806422867", "2120312633", "2946940672", "2963041618", "2949690500", "2420245003" ], "abstract": [ "We characterize three notions of explainable AI that cut across research fields: opaque systems that offer no insight into its algo- rithmic mechanisms; interpretable systems where users can mathemat- ically analyze its algorithmic mechanisms; and comprehensible systems that emit symbols enabling user-driven explanations of how a conclusion is reached. The paper is motivated by a corpus analysis of NIPS, ACL, COGSCI, and ICCV ECCV paper titles showing differences in how work on explainable AI is positioned in various fields. We close by introducing a fourth notion: truly explainable systems, where automated reasoning is central to output crafted explanations without requiring human post processing as final step of the generative process.", "The widescale use of machine learning algorithms to drive decision-making has highlighted the critical importance of ensuring the interpretability of such models in order to engender trust in their output. The state-of-the-art recommendation systems use black-box latent factor models that provide no explanation of why a recommendation has been made, as they abstract their decision processes to a high-dimensional latent space which is beyond the direct comprehension of humans. We propose a novel approach for extracting explanations from latent factor recommendation systems by training association rules on the output of a matrix factorisation black-box model. By taking advantage of the interpretable structure of association rules, we demonstrate that predictive accuracy of the recommendation model can be maintained whilst yielding explanations with high fidelity to the black-box model on a unique industry dataset. Our approach mitigates the accuracy-interpretability trade-off whilst avoiding the need to sacrifice flexibility or use external data sources. We also contribute to the ill-defined problem of evaluating interpretability.", "Transparency, user trust, and human comprehension are popular ethical motivations for interpretable machine learning. In support of these goals, researchers evaluate model explanation performance using humans and real world applications. This alone presents a challenge in many areas of artificial intelligence. In this position paper, we propose a distinction between descriptive and persuasive explanations. We discuss reasoning suggesting that functional interpretability may be correlated with cognitive function and user preferences. If this is indeed the case, evaluation and optimization using functional metrics could perpetuate implicit cognitive bias in explanations that threaten transparency. Finally, we propose two potential research directions to disambiguate cognitive function and explanation models, retaining control over the tradeoff between accuracy and interpretability.", "The recent years have witnessed the rise of accurate but obscure decision systems which hide the logic of their internal decision processes to the users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. 
Therefore, we need explanations that reveals the reasons why a predictor takes a certain decision. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons of the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first leans a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. Then it derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons of the decision; and a set of counterfactual rules, suggesting the changes in the instance's features that lead to a different outcome. Wide experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy in mimicking the black box.", "There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems is neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we describe foundational concepts of explainability and show how they can be used to classify existing literature. We discuss why current approaches to explanatory methods especially for deep neural networks are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.", "Despite a growing body of research focused on creating interpretable machine learning methods, there have been few empirical studies verifying whether interpretable methods achieve their intended effects on end users. We present a framework for assessing the effects of model interpretability on users via pre-registered experiments in which participants are shown functionally identical models that vary in factors thought to influence interpretability. Using this framework, we ran a sequence of large-scale randomized experiments, varying two putative drivers of interpretability: the number of features and the model transparency (clear or black-box). We measured how these factors impact trust in model predictions, the ability to simulate a model, and the ability to detect a model's mistakes. We found that participants who were shown a clear model with a small number of features were better able to simulate the model's predictions. However, we found no difference in multiple measures of trust and found that clear models did not improve the ability to correct mistakes. These findings suggest that interpretability research could benefit from more emphasis on empirically verifying that interpretable models achieve all their intended effects.", "During the 60s and 70s, AI researchers explored intuitions about intelligence by writing programs that displayed intelligent behavior. Many good ideas came out from this work but programs written by hand were not robust or general. 
After the 80s, research increasingly shifted to the development of learners capable of inferring behavior and functions from experience and data, and solvers capable of tackling well-defined but intractable models like SAT, classical planning, Bayesian networks, and POMDPs. The learning approach has achieved considerable success but results in black boxes that do not have the flexibility, transparency, and generality of their model-based counterparts. Model-based approaches, on the other hand, require models and scalable algorithms. Model-free learners and model-based solvers have close parallels with Systems 1 and 2 in current theories of the human mind: the first, a fast, opaque, and inflexible intuitive mind; the second, a slow, transparent, and flexible analytical mind. In this paper, I review developments in AI and draw on these theories to discuss the gap between model-free learners and model-based solvers, a gap that needs to be bridged in order to have intelligent systems that are robust and general.", "The scientific research in the area of computational mechanisms for trust and reputation in virtual societies is a recent discipline oriented to increase the reliability and performance of electronic communities. Computer science has moved from the paradigm of isolated machines to the paradigm of networks and distributed computing. Likewise, artificial intelligence is quickly moving from the paradigm of isolated and non-situated intelligence to the paradigm of situated, social and collective intelligence. The new paradigm of the so called intelligent or autonomous agents and multi-agent systems (MAS) together with the spectacular emergence of the information society technologies (specially reflected by the popularization of electronic commerce) are responsible for the increasing interest on trust and reputation mechanisms applied to electronic societies. This review wants to offer a panoramic view on current computational trust and reputation models.", "Explaining decisions of deep neural networks is a hot research topic with applications in medical imaging, video surveillance, and self driving cars. Many methods have been proposed in literature to explain these decisions by identifying relevance of different pixels. In this paper, we propose a method that can generate contrastive explanations for such data where we not only highlight aspects that are in themselves sufficient to justify the classification by the deep model, but also new aspects which if added will change the classification. One of our key contributions is how we define \"addition\" for such rich data in a formal yet humanly interpretable way that leads to meaningful results. This was one of the open questions laid out in Dhurandhar this http URL. (2018) [5], which proposed a general framework for creating (local) contrastive explanations for deep models. We showcase the efficacy of our approach on CelebA and Fashion-MNIST in creating intuitive explanations that are also quantitatively superior compared with other state-of-the-art interpretability methods.", "Model interpretability is a requirement in many applications in which crucial decisions are made by users relying on a model's outputs. The recent movement for “algorithmic fairness” also stipulates explainability, and therefore interpretability of learning models. And yet the most successful contemporary Machine Learning approaches, the Deep Neural Networks, produce models that are highly non-interpretable. 
We attempt to address this challenge by proposing a technique called CNN-INTE to interpret deep Convolutional Neural Networks (CNN) via meta-learning. In this work, we interpret a specific hidden layer of the deep CNN model on the MNIST image dataset. We use a clustering algorithm in a two-level structure to find the meta-level training data and Random Forest as base learning algorithms to generate the meta-level test data. The interpretation results are displayed visually via diagrams, which clearly indicates how a specific test instance is classified. Our method achieves global interpretation for all the test instances on the hidden layers without sacrificing the accuracy obtained by the original deep CNN model. This means our model is faithful to the original deep CNN model, which leads to reliable interpretations.", "A rich line of research attempts to make deep neural networks more transparent by generating human-interpretable 'explanations' of their decision process, especially for interactive tasks like Visual Question Answering (VQA). In this work, we analyze if existing explanations indeed make a VQA model -- its responses as well as failures -- more predictable to a human. Surprisingly, we find that they do not. On the other hand, we find that human-in-the-loop approaches that treat the model as a black-box do.", "Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, feature engineering, in order to trust and act upon the predictions, and in more intuitive user interfaces. Thus, interpretability has become a vital concern in machine learning, and work in the area of interpretable models has found renewed interest. In some applications, such models are as accurate as non-interpretable ones, and thus are preferred for their transparency. Even when they are not accurate, they may still be preferred when interpretability is of paramount importance. However, restricting machine learning to interpretable models is often a severe limitation. In this paper we argue for explaining machine learning predictions using model-agnostic approaches. By treating the machine learning models as black-box functions, these approaches provide crucial flexibility in the choice of models, explanations, and representations, improving debugging, comparison, and interfaces for a variety of users and models. We also outline the main challenges for such methods, and review a recently-introduced model-agnostic explanation approach (LIME) that addresses these challenges." ] }
1906.00230
2947920725
This paper is concerned with the robustness of VAEs to adversarial attacks. We highlight that conventional VAEs are brittle under attack but that methods recently introduced for disentanglement such as @math -TCVAE (, 2018) improve robustness, as demonstrated through a variety of previously proposed adversarial attacks ( (2016); Gondim- (2018); (2018)). This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high quality reconstructions.
In a @math -VAE @cite_14 , a free parameter @math multiplies the @math term in @math above. This objective @math remains a lower bound on the evidence.
{ "cite_N": [ "@cite_14" ], "mid": [ "1964207488" ], "abstract": [ "We investigate nonparametric multiproduct pricing problems, in which we want to find revenue maximizing prices for products @math based on a set of customer samples @math . We mostly focus on the unit-demand case, in which products constitute strict substitutes and each customer aims to purchase a single product. In this setting a customer sample consists of a number of nonzero values for different products and possibly an additional product ranking. Once prices are fixed, each customer chooses to buy one of the products she can afford based on some predefined selection rule. We distinguish between the min-buying, max-buying, and rank-buying models. Some of our results also extend to single-minded pricing, in which case products are strict complements and every customer seeks to buy a single set of products, which she purchases if the sum of prices is below her valuation for that set. For the min-buying model we show that the revenue maximization problem is not approximable within factor @math for some constant @math , unless @math , thereby almost closing the gap between the known algorithmic results and previous lower bounds. We also prove inapproximability within @math , @math being an upper bound on the number of nonzero values per customer, and @math under slightly stronger assumptions and provide matching upper bounds. Surprisingly, these hardness results hold even if a price ladder constraint, i.e., a predefined order on the prices of all products, is given. Without the price ladder constraint we obtain similar hardness results for the special case of uniform valuations, i.e., the case that every customer has identical values for all the products she is interested in, assuming specific hardness of the balanced bipartite independent set problem in constant degree graphs or hardness of refuting random 3CNF formulas. Introducing a slightly more general problem definition in which customers are given as an explicit probability distribution, we obtain inapproximability within @math assuming @math . These results apply to single-minded pricing as well. For the max-buying model a polynomial-time approximation scheme exists if a price ladder is given. We give a matching lower bound by proving strong NP-hardness. Assuming limited product supply, we analyze a generic local search algorithm and prove that it is 2-approximate. Finally, we discuss implications for the rank-buying model." ] }
1906.00230
2947920725
This paper is concerned with the robustness of VAEs to adversarial attacks. We highlight that conventional VAEs are brittle under attack but that methods recently introduced for disentanglement such as @math -TCVAE (, 2018) improve robustness, as demonstrated through a variety of previously proposed adversarial attacks ( (2016); Gondim- (2018); (2018)). This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high quality reconstructions.
Term A is the total correlation (TC) for @math , a generalisation of mutual information to multiple variables @cite_25 . With this mean-field @math , Factor- and @math -TCVAEs upweight this term, so we have an objective @math . @cite_28 gives a differentiable, stochastic approximation to @math , rendering this decomposition simple to use as a training objective with stochastic gradient descent. We also note that term A, the total correlation, is the objective in Independent Component Analysis (ICA) @cite_11 @cite_13 .
{ "cite_N": [ "@cite_28", "@cite_13", "@cite_25", "@cite_11" ], "mid": [ "2787273002", "2964127395", "2583932153", "2950162031" ], "abstract": [ "We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate our @math -TCVAE (Total Correlation Variational Autoencoder), a refinement of the state-of-the-art @math -VAE objective for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement, when the latent variables model is trained using our framework.", "We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate the beta-TCVAE (Total Correlation Variational Autoencoder) algorithm, a refinement and plug-in replacement of the beta-VAE for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement, when the model is trained using our framework.", "We study a generalization of Wyner's Common Information toWatanabe's Total Correlation. The first minimizes the description size required for a variable that can make two other random variables conditionally independent. If independence is unattainable, Watanabe's total (conditional) correlation is measure to check just how independent they have become. Following up on earlier work for scalar Gaussians, we discuss the minimization of total correlation for Gaussian vector sources. Using Gaussian auxiliaries, we show one should transform two vectors of length d into d independent pairs, after which a reverse water filling procedure distributes the minimization over all these pairs. Lastly, we show how this minimization of total conditional correlation fits a lossy coding problem by using the Gray-Wyner network as a model for a caching problem.", "We investigate the clustering performances of the relaxed @math means in the setting of sub-Gaussian Mixture Model (sGMM) and Stochastic Block Model (SBM). After identifying the appropriate signal-to-noise ratio (SNR), we prove that the misclassification error decay exponentially fast with respect to this SNR. These partial recovery bounds for the relaxed @math means improve upon results currently known in the sGMM setting. In the SBM setting, applying the relaxed @math means SDP allows to handle general connection probabilities whereas other SDPs investigated in the literature are restricted to the assortative case (where within group probabilities are larger than between group probabilities). Again, this partial recovery bound complements the state-of-the-art results. All together, these results put forward the versatility of the relaxed @math means." ] }
1906.00230
2947920725
This paper is concerned with the robustness of VAEs to adversarial attacks. We highlight that conventional VAEs are brittle under attack but that methods recently introduced for disentanglement such as @math -TCVAE (, 2018) improve robustness, as demonstrated through a variety of previously proposed adversarial attacks ( (2016); Gondim- (2018); (2018)). This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high quality reconstructions.
We now have a set of @math layers of @math variables: @math . The evidence lower bound for models of this form is: The simplest VAE with a hierarchy of conditional stochastic variables in the generative model is the Deep Latent Gaussian Model @cite_5 . The forward model factorises as a chain: Each @math is a Gaussian distribution with mean and variance parameterised by deep nets. @math is a unit isotropic Gaussian.
{ "cite_N": [ "@cite_5" ], "mid": [ "2542071751" ], "abstract": [ "Motivated by online recommendation and advertising systems, we consider a causal model for stochastic contextual bandits with a latent low-dimensional confounder. In our model, there are @math observed contexts and @math arms of the bandit. The observed context influences the reward obtained through a latent confounder variable with cardinality @math ( @math ). The arm choice and the latent confounder causally determines the reward while the observed context is correlated with the confounder. Under this model, the @math mean reward matrix @math (for each context in @math and each arm in @math ) factorizes into non-negative factors @math ( @math ) and @math ( @math ). This insight enables us to propose an @math -greedy NMF-Bandit algorithm that designs a sequence of interventions (selecting specific arms), that achieves a balance between learning this low-dimensional structure and selecting the best arm to minimize regret. Our algorithm achieves a regret of @math at time @math , as compared to @math for conventional contextual bandits, assuming a constant gap between the best arm and the rest for each context. These guarantees are obtained under mild sufficiency conditions on the factors that are weaker versions of the well-known Statistical RIP condition. We further propose a class of generative models that satisfy our sufficient conditions, and derive a lower bound of @math . These are the first regret guarantees for online matrix completion with bandit feedback, when the rank is greater than one. We further compare the performance of our algorithm with the state of the art, on synthetic and real world data-sets." ] }
1906.00230
2947920725
This paper is concerned with the robustness of VAEs to adversarial attacks. We highlight that conventional VAEs are brittle under attack but that methods recently introduced for disentanglement such as @math -TCVAE (, 2018) improve robustness, as demonstrated through a variety of previously proposed adversarial attacks ( (2016); Gondim- (2018); (2018)). This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high quality reconstructions.
To perform amortised variational inference one introduces a recognition network, which can be any directed acyclic graph in which the distribution over each @math is Gaussian conditioned on its parents. This could be a chain, as in @cite_5 : again, marginalising out the intermediate @math layers, we see @math ; this marginal @math is a non-Gaussian, highly flexible distribution.
{ "cite_N": [ "@cite_5" ], "mid": [ "2949767141" ], "abstract": [ "Representations offered by deep generative models are fundamentally tied to their inference method from data. Variational inference methods require a rich family of approximating distributions. We construct the variational Gaussian process (VGP), a Bayesian nonparametric model which adapts its shape to match complex posterior distributions. The VGP generates approximate posterior samples by generating latent inputs and warping them through random non-linear mappings; the distribution over random mappings is learned during inference, enabling the transformed outputs to adapt to varying complexity. We prove a universal approximation theorem for the VGP, demonstrating its representative power for learning any model. For inference we present a variational objective inspired by autoencoders and perform black box inference over a wide class of models. The VGP achieves new state-of-the-art results for unsupervised learning, inferring models such as the deep latent Gaussian model and the recently proposed DRAW." ] }
1906.00230
2947920725
This paper is concerned with the robustness of VAEs to adversarial attacks. We highlight that conventional VAEs are brittle under attack but that methods recently introduced for disentanglement such as @math -TCVAE (, 2018) improve robustness, as demonstrated through a variety of previously proposed adversarial attacks ( (2016); Gondim- (2018); (2018)). This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high quality reconstructions.
This is the hierarchical version of the collapse of @math units in a single-layer VAE @cite_17 , but now the collapse is over entire layers @math . It is part of the motivation for the Ladder VAE @cite_1 and BIVA @cite_35 .
{ "cite_N": [ "@cite_35", "@cite_1", "@cite_17" ], "mid": [ "1995406764", "2915002151", "2038198231" ], "abstract": [ "For @math -dimensional tensors with possibly large @math , an hierarchical data structure, called the Tree-Tucker format, is presented as an alternative to the canonical decomposition. It has asymptotically the same (and often even smaller) number of representation parameters and viable stability properties. The approach involves a recursive construction described by a tree with the leafs corresponding to the Tucker decompositions of three-dimensional tensors, and is based on a sequence of SVDs for the recursively obtained unfolding matrices and on the auxiliary dimensions added to the initial “spatial” dimensions. It is shown how this format can be applied to the problem of multidimensional convolution. Convincing numerical examples are given.", "Variational autoencoders (VAEs) have received much attention recently as an end-to-end architecture for text generation with latent variables. In this paper, we investigate several multi-level structures to learn a VAE model to generate long, and coherent text. In particular, we use a hierarchy of stochastic layers between the encoder and decoder networks to generate more informative latent codes. We also investigate a multi-level decoder structure to learn a coherent long-term structure by generating intermediate sentence representations as high-level plan vectors. Empirical results demonstrate that a multi-level VAE model produces more coherent and less repetitive long text compared to the standard VAE models and can further mitigate the posterior-collapse issue.", "We define the hierarchical singular value decomposition (SVD) for tensors of order @math . This hierarchical SVD has properties like the matrix SVD (and collapses to the SVD in @math ), and we prove these. In particular, one can find low rank (almost) best approximations in a hierarchical format ( @math -Tucker) which requires only @math parameters, where @math is the order of the tensor, @math the size of the modes, and @math the (hierarchical) rank. The @math -Tucker format is a specialization of the Tucker format and it contains as a special case all (canonical) rank @math tensors. Based on this new concept of a hierarchical SVD we present algorithms for hierarchical tensor calculations allowing for a rigorous error analysis. The complexity of the truncation (finding lower rank approximations to hierarchical rank @math tensors) is in @math and the attainable accuracy is just 2-3 digits less than machine precision." ] }
cs0112007
2949074981
In the context of mining for frequent patterns using the standard levelwise algorithm, the following question arises: given the current level and the current set of frequent patterns, what is the maximal number of candidate patterns that can be generated on the next level? We answer this question by providing a tight upper bound, derived from a combinatorial result from the sixties by Kruskal and Katona. Our result is useful to reduce the number of database scans.
One of the first optimizations was the DHP algorithm proposed by @cite_25 . This algorithm uses a hashing scheme to collect upper bounds on the frequencies of the candidate patterns for the following pass. Patterns that are already known to turn out infrequent can then be eliminated from further consideration. The effectiveness of this technique shows only in the first few passes. Since our upper bound can be used to eliminate passes at the end, both techniques can be combined in the same algorithm.
{ "cite_N": [ "@cite_25" ], "mid": [ "2103442564" ], "abstract": [ "We propose a novel unsupervised method for discovering recurring patterns from a single view. A key contribution of our approach is the formulation and validation of a joint assignment optimization problem where multiple visual words and object instances of a potential recurring pattern are considered simultaneously. The optimization is achieved by a greedy randomized adaptive search procedure (GRASP) with moves specifically designed for fast convergence. We have quantified systematically the performance of our approach under stressed conditions of the input (missing features, geometric distortions). We demonstrate that our proposed algorithm outperforms state of the art methods for recurring pattern discovery on a diverse set of 400+ real world and synthesized test images." ] }
cs0112007
2949074981
In the context of mining for frequent patterns using the standard levelwise algorithm, the following question arises: given the current level and the current set of frequent patterns, what is the maximal number of candidate patterns that can be generated on the next level? We answer this question by providing a tight upper bound, derived from a combinatorial result from the sixties by Kruskal and Katona. Our result is useful to reduce the number of database scans.
The sampling algorithm proposed by Toivonen @cite_3 performs at most two scans through the database by picking a random sample from the database, then finding all frequent patterns that probably hold in the whole database, and then verifying the results with the rest of the database. In the cases where the sampling method does not produce all frequent patterns, the missing patterns can be found by generating all remaining potentially frequent patterns and verifying their frequencies during a second pass through the database. The probability of such a failure can be kept small by decreasing the minimal support threshold. However, for a reasonably small probability of failure, the threshold must be drastically decreased, which can again cause a combinatorial explosion of the number of candidate patterns.
{ "cite_N": [ "@cite_3" ], "mid": [ "2122735658" ], "abstract": [ "A constructive solution of the irregular sampling problem for band- limited functions is given. We show how a band-limited function can be com- pletely reconstructed from any random sampling set whose density is higher than the Nyquist rate, and give precise estimates for the speed of convergence of this iteration method. Variations of this algorithm allow for irregular sampling with derivatives, reconstruction of band-limited functions from local averages, and irregular sampling of multivariate band-limited functions. In the irregular sampling problem one is asked whether and how a band- limited function can be completely reconstructed from its irregularly sam- pled values f(xi). This has many applications in signal and image processing, seismology, meteorology, medical imaging, etc. Finding constructive solutions of this problem has received considerable attention among mathematicians and engineers. The mathematical literature provides several uniqueness results (1, 2, 17, 18, 19). It is now part of the folklore that for stable sampling the sampling rate must be at least the Nyquist rate (18). These results, as deep as they are, have had little impact for the applied sciences, because they were not constructive. If the sampling set is just a perturbation of the regular oversampling, then a reconstruction method has been obtained in a seminal paper by Duffin and Schaeffer (6) (see also (29)): if for some L > 0, a > 0, and o > 0 the sampling points xk , k e Z , satisfy (a) - ok a, k ^ I, then the norm equivalence A iR (x) ) with w < n o. This norm equivalence implies that it is possible to reconstruct through an iterative procedure, the so-called frame method. Most of the later work on constructive methods consists of variations of this method (3, 21, 22, 26). The above conditions on the sampling set exclude random irregular sampling sets, e.g., sets with regions of higher sampling density. A partial, but undesirable remedy, to handle highly irregular sampling sets, would be to force the above conditions by throwing away information on part of the points and accept a very slow convergence of the iteration." ] }
cs0112007
2949074981
In the context of mining for frequent patterns using the standard levelwise algorithm, the following question arises: given the current level and the current set of frequent patterns, what is the maximal number of candidate patterns that can be generated on the next level? We answer this question by providing a tight upper bound, derived from a combinatorial result from the sixties by Kruskal and Katona. Our result is useful to reduce the number of database scans.
The DIC algorithm, proposed by @cite_12 , tries to reduce the number of passes over the database by dividing the database into intervals of a specific size. First, all candidate patterns of size @math are generated. The frequencies of the candidate sets are then counted over the first interval of the database. Based on these frequencies, candidate patterns of size @math are generated and are counted over the next interval together with the patterns of size @math . In general, after every interval @math , candidate patterns of size @math are generated and counted. The algorithm stops if no more candidates can be generated. Again, this technique can be combined with our technique in the same algorithm.
{ "cite_N": [ "@cite_12" ], "mid": [ "2787728049" ], "abstract": [ "The well-known @math -disjoint path problem ( @math -DPP) asks for pairwise vertex-disjoint paths between @math specified pairs of vertices @math in a given graph, if they exist. The decision version of the shortest @math -DPP asks for the length of the shortest (in terms of total length) such paths. Similarly the search and counting versions ask for one such and the number of such shortest set of paths, respectively. We restrict attention to the shortest @math -DPP instances on undirected planar graphs where all sources and sinks lie on a single face or on a pair of faces. We provide efficient sequential and parallel algorithms for the search versions of the problem answering one of the main open questions raised by Colin de Verdiere and Schrijver for the general one-face problem. We do so by providing a randomised @math algorithm along with an @math time randomised sequential algorithm. We also obtain deterministic algorithms with similar resource bounds for the counting and search versions. In contrast, previously, only the sequential complexity of decision and search versions of the \"well-ordered\" case has been studied. For the one-face case, sequential versions of our routines have better running times for constantly many terminals. In addition, the earlier best known sequential algorithms (e.g. ) were randomised while ours are also deterministic. The algorithms are based on a bijection between a shortest @math -tuple of disjoint paths in the given graph and cycle covers in a related digraph. This allows us to non-trivially modify established techniques relating counting cycle covers to the determinant. We further need to do a controlled inclusion-exclusion to produce a polynomial sum of determinants such that all \"bad\" cycle covers cancel out in the sum allowing us to count \"good\" cycle covers." ] }
cs0112007
2949074981
In the context of mining for frequent patterns using the standard levelwise algorithm, the following question arises: given the current level and the current set of frequent patterns, what is the maximal number of candidate patterns that can be generated on the next level? We answer this question by providing a tight upper bound, derived from a combinatorial result from the sixties by Kruskal and Katona. Our result is useful to reduce the number of database scans.
Also, current state-of-the-art algorithms for frequent itemset mining, such as Opportunistic Project @cite_6 and DCI @cite_14 , use several techniques within the same algorithm and switch between these techniques using several simple, but not watertight, heuristics.
{ "cite_N": [ "@cite_14", "@cite_6" ], "mid": [ "2151953639", "2048105574" ], "abstract": [ "Efficient algorithms for mining frequent itemsets are crucial for mining association rules as well as for many other data mining tasks. Methods for mining frequent itemsets have been implemented using a prefix-tree structure, known as an FP-tree, for storing compressed information about frequent itemsets. Numerous experimental results have demonstrated that these algorithms perform extremely well. In this paper, we present a novel FP-array technique that greatly reduces the need to traverse FP-trees, thus obtaining significantly improved performance for FP-tree-based algorithms. Our technique works especially well for sparse data sets. Furthermore, we present new algorithms for mining all, maximal, and closed frequent itemsets. Our algorithms use the FP-tree data structure in combination with the FP-array technique efficiently and incorporate various optimization techniques. We also present experimental results comparing our methods with existing algorithms. The results show that our methods are the fastest for many cases. Even though the algorithms consume much memory when the data sets are sparse, they are still the fastest ones when the minimum support is low. Moreover, they are always among the fastest algorithms and consume less memory than other methods when the data sets are dense.", "Mining frequent itemsets from transactional data streams is challenging due to the nature of the exponential explosion of itemsets and the limit memory space required for mining frequent itemsets. Given a domain of I unique items, the possible number of itemsets can be up to 2^I-1. When the length of data streams approaches to a very large number N, the possibility of an itemset to be frequent becomes larger and difficult to track with limited memory. The existing studies on finding frequent items from high speed data streams are false-positive oriented. That is, they control memory consumption in the counting processes by an error parameter @e, and allow items with support below the specified minimum support s but above [email protected] counted as frequent ones. However, such false-positive oriented approaches cannot be effectively applied to frequent itemsets mining for two reasons. First, false-positive items found increase the number of false-positive frequent itemsets exponentially. Second, minimization of the number of false-positive items found, by using a small @e, will make memory consumption large. Therefore, such approaches may make the problem computationally intractable with bounded memory consumption. In this paper, we developed algorithms that can effectively mine frequent item(set)s from high speed transactional data streams with a bound of memory consumption. Our algorithms are based on Chernoff bound in which we use a running error parameter to prune item(set)s and use a reliability parameter to control memory. While our algorithms are false-negative oriented, that is, certain frequent itemsets may not appear in the results, the number of false-negative itemsets can be controlled by a predefined parameter so that desired recall rate of frequent itemsets can be guaranteed. Our extensive experimental studies show that the proposed algorithms have high accuracy, require less memory, and consume less CPU time. They significantly outperform the existing false-positive algorithms." ] }
cs0112011
1632815424
We investigate ways to support interactive mining sessions, in the setting of association rule mining. In such sessions, users specify conditions (queries) on the associations to be generated. Our approach is a combination of the integration of querying conditions inside the mining phase, and the incremental querying of already generated associations. We present several concrete algorithms and compare their performance.
The idea that queries can be integrated into the mining algorithm was first put forward by Srikant, Vu, and Agrawal @cite_24 , who considered queries that are Boolean expressions over the presence or absence of certain items in the rules. Queries that specifically constrain bodies or heads were not discussed. The authors considered three different approaches to the problem. The proposed algorithms are not optimal: they generate and test several itemsets that do not satisfy the query, and their optimizations also do not always become more efficient for more specific queries.
{ "cite_N": [ "@cite_24" ], "mid": [ "2151502664" ], "abstract": [ "Recent advances in information extraction have led to huge knowledge bases (KBs), which capture knowledge in a machine-readable format. Inductive Logic Programming (ILP) can be used to mine logical rules from the KB. These rules can help deduce and add missing knowledge to the KB. While ILP is a mature field, mining logical rules from KBs is different in two aspects: First, current rule mining systems are easily overwhelmed by the amount of data (state-of-the art systems cannot even run on today's KBs). Second, ILP usually requires counterexamples. KBs, however, implement the open world assumption (OWA), meaning that absent data cannot be used as counterexamples. In this paper, we develop a rule mining model that is explicitly tailored to support the OWA scenario. It is inspired by association rule mining and introduces a novel measure for confidence. Our extensive experiments show that our approach outperforms state-of-the-art approaches in terms of precision and coverage. Furthermore, our system, AMIE, mines rules orders of magnitude faster than state-of-the-art approaches." ] }
cs0112011
1632815424
We investigate ways to support interactive mining sessions, in the setting of association rule mining. In such sessions, users specify conditions (queries) on the associations to be generated. Our approach is a combination of the integration of querying conditions inside the mining phase, and the incremental querying of already generated associations. We present several concrete algorithms and compare their performance.
Also Lakshmanan, Ng, Han and Pang worked on the integration of constraints on itemsets in mining, considering conjunctions of conditions on itemsets such as those considered here, as well as others (arbitrary Boolean combinations were not discussed) @cite_9 @cite_7 . Of the various strategies for the so-called "CAP" algorithm they present, the one that can handle the queries considered in the present paper is their "strategy II". Again, this strategy generates and tests itemsets that do not satisfy the query. Also, their algorithms implement a rule-query by separately mining for possible heads and for possible bodies, while we tightly couple the querying of rules with the querying of sets. This work has also been further studied by Pei, Han and Lakshmanan @cite_23 @cite_2 , and employed within the FP-growth algorithm.
{ "cite_N": [ "@cite_9", "@cite_23", "@cite_7", "@cite_2" ], "mid": [ "2030969394", "2125714474", "2200550062", "2148693963" ], "abstract": [ "In this paper, we examine the issue of mining association rules among items in a large database of sales transactions. The mining of association rules can be mapped into the problem of discovering large itemsets where a large itemset is a group of items which appear in a sufficient number of transactions. The problem of discovering large itemsets can be solved by constructing a candidate set of itemsets first and then, identifying, within this candidate set, those itemsets that meet the large itemset requirement. Generally this is done iteratively for each large k-itemset in increasing order of k where a large k-itemset is a large itemset with k items. To determine large itemsets from a huge number of candidate large itemsets in early iterations is usually the dominating factor for the overall data mining performance. To address this issue, we propose an effective hash-based algorithm for the candidate set generation. Explicitly, the number of candidate 2-itemsets generated by the proposed algorithm is, in orders of magnitude, smaller than that by previous methods, thus resolving the performance bottleneck. Note that the generation of smaller candidate sets enables us to effectively trim the transaction database size at a much earlier stage of the iterations, thereby reducing the computational cost for later iterations significantly. Extensive simulation study is conducted to evaluate performance of the proposed algorithm.", "Currently, there is tremendous interest in providing ad-hoc mining capabilities in database management systems. As a first step towards this goal, in [15] we proposed an architecture for supporting constraint-based, human-centered, exploratory mining of various kinds of rules including associations, introduced the notion of constrained frequent set queries (CFQs), and developed effective pruning optimizations for CFQs with 1-variable (1-var) constraints. While 1-var constraints are useful for constraining the antecedent and consequent separately, many natural examples of CFQs illustrate the need for constraining the antecedent and consequent jointly, for which 2-variable (2-var) constraints are indispensable. Developing pruning optimizations for CFQs with 2-var constraints is the subject of this paper. But this is a difficult problem because: (i) in 2-var constraints, both variables keep changing and, unlike 1-var constraints, there is no fixed target for pruning; (ii) as we show, “conventional” monotonicity-based optimization techniques do not apply effectively to 2-var constraints. The contributions are as follows. (1) We introduce a notion of quasi-succinctness , which allows a quasi-succinct 2-var constraint to be reduced to two succinct 1-var constraints for pruning. (2) We characterize the class of 2-var constraints that are quasi-succinct. (3) We develop heuristic techniques for non-quasi-succinct constraints. Experimental results show the effectiveness of all our techniques. (4) We propose a query optimizer for CFQs and show that for a large class of constraints, the computation strategy generated by the optimizer is ccc-optimal , i.e., minimizing the effort incurred w.r.t. constraint checking and support counting.", "There has been an increased growth in a number of applications that naturally generate large volumes of uncertain data. 
With the advent of such applications, the support of advanced analysis queries such as the skyline and its variant operators for big uncertain data has become important. In this paper, we propose effective parallel algorithms using MapReduce to process the probabilistic skyline queries for uncertain data modeled by both discrete and continuous models. We present three filtering methods to identify probabilistic non-skyline objects in advance. We next develop a single MapReduce phase algorithm PS-QP-MR by utilizing space partitioning based on a variant of quadtrees to distribute the instances of objects effectively and the enhanced algorithm PS-QPF-MR by applying the three filtering methods additionally. We also propose the workload balancing technique to balance the workload of reduce functions based on the number of machines available. Finally, we present the brute-force algorithms PS-BR-MR and PS-BRF-MR with partitioning randomly and applying the filtering methods. In our experiments, we demonstrate the efficiency and scalability of PS-QPF-MR compared to the other algorithms.", "Recent work has highlighted the importance of the constraint-based mining paradigm in the context of frequent itemsets, associations, correlations, sequential patterns, and many other interesting patterns in large databases. The authors study constraints which cannot be handled with existing theory and techniques. For example, avg(S) θ v , median(S) θ v , sum(S) θ v (S can contain items of arbitrary values) (θ ∈ { ≥ , ≤ }), are customarily regarded as \"tough\" constraints in that they cannot be pushed inside an algorithm such as Apriori. We develop a notion of convertible constraints and systematically analyze, classify, and characterize this class. We also develop techniques which enable them to be readily pushed deep inside the recently developed FP-growth algorithm for frequent itemset mining. Results from our detailed experiments show the effectiveness of the techniques developed." ] }
cs0202008
2952049303
Recently the problem of indexing and locating content in peer-to-peer networks has received much attention. Previous work suggests caching index entries at intermediate nodes that lie on the paths taken by search queries, but until now there has been little focus on how to maintain these intermediate caches. This paper proposes CUP, a new comprehensive architecture for Controlled Update Propagation in peer-to-peer networks. CUP asynchronously builds caches of index entries while answering search queries. It then propagates updates of index entries to maintain these caches. Under unfavorable conditions, when compared with standard caching based on expiration times, CUP reduces the average miss latency by as much as a factor of three. Under favorable conditions, CUP can reduce the average miss latency by more than a factor of ten. CUP refreshes intermediate caches, reduces query latency, and reduces network load by coalescing bursts of queries for the same item. CUP controls and confines propagation to updates whose cost is likely to be recovered by subsequent queries. CUP gives peer-to-peer nodes the flexibility to use their own incentive-based policies to determine when to receive and when to propagate updates. Finally, the small propagation overhead incurred by CUP is more than compensated for by its savings in cache misses.
Chord @cite_0 and CFS @cite_7 suggest alternatives to making the query response travel down the reverse query path back to the query issuer. Chord suggests iterative searches where the query issuer contacts each node on the query path one-by-one for the item of interest until one of the nodes is found to have the item. CFS suggests that the query be forwarded from node to node until a node is found to have the item. This node then directly sends the query response back to the issuer. Both of these approaches help avoid some of the long latencies that may occur as the query response traverses the reverse query path. CUP is advantageous regardless of whether the response is delivered directly to the issuer or through the reverse query path. However, to make this work for direct response delivery, CUP must not coalesce queries for the same item at a node into one query since each query would need to explicitly carry the return address information of the query issuer.
{ "cite_N": [ "@cite_0", "@cite_7" ], "mid": [ "2047977378", "2048653843" ], "abstract": [ "While sensor networks are going to be deployed in diverse application specific contexts, one unifying view is to treat them essentially as distributed databases. The simplest mechanism to obtain information from this kind of a database is to flood queries for named data within the network and obtain the relevant responses from sources. However, if the queries are (a) complex, (b) one-shot, and (c) for replicated data, this simple approach can be highly inefficient. In the context of energy-starved sensor networks, alternative strategies need to be examined for such queries. We propose a novel and efficient mechanism for obtaining information in sensor networks which we refer to as ACtive QUery forwarding In sensoR nEtworks (ACQUIRE). The basic principle behind ACQUIRE is to consider the query as an active entity that is forwarded through the network (either randomly or in some directed manner) in search of the solution. ACQUIRE also incorporates a look-ahead parameter d in the following manner: intermediate nodes that handle the active query use information from all nodes within d hops in order to partially resolve the query. When the active query is fully resolved, a completed response is sent directly back to the querying node. We take a mathematical modelling approach in this paper to calculate the energy costs associated with ACQUIRE. The models permit us to characterize analytically the impact of critical parameters, and compare the performance of ACQUIRE with respect to other schemes such as flooding-based querying (FBQ) and expanding ring search (ERS), in terms of energy usage, response latency and storage requirements. We show that with optimal parameter settings, depending on the update frequency, ACQUIRE obtains order of magnitude reduction over FBQ and potentially over 60–75 reduction over ERS (in highly dynamic environments and high query rates) in consumed energy. We show that these energy savings are provided in trade for increased response latency. The mathematical analysis is validated through extensive simulations. � 2003 Elsevier B.V. All rights reserved.", "We focus on large graphs where nodes have attributes, such as a social network where the nodes are labelled with each person's job title. In such a setting, we want to find subgraphs that match a user query pattern. For example, a \"star\" query would be, \"find a CEO who has strong interactions with a Manager, a Lawyer,and an Accountant, or another structure as close to that as possible\". Similarly, a \"loop\" query could help spot a money laundering ring. Traditional SQL-based methods, as well as more recent graph indexing methods, will return no answer when an exact match does not exist. This is the first main feature of our method. It can find exact-, as well as near-matches, and it will present them to the user in our proposed \"goodness\" order. For example, our method tolerates indirect paths between, say, the \"CEO\" and the \"Accountant\" of the above sample query, when direct paths don't exist. Its second feature is scalability. In general, if the query has nq nodes and the data graph has n nodes, the problem needs polynomial time complexity O(n n q), which is prohibitive. Our G-Ray (\"Graph X-Ray\") method finds high-quality subgraphs in time linear on the size of the data graph. 
Experimental results on the DBLP author-publication graph (with 356K nodes and 1.9M edges) illustrate both the effectiveness and scalability of our approach. The results agree with our intuition, and the speed is excellent. It takes 4 seconds on average for a 4-node query on the DBLP graph." ] }
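The two response-delivery styles contrasted above can be sketched on a toy overlay in which each node knows a next hop toward a key. The `route` and `store` tables below stand in for the real routing state of Chord or CFS; nothing here is either system's actual protocol, just an illustration of iterative versus forwarded lookups.

```python
# Toy illustration of iterative vs. forwarded (recursive) lookups; not Chord/CFS itself.
def iterative_lookup(issuer, key, route, store):
    """Chord-style: the issuer contacts each hop itself until one holds the item."""
    node = issuer
    while True:
        if key in store.get(node, {}):
            return store[node][key]      # issuer talks to this node directly
        node = route[node][key]          # issuer asks this node only for the next hop

def forwarded_lookup(issuer, key, route, store):
    """CFS-style: the query is forwarded node to node; the node holding the item
    replies straight back to the issuer (modelled here as the return value)."""
    def forward(node):
        if key in store.get(node, {}):
            return store[node][key]
        return forward(route[node][key])
    return forward(issuer)

store = {"C": {"song.mp3": "bytes..."}}
route = {"A": {"song.mp3": "B"}, "B": {"song.mp3": "C"}}
print(iterative_lookup("A", "song.mp3", route, store))
print(forwarded_lookup("A", "song.mp3", route, store))
```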
cs0202008
2952049303
Recently the problem of indexing and locating content in peer-to-peer networks has received much attention. Previous work suggests caching index entries at intermediate nodes that lie on the paths taken by search queries, but until now there has been little focus on how to maintain these intermediate caches. This paper proposes CUP, a new comprehensive architecture for Controlled Update Propagation in peer-to-peer networks. CUP asynchronously builds caches of index entries while answering search queries. It then propagates updates of index entries to maintain these caches. Under unfavorable conditions, when compared with standard caching based on expiration times, CUP reduces the average miss latency by as much as a factor of three. Under favorable conditions, CUP can reduce the average miss latency by more than a factor of ten. CUP refreshes intermediate caches, reduces query latency, and reduces network load by coalescing bursts of queries for the same item. CUP controls and confines propagation to updates whose cost is likely to be recovered by subsequent queries. CUP gives peer-to-peer nodes the flexibility to use their own incentive-based policies to determine when to receive and when to propagate updates. Finally, the small propagation overhead incurred by CUP is more than compensated for by its savings in cache misses.
Consistent hashing work by @cite_3 looks at relieving hot spots at origin web servers by caching at intermediate caches between client caches and origin servers. Requests for items originate at the leaf clients of a conceptual tree and travel up through intermediate caches toward the origin server at the root of the tree. This work uses a model slightly different from the peer-to-peer model. Their model and analysis assume requests are made only at leaf clients and that intermediate caches do not store an item until it has been requested some threshold number of times. Also, this work does not focus on maintaining cache freshness.
{ "cite_N": [ "@cite_3" ], "mid": [ "2157170064" ], "abstract": [ "The Web is a distributed system, where data is stored and disseminated from both origin servers and caches. Origin servers provide the most up-to-date copy whereas caches store and serve copies that had been cached for a while. Origin servers do not maintain per-client state, and weak-consistency of cached copies is maintained by the origin server attaching to each copy an expiration time. Typically, the lifetime-duration of an object is fixed, and as a result, a copy fetched directly from its origin server has maximum time-to-live (TTL) whereas a copy obtained through a cache has a shorter TTL since its age (elapsed time since fetched from the origin) is deducted from its lifetime duration. Thus, a cache that is served from a cache would incur a higher miss-rate than a cache served from origin servers. Similarly, a high-level cache would receive more requests from the same client population than an origin server would have received. As Web caches are often served from other caches (e.g., proxy and reverse-proxy caches), age emerges as a performance factor. Guided by a formal model and analysis, we use different inter-request time distributions and trace-based simulations to explore the effect of age for different cache settings and configurations. We also evaluate the effectiveness of frequent pre-term refreshes by higher-level caches as a means to decrease client misses. Beyond Web content distribution, our conclusions generally apply to systems of caches deploying expiration-based consistency." ] }
cs0202008
2952049303
Recently the problem of indexing and locating content in peer-to-peer networks has received much attention. Previous work suggests caching index entries at intermediate nodes that lie on the paths taken by search queries, but until now there has been little focus on how to maintain these intermediate caches. This paper proposes CUP, a new comprehensive architecture for Controlled Update Propagation in peer-to-peer networks. CUP asynchronously builds caches of index entries while answering search queries. It then propagates updates of index entries to maintain these caches. Under unfavorable conditions, when compared with standard caching based on expiration times, CUP reduces the average miss latency by as much as a factor of three. Under favorable conditions, CUP can reduce the average miss latency by more than a factor of ten. CUP refreshes intermediate caches, reduces query latency, and reduces network load by coalescing bursts of queries for the same item. CUP controls and confines propagation to updates whose cost is likely to be recovered by subsequent queries. CUP gives peer-to-peer nodes the flexibility to use their own incentive-based policies to determine when to receive and when to propagate updates. Finally, the small propagation overhead incurred by CUP is more than compensated for by its savings in cache misses.
Cohen and Kaplan study the effect that aging through cascaded caches has on the miss rates of web client caches @cite_10 . For each object, an intermediate cache refreshes its copy of the object when its age exceeds a given fraction of the lifetime duration. The intermediate cache does not push this refresh to the client; instead, the client waits until its own copy has expired, at which point it fetches the intermediate cache's copy with the remaining lifetime. For some sequences of requests at the client cache and some values of this fraction, the client cache can suffer from a higher miss rate than if the intermediate cache only refreshed on expiration. Their model assumes zero communication delay. A CUP tree could be viewed as a series of cascaded caches in that each node depends on the previous node in the tree for updates to an index entry. The key difference is that in CUP, refreshes are pushed down the entire tree of interested nodes. Therefore, barring communication delays, whenever a parent cache gets a refresh so does the interested child node. In such situations, the miss rate at the child node actually improves.
{ "cite_N": [ "@cite_10" ], "mid": [ "2157170064" ], "abstract": [ "The Web is a distributed system, where data is stored and disseminated from both origin servers and caches. Origin servers provide the most up-to-date copy whereas caches store and serve copies that had been cached for a while. Origin servers do not maintain per-client state, and weak-consistency of cached copies is maintained by the origin server attaching to each copy an expiration time. Typically, the lifetime-duration of an object is fixed, and as a result, a copy fetched directly from its origin server has maximum time-to-live (TTL) whereas a copy obtained through a cache has a shorter TTL since its age (elapsed time since fetched from the origin) is deducted from its lifetime duration. Thus, a cache that is served from a cache would incur a higher miss-rate than a cache served from origin servers. Similarly, a high-level cache would receive more requests from the same client population than an origin server would have received. As Web caches are often served from other caches (e.g., proxy and reverse-proxy caches), age emerges as a performance factor. Guided by a formal model and analysis, we use different inter-request time distributions and trace-based simulations to explore the effect of age for different cache settings and configurations. We also evaluate the effectiveness of frequent pre-term refreshes by higher-level caches as a means to decrease client misses. Beyond Web content distribution, our conclusions generally apply to systems of caches deploying expiration-based consistency." ] }
cs0204018
2952290399
We study one dimension in program evolution, namely the evolution of the datatype declarations in a program. To this end, a suite of basic transformation operators is designed. We cover structure-preserving refactorings, but also structure-extending and -reducing adaptations. Both the object programs that are subject to datatype transformations, and the meta programs that encode datatype transformations are functional programs.
Formal program transformation @cite_0 separates two concerns: the development of an initial, possibly inefficient program whose correctness can easily be shown, and the stepwise derivation of a better implementation in a semantics-preserving manner. Partsch's textbook @cite_8 describes the formal approach to this kind of software development. Pettorossi and Proietti study typical transformation rules (for functional and logic programs) in @cite_17 . Formal program transformation, in part, also addresses datatype transformation @cite_3 , say data refinement. Here, one gives different axiomatisations or implementations of an abstract datatype, which are then related by well-founded transformation steps. This typically involves some amount of mathematical program calculation. By contrast, we deliberately focus on the more syntactical transformations that a programmer uses anyway to adapt evolving programs.
{ "cite_N": [ "@cite_0", "@cite_17", "@cite_3", "@cite_8" ], "mid": [ "2067776455", "2023299380", "2149237601", "1989310296" ], "abstract": [ "Program transformation is used in a wide range of applications including compiler construction, optimization, program synthesis, refactoring, software renovation, and reverse engineering. Complex program transformations are achieved through a number of consecutive modications of a program. Transformation rules dene basic modications. A transformation strategy is an algorithm for choosing a path in the rewrite relation induced by a set of rules. This paper surveys the support for the denition of strategies in program transformation systems. After a discussion of kinds of program transformation and choices in program representation, the basic elements of a strategy system are discussed and the choices in the design of a strategy language are considered. Several styles of strategy systems as provided in existing languages are then analyzed.", "A system of rules for transforming programs is described, with the programs in the form of recursion equations. An initially very simple, lucid, and hopefully correct program is transformed into a more efficient one by altering the recursion structure. Illustrative examples of program transformations are given, and a tentative implementation is described. Alternative structures for programs are shown, and a possible initial phase for an automatic or semiautomatic program-manipulation system is indicated.", "Software engineers are faced with a dilemma. They want to write general and wellstructured programs that are flexible and easy to maintain. On the other hand, generality has a price: efficiency. A specialized program solving a particular problem is often significantly faster than a general program. However, the development of specialized software is time-consuming, and is likely to exceed the production of today’s programmers. New techniques are required to solve this so-called software crisis. Partial evaluation is a program specialization technique that reconciles the benefits of generality with efficiency. This thesis presents an automatic partial evaluator for the Ansi C programming language. The content of this thesis is analysis and transformation of C programs. We develop several analyses that support the transformation of a program into its generating extension. A generating extension is a program that produces specialized programs when executed on parts of the input. The thesis contains the following main results. • We develop a generating-extension transformation, and describe specialization of the various parts of C, including pointers and structures. • We develop constraint-based inter-procedural pointer and binding-time analysis. Both analyses are specified via non-standard type inference systems, and implemented by constraint solving. • We develop a side-effect and an in-use analysis. These analyses are developed in the classical monotone data-flow analysis framework. Some intriguing similarities with constraint-based analysis are observed. • We investigate separate and incremental program analysis and transformation. Realistic programs are structured into modules, which break down inter-procedural analyses that need global information about functions. • We prove that partial evaluation at most can accomplish linear speedup, and develop an automatic speedup analysis. • We study the stronger transformation technique driving, and initiate the development of generating super-extensions. 
The developments in this thesis are supported by an implementation. Throughout the chapters we present empirical results.", "Semantics-preserving program transformations, such as refactorings and optimisations, can have a significant impact on the effectiveness of symbolic execution testing and analysis. Furthermore, semantics-preserving transformations that increase the performance of native execution can in fact decrease the scalability of symbolic execution. Similarly, semantics-altering transformations, such as type changes and object size modifications, can often lead to substantial improvements in the testing effectiveness achieved by symbolic execution in the original program. As a result, we argue that one should treat program transformations as first-class ingredients of scalable symbolic execution, alongside widely-accepted aspects such as search heuristics and constraint solving optimisations. First, we propose to understand the impact of existing program transformations on symbolic execution, to increase scalability and improve experimental design and reproducibility. Second, we argue for the design of testability transformations specifically targeted toward more scalable symbolic execution." ] }
cs0204018
2952290399
We study one dimension in program evolution, namely the evolution of the datatype declarations in a program. To this end, a suite of basic transformation operators is designed. We cover structure-preserving refactorings, but also structure-extending and -reducing adaptations. Both the object programs that are subject to datatype transformations, and the meta programs that encode datatype transformations are functional programs.
There is a large body of research addressing the related problem of database schema evolution @cite_11, which is relevant, for example, in database re- and reverse engineering @cite_9. The schema transformations themselves can be compared with our datatype transformations only at a superficial level because of the different formalisms involved. There exist formal frameworks for the definition of schema transformations, and various formalisms have been investigated @cite_6. An interesting aspect of database schema evolution is that it necessitates a database instance mapping @cite_18. Compare this with the evolution of the datatypes in a functional program: here, the main concern is to update the function declarations for compliance with the new datatypes. It seems that the instance mapping problem is a special case of the program update problem.
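As a rough analogy for the instance mapping problem, the following hypothetical Haskell sketch migrates values of an old "schema" (datatype) to a new one; the names PersonV1/PersonV2 and the field layout are invented for illustration only and do not come from any cited framework.

-- Old version: one phone number per person.
data PersonV1 = PersonV1 { name1 :: String, phone1 :: String }

-- New version: the phone field becomes a list (a structure-extending change).
data PersonV2 = PersonV2 { name2 :: String, phones2 :: [String] } deriving Show

-- The instance mapping: a total function from old values to new ones.
migrate :: PersonV1 -> PersonV2
migrate (PersonV1 n p) = PersonV2 n [p]

main :: IO ()
main = print (migrate (PersonV1 "Ada" "555-0100"))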
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_6", "@cite_11" ], "mid": [ "2040811166", "2140708907", "2013096722", "1556253886" ], "abstract": [ "Supporting database schema evolution represents a long-standing challenge of practical and theoretical importance for modern information systems. In this paper, we describe techniques and systems for automating the critical tasks of migrating the database and rewriting the legacy applications. In addition to labor saving, the benefits delivered by these advances are many and include reliable prediction of outcome, minimization of downtime, system-produced documentation, and support for archiving, historical queries, and provenance. The PRISM PRISM++ system delivers these benefits, by solving the difficult problem of automating the migration of databases and the rewriting of queries and updates. In this paper, we present the PRISM PRISM++ system and the novel technology that made it possible. In particular, we focus on the difficult and previously unsolved problem of supporting legacy queries and updates under schema and integrity constraints evolution. The PRISM PRISM++ approach consists in providing the users with a set of SQL-based Schema Modification Operators (SMOs), which describe how the tables in the old schema are modified into those in the new schema. In order to support updates, SMOs are extended with integrity constraints modification operators. By using recent results on schema mapping, the paper (i) characterizes the impact on integrity constraints of structural schema changes, (ii) devises representations that enable the rewriting of updates, and (iii) develop a unified approach for query and update rewriting under constraints. We complement the system with two novel tools: the first automatically collects and provides statistics on schema evolution histories, whereas the second derives equivalent sequences of SMOs from the migration scripts that were used for schema upgrades. These tools were used to produce an extensive testbed containing 15 evolution histories of scientific databases and web information systems, providing over 100 years of aggregate evolution histories and almost 2,000 schema evolution steps.", "We introduce a protocol for schema evolution in a globally distributed database management system with shared data, stateless servers, and no global membership. Our protocol is asynchronous--it allows different servers in the database system to transition to a new schema at different times--and online--all servers can access and update all data during a schema change. We provide a formal model for determining the correctness of schema changes under these conditions, and we demonstrate that many common schema changes can cause anomalies and database corruption. We avoid these problems by replacing corruption-causing schema changes with a sequence of schema changes that is guaranteed to avoid corrupting the database so long as all servers are no more than one schema version behind at any time. Finally, we discuss a practical implementation of our protocol in F1, the database management system that stores data for Google AdWords.", "Data-Exchange is the problem of creating new databases according to a high-level specification called a schema-mapping while preserving the information encoded in a source database. This paper introduces a notion of generalized schema-mapping that enriches the standard schema-mappings (as defined by ) with more expressive power. 
It then proposes a more general and arguably more intuitive notion of semantics that rely on three criteria: Soundness, Completeness and Laconicity (non-redundancy and minimal size). These semantics are shown to coincide precisely with the notion of cores of universal solutions in the framework of Fagin, Kolaitis and Popa. It is also well-defined and of interest for larger classes of schema-mappings and more expressive source databases (with null-values and equality constraints). After an investigation of the key properties of generalized schema-mappings and their semantics, a criterion called Termination of the Oblivious Chase (TOC) is identified that ensures polynomial data-complexity. This criterion strictly generalizes the previously known criterion of Weak-Acyclicity. To prove the tractability of TOC schema-mappings, a new polynomial time algorithm is provided that, unlike the algorithm of Gottlob and Nash from which it is inspired, does not rely on the syntactic property of Weak-Acyclicity. As the problem of deciding whether a Schema-mapping satisfies the TOC criterion is only recursively enumerable, a more restrictive criterion called Super-weak Acylicity (SwA) is identified that can be decided in Polynomial-time while generalizing substantially the notion of Weak-Acyclicity.", "To achieve interoperability, modern information systems and e-commerce applications use mappings to translate data from one representation to another. In dynamic environments like the Web, data sources may change not only their data but also their schemas, their semantics, and their query capabilities. Such changes must be reflected in the mappings. Mappings left inconsistent by a schema change have to be detected and updated. As large, complicated schemas become more prevalent, and as data is reused in more applications, manually maintaining mappings (even simple mappings like view definitions) is becoming impractical. We present a novel framework and a tool (ToMAS) for automatically adapting mappings as schemas evolve. Our approach considers not only local changes to a schema, but also changes that may affect and transform many components of a schema. We consider a comprehensive class of mappings for relational and XML schemas with choice types and (nested) constraints. Our algorithm detects mappings affected by a structural or constraint change and generates all the rewritings that are consistent with the semantics of the mapped schemas. Our approach explicitly models mapping choices made by a user and maintains these choices, whenever possible, as the schemas and mappings evolve. We describe an implementation of a mapping management and adaptation tool based on these ideas and compare it with a mapping generation tool." ] }
cs0204018
2952290399
We study one dimension in program evolution, namely the evolution of the datatype declarations in a program. To this end, a suite of basic transformation operators is designed. We cover structure-preserving refactorings, but also structure-extending and -reducing adaptations. Both the object programs that are subject to datatype transformations, and the meta programs that encode datatype transformations are functional programs.
The transformational approach to program evolution is nowadays called refactoring @cite_19 @cite_4, but the idea is not new @cite_15 @cite_12. Refactoring means improving the structure of code so that it becomes more comprehensible, maintainable, and adaptable. Interactive refactoring tools are being studied and used extensively in the object-oriented programming context @cite_16 @cite_21. Typical examples of program refactorings are described in @cite_5, e.g., the introduction of a monad in a non-monadic program. What the refactoring notion precisely means for functional programming is being addressed in a project at the University of Kent by Thompson and Reinke; see @cite_7. There is also related work on type-safe meta-programming in a functional context, e.g., by Erwig @cite_13. Previous work did not specifically address datatype transformations. The refactorings for object-oriented class structures are not directly applicable because of the different structure and semantics of classes vs. algebraic datatypes.
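As an illustration of one such refactoring, the following minimal Haskell sketch introduces a monad into a non-monadic program; the expression type and evaluator are hypothetical and merely indicate the shape of the transformation, not the treatment in @cite_5.

data Expr = Lit Int | Div Expr Expr

-- Before: a non-monadic evaluator, partial on division by zero.
eval0 :: Expr -> Int
eval0 (Lit n)   = n
eval0 (Div a b) = eval0 a `div` eval0 b

-- After: the same recursion scheme, sequenced through the Maybe monad,
-- so that failure is threaded explicitly instead of crashing.
eval1 :: Expr -> Maybe Int
eval1 (Lit n)   = Just n
eval1 (Div a b) = do x <- eval1 a
                     y <- eval1 b
                     if y == 0 then Nothing else Just (x `div` y)

main :: IO ()
main = print (eval1 (Div (Lit 6) (Lit 0)))   -- Nothing rather than a runtime error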
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_21", "@cite_19", "@cite_5", "@cite_15", "@cite_16", "@cite_13", "@cite_12" ], "mid": [ "1487664366", "1989310296", "2767331170", "2003145055", "2010907903", "1996710532", "2006990447", "2020692742", "1928673936" ], "abstract": [ "This thesis defines a set of program restructuring operations (refactorings) that support the design, evolution and reuse of object-oriented application frameworks. The focus of the thesis is on automating the refactorings in a way that preserves the behavior of a program. The refactorings are defined to be behavior preserving, provided that their preconditions are met. Most of the refactorings are simple to implement and it is almost trivial to show that they are behavior preserving. However, for a few refactorings, one or more of their preconditions are in general undecidable. Fortunately, for some cases it can be determined whether these refactorings can be applied safely. Three of the most complex refactorings are defined in detail: generalizing the inheritance hierarchy, specializing the inheritance hierarchy and using aggregations to model the relationships among classes. These operations are decomposed into more primitive parts, and the power of these operations is discussed from the perspectives of automatability and usefulness in supporting design. Two design constraints needed in refactoring are class invariants and exclusive components. These constraints are needed to ensure that behavior is preserved across some refactorings. This thesis gives some conservative algorithms for determining whether a program satisfies these constraints, and describes how to use this design information to refactor a program.", "Semantics-preserving program transformations, such as refactorings and optimisations, can have a significant impact on the effectiveness of symbolic execution testing and analysis. Furthermore, semantics-preserving transformations that increase the performance of native execution can in fact decrease the scalability of symbolic execution. Similarly, semantics-altering transformations, such as type changes and object size modifications, can often lead to substantial improvements in the testing effectiveness achieved by symbolic execution in the original program. As a result, we argue that one should treat program transformations as first-class ingredients of scalable symbolic execution, alongside widely-accepted aspects such as search heuristics and constraint solving optimisations. First, we propose to understand the impact of existing program transformations on symbolic execution, to increase scalability and improve experimental design and reproducibility. Second, we argue for the design of testability transformations specifically targeted toward more scalable symbolic execution.", "Refactoring is a common software development practice and many simple refactorings can be performed automatically by tools. Identifier renaming is a widely performed refactoring activity. With tool support, rename refactorings can rely on the program structure to ensure correctness of the code transformation. Unfortunately, the textual references to the renamed identifier present in the unstructured comment text cannot be formally detected through the syntax of the language, and are thus fragile with respect to identifier renaming. We designed a new rule-based approach to detect fragile comments. 
Our approach, called Fraco, takes into account the type of identifier, its morphology, the scope of the identifier and the location of comments. We evaluated the approach by comparing its precision and recall against hand-annotated benchmarks created for six target Java systems, and compared the results against the performance of Eclipse's automated in-comment identifier replacement feature. Fraco performed with near-optimal precision and recall on most components of our evaluation data set, and generally outperformed the baseline Eclipse feature. As part of our evaluation, we also noted that more than half of the total number of identifiers in our data set had fragile comments after renaming, which further motivates the need for research on automatic comment refactoring.", "This paper presents an introduction to a calculus of binary multirelations, which can model both angelic and demonic kinds of non-determinism. The isomorphism between up-closed multirelations and monotonic predicate transformers allows a different view of program transformation, and program transformation calculations using multirelations are easier to perform in some circumstances. Multirelations are illustrated by modelling both kinds of nondeterministic behaviour in games and resource-sharing protocols.", "Fusion laws permit to eliminate various of the intermediate data structures that are created in function compositions. The fusion laws associated with the traditional recursive operators on datatypes cannot, in general, be used to transform recursive programs with effects. Motivated by this fact, this paper addresses the definition of two recursive operators on datatypes that capture functional programs with effects. Effects are assumed to be modeled by monads. The main goal is thus the derivation of fusion laws for the new operators. One of the new operators is called monadic unfold. It captures programs (with effects) that generate a data structure in a standard way. The other operator is called monadic hylomorphism, and corresponds to programs formed by the composition of a monadic unfold followed by a function defined by structural induction on the data structure that the monadic unfold generates.", "Most, object-oriented programs have imperfectly designed inheritance hierarchies and imperfectly factored methods, and these imperfections tend to increase with maintenance. Hence, even object-oriented programs are more expensive to maintain, harder to understand and larger than necessary. Automatic restructuring of inheritance hierarchies and refactoring of methods can improve the design of inheritance hierarchies, and the factoring of methods. This results in programs being smaller, having better code re-use and being more consistent. This paper describes Guru, a prototype tool for automatic inheritance hierarchy restructuring and method refactoring of Self programs. Results from realistic applications of the tool are presented.", "In recent years much interest has been shown in a class of functional languages including HASKELL, lazy ML, SASL KRC MIRANDA, ALFL, ORWELL, and PONDER. It has been seen that their expressive power is great, programs are compact, and program manipulation and transformation is much easier than with imperative languages or more traditional applicative ones. Common characteristics: they are purely applicative, manipulate trees as data objects, use pattern matching both to determine control flow and to decompose compound data structures, and use a ''lazy'' evaluation strategy. 
In this paper we describe a technique for data flow analysis of programs in this class by safely approximating the behavior of a certain class of term rewriting systems. In particular we obtain ''safe'' descriptions of program inputs, outputs and intermediate results by regular sets of trees. Potential applications include optimization, strictness analysis and partial evaluation. The technique improves earlier work because of its applicability to programs with higher-order functions, and with either eager or lazy evaluation. The technique addresses the call-by-name aspect of laziness, but not memoization.", "We show how the complexity of higher-order functional programs can be analysed automatically by applying program transformations to a defunctionalised versions of them, and feeding the result to existing tools for the complexity analysis of first-order term rewrite systems. This is done while carefully analysing complexity preservation and reflection of the employed transformations such that the complexity of the obtained term rewrite system reflects on the complexity of the initial program. Further, we describe suitable strategies for the application of the studied transformations and provide ample experimental data for assessing the viability of our method.", "First-order languages based on rewrite rules share many features with functional languages, but one difference is that matching and rewriting can be made much more expressive and powerful by incorporating some built-in equational theories. To provide reasonable programming environments, compilation techniques for such languages based on rewriting have to be designed. This is the topic addressed in this paper. The proposed techniques are independent from the rewriting language, and may be useful to build a compiler for any system using rewriting modulo Associative and Commutative (AC) theories. An algorithm for many-to-one AC matching is presented, that works efficiently for a restricted class of patterns. Other patterns are transformed to fit into this class. A refined data structure, namely compact bipartite graph, allows encoding of all matching problems relative to a set of rewrite rules. A few optimisations concerning the construction of the substitution and of the reduced term are described. We also address the problem of non-determinism related to AC rewriting, and show how to handle it through the concept of strategies. We explain how an analysis of the determinism can be performed at compile time, and we illustrate the benefits of this analysis for the performance of the compiled evaluation process. Then we briefly introduce the ELAN system and its compiler, in order to give some experimental results and comparisons with other languages or rewrite engines." ] }
cs0206040
2949146353
Application development for distributed computing "Grids" can benefit from tools that variously hide or enable application-level management of critical aspects of the heterogeneous environment. As part of an investigation of these issues, we have developed MPICH-G2, a Grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers, at the same or different sites, using the same commands that would be used on a parallel computer. This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus Toolkit for authentication, authorization, resource allocation, executable staging, and I O, as well as for process creation, monitoring, and control. Various performance-critical operations, including startup and collective operations, are configured to exploit network topology information. The library also exploits MPI constructs for performance management; for example, the MPI communicator construct is used for application-level discovery of, and adaptation to, both network topology and network quality-of-service mechanisms. We describe the MPICH-G2 design and implementation, present performance results, and review application experiences, including record-setting distributed simulations.
A variety of approaches to programming Grid applications have been proposed, including object systems @cite_35 @cite_40, Web technologies @cite_11 @cite_26, problem-solving environments @cite_28 @cite_34, CORBA, workflow systems, high-throughput computing systems @cite_27 @cite_8, and compiler-based systems @cite_3. We assume that, while different technologies will prove attractive for different purposes, a programming model such as MPI, which allows direct control over low-level communications, will always be attractive for certain applications.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_8", "@cite_28", "@cite_3", "@cite_40", "@cite_27", "@cite_34", "@cite_11" ], "mid": [ "2077783617", "2074833026", "2132896973", "2127013169", "2545968212", "1503313641", "2130604611", "2004261242", "2740001873" ], "abstract": [ "Application development for distributed-computing \"Grids\" can benefit from tools that variously hide or enable application-level management of critical aspects of the heterogeneous environment. As part of an investigation of these issues, we have developed MPICH-G2, a Grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers, at the same or different sites, using the same commands that would be used on a parallel computer. This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus Toolkit for authentication, authorization, resource allocation, executable staging, and I O, as well as for process creation, monitoring, and control. Various performance-critical operations, including startup and collective operations, are configured to exploit network topology information. The library also exploits MPI constructs for performance management; for example, the MPI communicator construct is used for application-level discovery of, and adaptation to, both network topology and network quality-of-service mechanisms. We describe the MPICH-G2 design and implementation, present performance results, and review application experiences, including record-setting distributed simulations.", "This paper proposes a compiler-based programming framework that automatically translates user-written structured grid code into scalable parallel implementation code for GPU-equipped clusters. To enable such automatic translations, we design a small set of declarative constructs that allow the user to express stencil computations in a portable and implicitly parallel manner. Our framework translates the user-written code into actual implementation code in CUDA for GPU acceleration and MPI for node-level parallelization with automatic optimizations such as computation and communication overlapping. We demonstrate the feasibility of such automatic translations by implementing several structured grid applications in our framework. Experimental results on the TSUBAME2.0 GPU-based supercomputer show that the performance is comparable as hand-written code and good strong and weak scalability up to 256 GPUs.", "As high-end computer systems present users with rapidly increasing numbers of processors, possibly also incorporating attached co-processors, programmers are increasingly challenged to express the necessary levels of concurrency with the dominant parallel programming model, Fortran+MPI+OpenMP (or minor variations). In this paper, we examine the languages developed under the DARPA High-Productivity Computing Systems (HPCS) program (Chapel, Fortress, and XIO) as representatives of a different parallel programming model which might be more effective on emerging high-performance systems. The application used in this study is the Hartree-Fock method from quantum chemistry, which combines access to distributed data with a task-parallel algorithm and is characterized by significant irregularity in the computational tasks. We present several different implementation strategies for load balancing of the task-parallel computation, as well as distributed array operations, in each of the three languages. 
We conclude that the HPCS languages provide a wide variety of mechanisms for expressing parallelism, which can be combined at multiple levels, making them quite expressive for this problem.", "In this work we directly evaluate five candidate programming models for future exascale applications (MPI, MPI+OpenMP, MPI+OpenACC, MPI+CUDA and CAF) using a recently developed Lagrangian-Eulerian explicit hydrodynamics mini-application. The aim of this work is to better inform the exacsale planning at large HPC centres such as AWE. Such organisations invest significant resources maintaining and updating existing scientific codebases, many of which were not designed to run at the scale required to reach exascale levels of computation on future system architectures. We present our results and experiences of scaling these different approaches to high node counts on existing large-scale Cray systems (Titan and HECToR). We also examine the effect that improving the mapping between process layout and the underlying machine interconnect topology can have on performance and scalability, as well as highlighting several communication-focused optimisations.", "Most high-performance, scientific libraries have adopted hybrid parallelization schemes - such as the popular MPI+OpenMP hybridization - to benefit from the capacities of modern distributed-memory machines. While these approaches have shown to achieve high performance, they require a lot of effort to design and maintain sophisticated synchronization communication strategies. On the other hand, task-based programming paradigms aim at delegating this burden to a runtime system for maximizing productivity. In this article, we assess the potential of task-based fast multipole methods (FMM) on clusters of multicore processors. We propose both a hybrid MPI+task FMM parallelization and a pure task-based parallelization where the MPI communications are implicitly handled by the runtime system. The latter approach yields a very compact code following a sequential task-based programming model. We show that task-based approaches can compete with a hybrid MPI+OpenMP highly optimized code and that furthermore the compact task-based scheme fully matches the performance of the sophisticated, hybrid MPI+task version, ensuring performance while maximizing productivity. We illustrate our discussion with the ScalFMM FMM library and the StarPU runtime system.", "An increasingly large number of scientific applications run on large clusters based on GPU systems. In most cases the large scale parallelism of the applications uses MPI, widely recognized as the de-facto standard for building parallel applications, while several programming languages are used to express the parallelism available in the application and map it onto the parallel resources available on GPUs. Regular grids and stencil codes are used in a subset of these applications, often corresponding to computational “Grand Challenges”. One such class of applications are Lattice Boltzmann Methods (LB) used in computational fluid dynamics. The regular structure of LB algorithms makes them suitable for processor architectures with a large degree of parallelism like GPUs. Scalability of these applications on large clusters requires a careful design of processor-to-processor data communications, exploiting all possibilities to overlap communication and computation. 
This paper looks at these issues, considering as a use case a state-of-the-art two-dimensional LB model, that accurately reproduces the thermo-hydrodynamics of a 2D-fluid obeying the equation-of-state of a perfect gas. We study in details the interplay between data organization and data layout, data-communication options and overlapping of communication and computation. We derive partial models of some performance features and compare with experimental results for production-grade codes that we run on a large cluster of GPUs.", "Parallel machines are becoming more complex with increasing core counts and more heterogeneous architectures. However, the commonly used parallel programming models, C C++ with MPI and or OpenMP, make it difficult to write source code that is easily tuned for many targets. Newer language approaches attempt to ease this burden by providing optimization features such as automatic load balancing, overlap of computation and communication, message-driven execution, and implicit data layout optimizations. In this paper, we compare several implementations of LULESH, a proxy application for shock hydrodynamics, to determine strengths and weaknesses of different programming models for parallel computation. We focus on four traditional (OpenMP, MPI, MPI+OpenMP, CUDA) and four emerging (Chapel, Charm++, Liszt, Loci) programming models. In evaluating these models, we focus on programmer productivity, performance and ease of applying optimizations.", "Large scale supercomputing applications typically run on clusters using vendor message passing libraries, limiting the application to the availability of memory and CPU resources on that single machine. The ability to run inter-cluster parallel code is attractive since it allows the consolidation of multiple large scale resources for computational simulations not possible on a single machine, and it also allows the conglomeration of small subsets of CPU cores for rapid turnaround, for example, in the case of high-availability computing. MPIg is a grid-enabled implementation of the Message Passing Interface (MPI), extending the MPICH implementation of MPI to use Globus Toolkit services such as resource allocation and authentication. To achieve co-availability of resources, HARC, the Highly-Available Resource Co-allocator, is used. Here we examine two applications using MPIg: LAMMPS (Large-scale Atomic Molecular Massively Parallel Simulator), is used with a replica exchange molecular dynamics approach to enhance binding affinity calculations in HIV drug research, and HemeLB, which is a lattice-Boltzmann solver designed to address fluid flow in geometries such as the human cerebral vascular system. The cross-site scalability of both these applications is tested and compared to single-machine performance. In HemeLB, communication costs are hidden by effectively overlapping non-blocking communication with computation, essentially scaling linearly across multiple sites, and LAMMPS scales almost as well when run between two significantly geographically separated sites as it does at a single site.", "Traditionally, MPI runtimes have been designed for clusters with a large number of nodes. However, with the advent of MPI+CUDA applications and dense multi-GPU systems, it has become important to design efficient communication schemes. 
This coupled with new application workloads brought forward by Deep Learning frameworks like Caffe and Microsoft CNTK pose additional design constraints due to very large message communication of GPU buffers during the training phase. In this context, special-purpose libraries like NCCL have been proposed. In this paper, we propose a pipelined chain (ring) design for the MPI_Bcast collective operation along with an enhanced collective tuning framework in MVAPICH2-GDR that enables efficient intra- internode multi-GPU communication. We present an in-depth performance landscape for the proposed MPI_Bcast schemes along with a comparative analysis of NCCL Broadcast and NCCL-based MPI_Bcast. The proposed designs for MVAPICH2-GDR enable up to 14X and 16.6X improvement, compared to NCCL-based solutions, for intra- and internode broadcast latency, respectively. In addition, the proposed designs provide up to 7 improvement over NCCL-based solutions for data parallel training of the VGG network on 128 GPUs using Microsoft CNTK. The proposed solutions outperform the recently introduced NCCL2 library for small and medium message sizes and offer comparable better performance for very large message sizes." ] }
cs0208023
2132313847
The advent of multicast and the growth and complexity of the Internet has complicated network protocol design and evaluation. Evaluation of Internet protocols usually uses random scenarios or scenarios based on designers' intuition. Such an approach may be useful for average case analysis but does not cover boundary-point (worst or best case) scenarios. To synthesize boundary-point scenarios, a more systematic approach is needed. In this paper, we present a method for automatic synthesis of worst and best case scenarios for protocol boundary-point evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. Our algorithms utilize implicit backward search using branch and bound techniques and start from given target events. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multicast protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst case scenarios automatically. Results obtained using stress scenarios differ dramatically from those obtained through average case analyses. We hope for our method to serve as a model for applying systematic evaluation to other multicast protocols.
There is a large body of literature dealing with the verification of protocols. Verification systems typically address well-defined properties --such as safety, liveness, and responsiveness @cite_41-- and aim to detect violations of these properties. In general, the two main approaches to protocol verification are theorem proving and reachability analysis @cite_2. Theorem proving systems define a set of axioms and relations to prove properties, and include model-based and logic-based formalisms @cite_39 @cite_22. These systems are useful in many applications; however, they tend to abstract away some of the network dynamics that we study (e.g., selective packet loss). Moreover, they do not synthesize network topologies and do not address performance issues per se.
{ "cite_N": [ "@cite_41", "@cite_22", "@cite_39", "@cite_2" ], "mid": [ "2033263591", "1508967933", "2291637985", "1926128235" ], "abstract": [ "In this article we present a comprehensive survey of various approaches for the verification of cache coherence protocols based on state enumeration, (symbolic model checking , and symbolic state models . Since these techniques search the state space of the protocol exhaustively, the amount of memory required to manipulate that state information and the verification time grow very fast with the number of processors and the complexity of the protocol mechanisms. To be successful for systems of arbitrary complexity, a verification technique must solve this so-called state space explosion problem. The emphasis of our discussion is onthe underlying theory in each method of handling the state space exposion problem, and formulationg and checking the safety properties (e.g., data consistency) and the liveness properties (absence of deadlock and livelock). We compare the efficiency and discuss the limitations of each technique in terms of memory and computation time. Also, we discuss issues of generality, applicability, automaticity, and amenity for existing tools in each class of methods. No method is truly superior because each method has its own strengths and weaknesses. Finally, refinements that can further reduce the verification time and or the memory requirement are also discussed.", "Since the 1980s, two approaches have been developed for analyzing security protocols. One of the approaches relies on a computational model that considers issues of complexity and probability. This approach captures a strong notion of security, guaranteed against all probabilistic polynomial-time attacks. The other approach relies on a symbolic model of protocol executions in which cryptographic primitives are treated as black boxes. Since the seminal work of Dolev and Yao, it has been realized that this latter approach enables significantly simpler and often automated proofs. However, the guarantees that it offers have been quite unclear. In this paper, we show that it is possible to obtain the best of both worlds: fully automated proofs and strong, clear security guarantees. Specifically, for the case of protocols that use signatures and asymmetric encryption, we establish that symbolic integrity and secrecy proofs are sound with respect to the computational model. The main new challenges concern secrecy properties for which we obtain the first soundness result for the case of active adversaries. Our proofs are carried out using Casrul, a fully automated tool.", "Formal verification is used to establish the compliance of software and hardware systems with important classes of requirements. System compliance with functional requirements is frequently analyzed using techniques such as model checking, and theorem proving. In addition, a technique called quantitative verification supports the analysis of the reliability, performance, and other quality-of-service (QoS) properties of systems that exhibit stochastic behavior. In this paper, we extend the applicability of quantitative verification to the common scenario when the probabilities of transition between some or all states of the Markov models analyzed by the technique are unknown, but observations of these transitions are available. 
To this end, we introduce a theoretical framework, and a tool chain that establish confidence intervals for the QoS properties of a software system modelled as a Markov chain with uncertain transition probabilities. We use two case studies from different application domains to assess the effectiveness of the new quantitative verification technique. Our experiments show that disregarding the above source of uncertainty may significantly affect the accuracy of the verification results, leading to wrong decisions, and low-quality software systems.", "When formalizing security protocols, different specification languages support very different reasoning methodologies, whose results are not directly or easily comparable. Therefore, establishing clear mappings among different frameworks is highly desirable, as it permits various methodologies to cooperate by interpreting theoretical and practical results of one system in another. In this paper, we examine the non-trivial relationship between two general verification frameworks: multiset rewriting (MSR) and a process algebra (PA) inspired to CCS and the π-calculus. Although defining a simple and general bijection between MSR and PA appears difficult, we show that the sublanguages needed to specify a large class of cryptographic protocols (immediate decryption protocols) admits an effective translation that is not only bijective and trace-preserving, but also induces a weak form of bisimulation across the two languages. In particular, the correspondence sketched in this abstract permits transferring several important trace-based properties such as secrecy and many forms of authentication." ] }
cs0208023
2132313847
The advent of multicast and the growth and complexity of the Internet has complicated network protocol design and evaluation. Evaluation of Internet protocols usually uses random scenarios or scenarios based on designers' intuition. Such an approach may be useful for average case analysis but does not cover boundary-point (worst or best case) scenarios. To synthesize boundary-point scenarios, a more systematic approach is needed. In this paper, we present a method for automatic synthesis of worst and best case scenarios for protocol boundary-point evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. Our algorithms utilize implicit backward search using branch and bound techniques and start from given target events. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multicast protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst case scenarios automatically. Results obtained using stress scenarios differ dramatically from those obtained through average case analyses. We hope for our method to serve as a model for applying systematic evaluation to other multicast protocols.
Reachability analysis algorithms @cite_21, on the other hand, try to inspect reachable protocol states, and suffer from the 'state space explosion' problem. To circumvent this problem, state reduction techniques can be used @cite_25. These algorithms, however, do not synthesize network topologies. Reduced reachability analysis has been used in the verification of cache coherence protocols @cite_0, using a global FSM model. We adopt a similar FSM model and extend it for our approach in this study. However, our approach differs in that we address end-to-end protocols that encompass rich timing, delay, and loss semantics, and in that we address performance issues (such as overhead or response delays).
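For readers unfamiliar with the technique, the following Haskell sketch shows explicit-state forward reachability over a hypothetical global FSM of two composed processes; it is the exhaustive exploration whose blow-up the state reduction techniques above try to contain. The transition relation is invented for illustration.

import qualified Data.Set as Set

type GlobalState = (Int, Int)            -- one local state per process

-- Hypothetical interleaving transition relation of the composed machine.
step :: GlobalState -> [GlobalState]
step (p, q) = [((p + 1) `mod` 3, q), (p, (q + 1) `mod` 3)]

-- Breadth-first exploration of all reachable global states.
reachable :: GlobalState -> Set.Set GlobalState
reachable s0 = go (Set.singleton s0) [s0]
  where
    go seen []       = seen
    go seen (s:rest) =
      let new = filter (`Set.notMember` seen) (step s)
      in  go (foldr Set.insert seen new) (rest ++ new)

main :: IO ()
main = print (Set.size (reachable (0, 0)))  -- 9 reachable global states here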
{ "cite_N": [ "@cite_0", "@cite_21", "@cite_25" ], "mid": [ "2023949272", "2033263591", "1503128752" ], "abstract": [ "Reachability analysis has proved to be one of the most effective methods in verifying correctness of communication protocols based on the state transition model. Consequently, many protocol verification tools have been built based on the method of reachability analysis. Nevertheless, it is also well known that state space explosion is the most severe limitation to the applicability of this method. Although researchers in the field have proposed various strategies to relieve this intricate problem when building the tools, a survey and evaluation of these strategies has not been done in the literature. In searching for an appropriate approach to tackling such a problem for a grammar-based validation tool, we have collected and evaluated these relief strategies, and have decided to develop our own from yet another but more systematic approach. The results of our research are now reported in this paper. Essentially, the paper is to serve two purposes: first, to give a survey and evaluation of existing relief strategies; second, to propose a new strategy, called PROVAT (PROtocol VAlidation Testing), which is inspired by the heuristic search techniques in Artificial Intelligence. Preliminary results of incorporating the PROVAT strategy into our validation tool are reviewed in the paper. These results show the empirical evidence of the effectiveness of the PROVAT strategy.", "In this article we present a comprehensive survey of various approaches for the verification of cache coherence protocols based on state enumeration, (symbolic model checking , and symbolic state models . Since these techniques search the state space of the protocol exhaustively, the amount of memory required to manipulate that state information and the verification time grow very fast with the number of processors and the complexity of the protocol mechanisms. To be successful for systems of arbitrary complexity, a verification technique must solve this so-called state space explosion problem. The emphasis of our discussion is onthe underlying theory in each method of handling the state space exposion problem, and formulationg and checking the safety properties (e.g., data consistency) and the liveness properties (absence of deadlock and livelock). We compare the efficiency and discuss the limitations of each technique in terms of memory and computation time. Also, we discuss issues of generality, applicability, automaticity, and amenity for existing tools in each class of methods. No method is truly superior because each method has its own strengths and weaknesses. Finally, refinements that can further reduce the verification time and or the memory requirement are also discussed.", "We consider verification of safety properties for concurrent real-timed systems modelled as timed Petri nets by performing symbolic forward reachability analysis. We introduce a formalism, called region generators, for representing sets of markings of timed Petri nets. Region generators characterize downward closed sets of regions and provide exact abstractions of sets of reachable states with respect to safety properties. We show that the standard operations needed for performing symbolic reachability analysis are computable for region generators. Since forward reachability analysis is necessarily incomplete, we introduce an acceleration technique to make the procedure terminate more often on practical examples. 
We have implemented a prototype for analyzing timed Petri nets and used it to verify a parameterized version of Fischer's protocol, Lynch and Shavit's mutual exclusion protocol and a producer-consumer protocol. We also used the tool to extract finite-state abstractions of these protocols." ] }
cs0208023
2132313847
The advent of multicast and the growth and complexity of the Internet has complicated network protocol design and evaluation. Evaluation of Internet protocols usually uses random scenarios or scenarios based on designers' intuition. Such an approach may be useful for average case analysis but does not cover boundary-point (worst or best case) scenarios. To synthesize boundary-point scenarios, a more systematic approach is needed. In this paper, we present a method for automatic synthesis of worst and best case scenarios for protocol boundary-point evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. Our algorithms utilize implicit backward search using branch and bound techniques and start from given target events. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multicast protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst case scenarios automatically. Results obtained using stress scenarios differ dramatically from those obtained through average case analyses. We hope for our method to serve as a model for applying systematic evaluation to other multicast protocols.
There are a good number of publications dealing with conformance testing @cite_24 @cite_27 @cite_26 @cite_5. However, conformance testing verifies that an implementation (treated as a black box) adheres to a given specification of the protocol by constructing input/output sequences. Conformance testing is useful during the implementation testing phase --which we do not address in this paper-- but it addresses neither performance issues nor topology synthesis for design testing. By contrast, our method synthesizes test scenarios for protocol design according to evaluation criteria.
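To fix terminology, the sketch below (hypothetical Mealy machines, written in Haskell) shows what checking an implementation against a specification via input/output sequences amounts to; real conformance test derivation, e.g. UIO- or W-method based, is of course far more involved than this comparison.

type Fsm s = s -> Char -> (s, Char)        -- Mealy machine: input -> (next state, output)

run :: Fsm s -> s -> [Char] -> [Char]
run fsm s0 = snd . foldl step (s0, [])
  where step (s, outs) i = let (s', o) = fsm s i in (s', outs ++ [o])

spec :: Fsm Int
spec 0 'a' = (1, 'x')
spec 1 'a' = (0, 'y')
spec s _   = (s, '-')

impl :: Fsm Int                            -- black box with a transfer fault
impl 0 'a' = (1, 'x')
impl 1 'a' = (1, 'y')
impl s _   = (s, '-')

conforms :: [Char] -> Bool
conforms inputs = run spec 0 inputs == run impl 0 inputs

main :: IO ()
main = print (conforms "aaa")              -- False: the third output exposes the fault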
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_27", "@cite_26" ], "mid": [ "2107360681", "2029755436", "2339842460", "2121954581" ], "abstract": [ "This chapter presents principles and techniques for modelbased black-box conformance testing of real-time systems using the Uppaal model-checking tool-suite. The basis for testing is given as a network of concurrent timed automata specified by the test engineer. Relativized input output conformance serves as the notion of implementation correctness, essentially timed trace inclusion taking environment assumptions into account. Test cases can be generated offline and later executed, or they can be generated and executed online. For both approaches this chapter discusses how to specify test objectives, derive test sequences, apply these to the system under test, and assign a verdict.", "A novel procedure presented here generates test sequences for checking the conformity of protocol implementations to their specifications. The test sequences generated by this procedure only detect the presence of many faults, but they do not locate the faults. It can always detect the problem in an implementation with a single fault. A protocol entity is specified as a finite state machine (FSM). It typically has two interfaces: an interface with the user and with the lower-layer protocol. The inputs from both interfaces are merged into a single set I and the outputs from both interfaces are merged into a single set O. The implementation is assumed to be a black box. The key idea in this procedure is to tour all states and state transitions and to check a unique signature for each state, called the Unique Input Output (UIO) sequence. A UIO sequence for a state is an I O behavior that is not exhibited by any other state.", "Industrial-sized hybrid systems are typically not amenable to formal verification techniques. For this reason, a common approach is to formally verify abstractions of (parts of) the original system. However, we need to show that this abstraction conforms to the actual system implementation including its physical dynamics. In particular, verified properties of the abstract system need to transfer to the implementation. To this end, we introduce a formal conformance relation, called reachset conformance, which guarantees transference of safety properties, while being a weaker relation than the existing trace inclusion conformance. Based on this formal relation, we present a conformance testing method which allows us to tune the trade-off between accuracy and computational load. Additionally, we present a test selection algorithm that uses a coverage measure to reduce the number of test cases for conformance testing. We experimentally show the benefits of our novel techniques based on an example from autonomous driving.", "The authors present a detailed study of four formal methods (T-, U-, D-, and W-methods) for generating test sequences for protocols. Applications of these methods to the NBS Class 4 Transport Protocol are discussed. An estimation of fault coverage of four protocol-test-sequence generation techniques using Monte Carlo simulation is also presented. The ability of a test sequence to decide whether a protocol implementation conforms to its specification heavily relies on the range of faults that it can capture. Conformance is defined at two levels, namely, weak and strong conformance. 
This study shows that a test sequence produced by T-method has a poor fault detection capability, whereas test sequences produced by U-, D-, and W-methods have comparable (superior to that for T-method) fault coverage on several classes of randomly generated machines used in this study. Also, some problems with a straightforward application of the four protocol-test-sequence generation methods to real-world communication protocols are pointed out. >" ] }
cs0208023
2132313847
The advent of multicast and the growth and complexity of the Internet has complicated network protocol design and evaluation. Evaluation of Internet protocols usually uses random scenarios or scenarios based on designers' intuition. Such an approach may be useful for average case analysis but does not cover boundary-point (worst or best case) scenarios. To synthesize boundary-point scenarios, a more systematic approach is needed. In this paper, we present a method for automatic synthesis of worst and best case scenarios for protocol boundary-point evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. Our algorithms utilize implicit backward search using branch and bound techniques and start from given target events. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multicast protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst case scenarios automatically. Results obtained using stress scenarios differ dramatically from those obtained through average case analyses. We hope for our method to serve as a model for applying systematic evaluation to other multicast protocols.
Automatic test generation techniques have been used in several fields. VLSI chip testing @cite_16 uses test vector generation to detect target faults. Test vectors may be generated from circuit and fault models using the fault-oriented technique, which utilizes implication techniques. These techniques were adopted in @cite_29 to develop fault-oriented test generation (FOTG) for multicast routing. In @cite_29, FOTG was used to study the correctness of a multicast routing protocol on a LAN. We extend FOTG to study the performance of end-to-end multicast mechanisms. We introduce the concept of a virtual LAN to represent the underlying network, integrate timing and delay semantics into our model, and use performance criteria to drive our synthesis algorithm.
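The backward-search flavour of FOTG can be pictured with the following hypothetical Haskell sketch: starting from a target (fault) state, the inverted transition relation is applied repeatedly until a set of possible starting points is found. The state encoding and transitions are invented and stand in for the timer suppression mechanism only loosely; they are not taken from the cited work.

import qualified Data.Set as Set

type State = (Bool, Bool)      -- (timer running, response sent), purely illustrative

-- Hypothetical forward transitions of the mechanism under study.
forward :: State -> [State]
forward (True,  False) = [(False, True), (False, False)]  -- timer fires or is suppressed
forward (False, False) = [(True,  False)]                 -- timer is (re)started
forward _              = []

-- Predecessors, obtained by inverting the forward relation over all states.
predecessors :: State -> [State]
predecessors t = [s | s <- allStates, t `elem` forward s]
  where allStates = [(a, b) | a <- [False, True], b <- [False, True]]

-- All states from which the target (fault) state is reachable; any initial
-- state in this set yields a candidate test scenario.
backwardReachable :: State -> Set.Set State
backwardReachable target = go (Set.singleton target) [target]
  where
    go seen []       = seen
    go seen (s:rest) =
      let new = filter (`Set.notMember` seen) (predecessors s)
      in  go (foldr Set.insert seen new) (rest ++ new)

main :: IO ()
main = print (backwardReachable (False, False))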
{ "cite_N": [ "@cite_29", "@cite_16" ], "mid": [ "1962021926", "2495715420" ], "abstract": [ "We present a new algorithm for automatic test generation for multicast routing. Our algorithm processes a finite state machine (FSM) model of the protocol and uses a mix of forward and backward search techniques to generate the tests. The output tests include a set of topologies, protocol events and network failures, that lead to violation of protocol correctness and behavioral requirements. We target protocol robustness in specific, and do not attempt to verify other properties in this paper. We apply our method to a multicast routing protocol; PIM-DM, and investigate its behavior in the presence of selective packet loss on LANs and router crashes. Our study unveils several robustness violations in PIM-DM, for which we suggest fixes with the aid of the presented algorithm.", "In engineering of safety critical systems, regulatory standards often put requirements on both traceable specification-based testing, and structural coverage on program units. Automated test generation techniques can be used to generate inputs to cover the structural aspects of a program. However, there is no conclusive evidence on how automated test generation compares to manual test design, or how testing based on the program implementation relates to specification-based testing. In this paper, we investigate specification -- and implementation-based testing of embedded software written in the IEC 61131-3 language, a programming standard used in many embedded safety critical software systems. Further, we measure the efficiency and effectiveness in terms of fault detection. For this purpose, a controlled experiment was conducted, comparing tests created by a total of twenty-three software engineering master students. The participants worked individually on manually designing and automatically generating tests for two IEC 61131-3 programs. Tests created by the participants in the experiment were collected and analyzed in terms of mutation score, decision coverage, number of tests, and testing duration. We found that, when compared to implementation-based testing, specification-based testing yields significantly more effective tests in terms of the number of faults detected. Specifically, specification-based tests more effectively detect comparison and value replacement type of faults, compared to implementation-based tests. On the other hand, implementation-based automated test generation leads to fewer tests (up to 85 improvement) created in shorter time than the ones manually created based on the specification." ] }
cs0208023
2132313847
The advent of multicast and the growth and complexity of the Internet has complicated network protocol design and evaluation. Evaluation of Internet protocols usually uses random scenarios or scenarios based on designers' intuition. Such an approach may be useful for average case analysis but does not cover boundary-point (worst or best case) scenarios. To synthesize boundary-point scenarios, a more systematic approach is needed. In this paper, we present a method for automatic synthesis of worst and best case scenarios for protocol boundary-point evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. Our algorithms utilize implicit backward search using branch and bound techniques and start from given target events. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multicast protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst case scenarios automatically. Results obtained using stress scenarios differ dramatically from those obtained through average case analyses. We hope for our method to serve as a model for applying systematic evaluation to other multicast protocols.
In @cite_36, a simulation-based stress testing framework based on heuristics was proposed. However, that method does not provide automatic topology generation, nor does it address performance issues. The VINT @cite_17 tools provide a framework for Internet protocol simulation. Based on the network simulator (NS) @cite_31 and the network animator (NAM) @cite_23, VINT provides a library of protocols and a set of validation test suites. However, it does not provide a generic tool for generating these tests automatically. The work in this paper is complementary to such studies, and may be integrated with network simulation tools, similar to our work in .
{ "cite_N": [ "@cite_36", "@cite_31", "@cite_23", "@cite_17" ], "mid": [ "2031849950", "1544819167", "2432004435", "2748436040" ], "abstract": [ "Network researchers must test Internet protocols under varied conditions to determine whether they are robust and reliable. The paper discusses the Virtual Inter Network Testbed (VINT) project which has enhanced its network simulator and related software to provide several practical innovations that broaden the conditions under which researchers can evaluate network protocols.", "We propose a method for using simulation to analyze the robustness of multiparty (multicast-based) protocols in a systematic fashion. We call our method Systematic Testing of Robustness by Examination of Selected Scenarios (STRESS). STRESS aims to cut the time and effort needed to explore pathological cases of a protocol during its design. This paper has two goals: (1) to describe the method, and (2) to serve as a case study of robustness analysis of multicast routing protocols. We aim to offer design tools similar to those used in CAD and VLSI design, and demonstrate how effective systematic simulation can be in studying protocol robustness.", "We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3 . We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.", "The simulation of product behavior is a vital part in virtual product development, but currently there is no tool or method available that can examine the quality of FE simulations and decide automatically on whether a simulation is plausible or non-plausible. In the paper a method is presented that enables automatic plausibility checks on basis of empirical simulation datasets. Nodal simulation data is transformed to numerical arrays, of fixed size, using virtual spherical detector surfaces. Afterwards the arrays are used to train a Deep Convolutional Neural Network (AlexNet). The Neural Network can then be used for plausibility checks of FE simulations (structural mechanics). In a first application a Deep Convolutional Neural Network is trained with simulation data of a demonstrator part, the rail of speed inline skates. After the GPU training of the Neural Network, further simulations are evaluated with the net. These simulations were not part of the training data and are used to calculate the prediction quality of the Neural Network. This approach is to support development engineers during design accompanying FEA in virtual product development." ] }
cs0208029
2953092023
Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, which results in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature.
The multiparadigm language Leda was developed for educational purposes @cite_117 . It is sequential, supports functional and object-oriented programming, and has basic support for backtracking and a simple form of logic programming that is a subset of Prolog.
{ "cite_N": [ "@cite_117" ], "mid": [ "1600021945" ], "abstract": [ "Offering an alternative approach to multiparadigm programming concepts, this work presents the four major language paradigms - imperative, object-oriented, functional and logical - through a new, common language called Leda. It: introduces the important emerging topic of multiparadigm programming - a concept that could be characterized as \"the best of\" programming languages; provides a coherent basis for comparing multiple paradigms in a common framework through a single language; and gives both a technical overview and summaries on important topics in programming-language development." ] }
cs0210015
2130323825
We consider the prospect of a processor that can perform interval arithmetic at the same speed as conventional floating-point arithmetic. This makes it possible for all arithmetic to be performed with the superior security of interval methods without any penalty in speed. In such a situation the IEEE floating-point standard needs to be compared with a version of floating-point arithmetic that is ideal for the purpose of interval arithmetic. Such a comparison requires a succinct and complete exposition of interval arithmetic according to its recent developments. We present such an exposition in this paper. We conclude that the directed roundings toward the infinities and the definition of division by the signed zeros are valuable features of the standard. Because the operations of interval arithmetic are always defined, exceptions do not arise. As a result, neither NaNs nor exceptions are needed. Of the status flags, only the inexact flag may be useful. Denormalized numbers seem to have no use for interval arithmetic; in the use of interval constraints, they are a handicap.
For most of the time since the beginning of interval arithmetic, two systems have coexisted. One was the official one, in which intervals were bounded and division by an interval containing zero was undefined. Recognizing the impracticality of this approach, there was also a definition of "extended" interval arithmetic @cite_5 in which these limitations were lifted. Representative of this state of affairs are the monographs by Hansen @cite_4 and Kearfott @cite_0 . However, here the specification of interval division is quite far from an efficient implementation that takes advantage of the IEEE floating-point standard. The specification is indirect, via multiplication by the interval inverse. There is no consideration of the possibility of undefined operations: presumably one is to perform a test before each operation.
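To make the contrast concrete, here is a minimal Python sketch of the "extended" style of interval inversion (division being multiplication by the inverse), in which the result is always defined and the IEEE infinities are used when the interval contains zero. The interval representation, the handling of [0, 0], and the omission of outward (directed) rounding are simplifications of my own, not the cited specifications.

```python
import math

INF = math.inf

def extended_inverse(a, b):
    """Return the inverse of the interval [a, b] (a <= b) as a list of
    intervals, in the spirit of extended interval arithmetic: the result is
    always defined, using the IEEE infinities when [a, b] contains 0.
    Outward rounding, which a real implementation needs, is omitted."""
    if a > 0 or b < 0:                 # 0 not in [a, b]: ordinary inverse
        return [(1.0 / b, 1.0 / a)]
    if a == 0 and b == 0:              # inverse of [0, 0]: taken empty here
        return []
    if a == 0:                         # [0, b]  ->  [1/b, +inf)
        return [(1.0 / b, INF)]
    if b == 0:                         # [a, 0]  ->  (-inf, 1/a]
        return [(-INF, 1.0 / a)]
    # a < 0 < b: the inverse splits into two unbounded pieces
    return [(-INF, 1.0 / a), (1.0 / b, INF)]

# 1 / [-1, 2] = (-inf, -1] union [1/2, +inf)
print(extended_inverse(-1.0, 2.0))     # [(-inf, -1.0), (0.5, inf)]
```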
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_4" ], "mid": [ "1504204411", "2036740708", "2041977310" ], "abstract": [ "This paper applies interval methods to a classical problem in computer algebra. Let a quantified constraint be a first-order formula over the real numbers. As shown by A. Tarski in the 1930's, such constraints, when restricted to the predicate symbols <, = and function symbols +, ×, are in general solvable. However, the problem becomes undecidable, when we add function symbols like sin. Furthermore, all exact algorithms known up to now are too slow for big examples, do not provide partial information before computing the total result, cannot satisfactorily deal with interval constants in the input, and often generate huge output. As a remedy we propose an approximation method based on interval arithmetic. It uses a generalization of the notion of cylindrical decomposition—as introduced by G. Collins. We describe an implementation of the method and demonstrate that, for quantified constraints without equalities, it can efficiently give approximate information on problems that are too hard for current exact methods.", "The range of Fourier methods can be significantly increased by extending a nonperiodic function f(x) to a periodic function f? on a larger interval. When f(x) is analytically known on the extended interval, the extension is straightforward. When f(x) is unknown outside the physical interval, there is no standard recipe. Worse still, like a radarless aircraft groping through fog, the algorithm may wreck on the “mountain-in-fog” problem: a function f(x) which is perfectly well behaved on the physical interval may very well have singularities in the extended domain. In this article, we compare several algorithms for successfully extending a function f(x) into the “fog” even when the analytic extension is singular. The best third-kind extension requires singular value decomposition with iterative refinement but achieves accuracy close to machine precision.", "We show how interval arithmetic can be used in connection with Borsuk's theorem to computationally prove the existence of a solution of a system of nonlinear equations. It turns out that this new test, which can be checked computationally in several different ways, is more general than an existing test based on Miranda's theorem in the sense that it is successful for a larger set of situations. A numerical example is included." ] }
cs0210024
1661438368
We introduce a new class of scheduling problems in which the optimization is performed by the worker (single "machine") who performs the tasks. A typical worker's objective is to minimize the amount of work he does (he is "lazy"), or more generally, to schedule as inefficiently (in some sense) as possible. The worker is subject to the constraint that he must be busy when there is work that he can do; we make this notion precise both in the preemptive and nonpreemptive settings. The resulting class of "perverse" scheduling problems, which we denote "Lazy Bureaucrat Problems," gives rise to a rich set of new questions that explore the distinction between maximization and minimization in computing optimal schedules.
Recently, Hepner and Stein @cite_5 published a pseudo-polynomial-time algorithm for minimizing the makespan subject to preemption constraint II, thus resolving an open problem from an earlier version of this paper @cite_3 . They also extended the LBP to the parallel setting, in which there are multiple bureaucrats.
{ "cite_N": [ "@cite_5", "@cite_3" ], "mid": [ "1565331571", "2019133545" ], "abstract": [ "We study the problem of minimizing makespan for the Lazy Bureaucrat Scheduling Problem. We give a pseudopolynomial time algorithm for a preemptive scheduling problem, resolving an open problem by We also extend the definition of Lazy Bureaucrat scheduling to the multiple-bureaucrat (parallel) setting, and provide pseudopolynomial-time algorithms for problems in that model.", "We consider one of the basic, well-studied problems of scheduling theory, that of nonpreemptively scheduling n independent tasks on m identical, parallel processors with the objective of minimizing the “makespan,” i.e., the total timespan required to process all the given tasks. Because this problem is @math -complete and apparently intractable in general, much effort has been directed toward devising fast algorithms which find near-optimal schedules. The well-known LPT (Largest Processing Time first) algorithm always finds a schedule having makespan within @math of the minimum possible makespan, and this is the best such bound satisfied by any previously published fast algorithm. We describe a comparably fast algorithm, based on techniques from “bin-packing,” which we prove satisfies a bound of 1.220. On the basis of exact upper bounds determined for each @math , we conjecture that the best possible general bound for our algorithm is actually @math ." ] }
quant-ph0210020
2949678910
Given a Boolean function f, we study two natural generalizations of the certificate complexity C(f): the randomized certificate complexity RC(f) and the quantum certificate complexity QC(f). Using Ambainis' adversary method, we exactly characterize QC(f) as the square root of RC(f). We then use this result to prove the new relation R0(f) = O(Q2(f)^2 Q0(f) log n) for total f, where R0, Q2, and Q0 are zero-error randomized, bounded-error quantum, and zero-error quantum query complexities respectively. Finally we give asymptotic gaps between the measures, including a total f for which C(f) is superquadratic in QC(f), and a symmetric partial f for which QC(f) = O(1) yet Q2(f) = Omega(n log n).
@cite_2 studied a query complexity measure they called @math , for Merlin-Arthur. In our notation, @math equals the maximum of @math over all @math with @math . They observed that @math , where @math is the number of queries needed given arbitrarily many rounds of interaction with a prover. They also used error-correcting codes to construct a total @math for which @math but @math . This has similarities to our construction, in Section , of a symmetric partial @math for which @math but @math . Aside from that and from Proposition , their results do not overlap with ours.
{ "cite_N": [ "@cite_2" ], "mid": [ "2085180799" ], "abstract": [ "It is well known that probabilistic boolean decision trees cannot be much more powerful than deterministic ones (N. Nisan, SIAM J. Comput.20, No. 6 (1991), 999?1007). Motivated by a question if randomization can significantly speed up a nondeterministic computation via a boolean decision tree, we address structural properties of Arthur?Merlin games in this model and prove some lower bounds. We consider two cases of interest, the first when the length of communication between the players is limited and the second, if it is not. While in the first case we can carry over the relations between the corresponding Turing complexity classes, in the second case we observe in contrast with Turing complexity that a one-round Merlin?Arthur protocol is as powerful as a general interactive proof system and, in particular, can simulate a one-round Arthur?Merlin protocol. Moreover, we show that sometimes a Merlin?Arthur protocol can be more efficient than an Arthur?Merlin protocol and than a Merlin?Arthur protocol with limited communication. This is the case for a boolean function whose set of zeroes is a code with high minimum distance and a natural uniformity condition. Such functions provide an example when the Merlin?Arthur complexity is 1 with one-sided error ??(23, 1), but at the same time the nondeterministic decision tree complexity is ?(n). The latter should be contrasted with another fact we prove. Namely, if a function has Merlin?Arthur complexity 1 with one-sided error probability ??(0, 23, then its nondeterministic complexity is bounded by a constant. Other results of the paper include connections with the block sensitivity and related combinatorial properties of a boolean function." ] }
cs0212026
1666765554
Since the seminal work of J. A. Robinson on resolution, many lifting lemmas for simplifying proofs of completeness of resolution have been proposed in the literature. In the logic programming framework, they may also help to detect some infinite derivations while proving goals under the SLD-resolution. In this paper, we first generalize a version of the lifting lemma, by extending the relation "is more general than" so that it takes into account only some arguments of the atoms. The other arguments, which we call neutral arguments, are disregarded. Then we propose two syntactic conditions of increasing power for identifying neutral arguments from mere inspection of the text of a logic program.
We now review loop checking in more detail. To the best of our knowledge, among all existing loop checking mechanisms only OS-check @cite_18 , EVA-check @cite_13 and VAF-check @cite_14 are suitable for logic programs with function symbols. They rely on a structural characteristic of infinite SLD-derivations, namely the growth of the size of some generated subgoals. This is what the following theorem states. Here, @math is a given function that maps an atom to its size, which is defined in terms of the number of symbols appearing in the atom. As this theorem does not provide a sufficient condition for detecting infinite SLD-derivations, the three loop checking mechanisms mentioned above may flag finite derivations as infinite. However, these mechanisms are complete: they detect all infinite loops when the leftmost selection rule is used.
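To make the size measure concrete, here is a small Python sketch of a symbol-counting size function. Representing atoms and terms as nested tuples is an illustrative assumption of mine, not the encoding used in the cited papers.

```python
# A term or atom is represented here as either a string (a variable or a
# constant) or a tuple (functor, arg1, arg2, ...).  This encoding is an
# illustrative assumption, not the cited papers' data structure.
def size(term):
    """Number of symbols appearing in a term or atom."""
    if isinstance(term, tuple):
        functor, *args = term
        return 1 + sum(size(arg) for arg in args)
    return 1                      # a variable or a constant counts as one symbol

# size of p(f(X), a): the symbols are p, f, X, a  ->  4
print(size(("p", ("f", "X"), "a")))   # 4
```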
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_13" ], "mid": [ "1981314990", "1975546131", "2276638936" ], "abstract": [ "Two complete loop checking mechanisms have been presented in the literature for logic programs with functions: OS-check and EVA-check. OS-check is computationally efficient but quite unreliable in that it often misidentifies infinite loops, whereas EVA-check is reliable for a majority of cases but quite expensive. In this paper, we develop a series of new complete loop checking mechanisms, called VAF-checks. The key technique we introduce is the notion of expanded variants, which captures a key structural characteristic of in finite loops. We show that our approach is superior to both OS-check and EVA-check in that it is as efficient as OS-check and as reliable as EVA-check. Copyright 2001 Elsevier Science B.V.", "The Equality check and the Subsumption check are weakly sound, but are not complete even for function-free logic programs. Although the OverSize (OS) check is complete for positive logic programs, it is too general in the sense that it prunes SLD-derivations merely based on the depth-bound of repeated predicate symbols and the size of atoms, regardless of the inner structure of the atoms, so it may make wrong conclusions even for some simple programs. In this paper, we explore complete loop checking mechanisms for positive logic programs. We develop an extended Variant of Atoms (VA) check that has the following features: (1) it generalizes the concept of “variant” from “the same up to variable renaming” to “the same up to variable renaming except possibly with some arguments whose size recursively increases”, (2) it makes use of the depth-bound of repeated variants of atoms instead of depth-bound of repeated predicate symbols, (3) it combines the Equality Subsumption check with the VA check, (4) it is complete w. r. t. the leftmost selection rule for positive logic programs, and (5) it is more sound than both the OS check and all the existing versions of the VA check.", "Logic programming (LP) is a programming language based on first-order Horn clause logic that uses SLD-resolution as a semi-decision procedure. Finite SLD-computations are inductively sound and complete with respect to least Herbrand models of logic programs. Dually, the corecursive approach to SLD-resolution views infinite SLD-computations as successively approximating infinite terms contained in programs' greatest complete Herbrand models. State-of-the-art algorithms implementing corecursion in LP are based on loop detection. However, such algorithms support inference of logical entailment only for rational terms, and they do not account for the important property of productivity in infinite SLD-computations. Loop detection thus lags behind coinductive methods in interactive theorem proving (ITP) and term-rewriting systems (TRS). Structural resolution is a newly proposed alternative to SLD-resolution that makes it possible to define and semi-decide a notion of productivity appropriate to LP. In this paper, we prove soundness of structural resolution relative to Herbrand model semantics for productive inductive, coinductive, and mixed inductive-coinductive logic programs. We introduce two algorithms that support coinductive proof search for infinite productive terms. One algorithm combines the method of loop detection with productive structural resolution, thus guaranteeing productivity of coinductive proofs for infinite rational terms. 
The other allows to make lazy sound observations of fragments of infinite irrational productive terms. This puts coinductive methods in LP on par with productivity-based observational approaches to coinduction in ITP and TRS." ] }
cs0212026
1666765554
Since the seminal work of J. A. Robinson on resolution, many lifting lemmas for simplifying proofs of completeness of resolution have been proposed in the literature. In the logic programming framework, they may also help to detect some infinite derivations while proving goals under the SLD-resolution. In this paper, we first generalize a version of the lifting lemma, by extending the relation "is more general than" so that it takes into account only some arguments of the atoms. The other arguments, which we call neutral arguments, are disregarded. Then we propose two syntactic conditions of increasing power for identifying neutral arguments from mere inspection of the text of a logic program.
OS-check (for OverSize loop check) was first introduced by Sahlin @cite_18 @cite_11 and was then formalized by Bol @cite_19 . It is based on a function @math that can have one of the three following definitions: for any atoms @math and @math , either @math , or @math (resp. @math ) is the count of symbols appearing in @math (resp. @math ), or @math if for each @math , the count of symbols of the @math -th argument of @math is smaller than or equal to that of the @math -th argument of @math . OS-check says that an SLD-derivation may be infinite if it generates an atomic subgoal @math that is oversized, i.e., that has ancestor subgoals which have the same predicate symbol as @math and whose size is smaller than or equal to that of @math .
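Using the same nested-tuple term representation as in the sketch above, a much-simplified OS-check along the lines of this description might look as follows. The depth bound on repeated predicate symbols that the cited work also uses, and the choice among the three size functions, are deliberately left out, so this is only an illustrative sketch.

```python
def size(term):
    """Symbol count, with terms encoded as nested tuples (functor, *args)."""
    return 1 + sum(size(a) for a in term[1:]) if isinstance(term, tuple) else 1

def predicate_symbol(atom):
    return atom[0] if isinstance(atom, tuple) else atom

def os_check(subgoal, ancestors):
    """Flag a subgoal as possibly looping when some ancestor subgoal has the
    same predicate symbol and a size no larger than the subgoal's.
    Simplified: the depth bound on repeated predicate symbols used by the
    real OS-check is omitted."""
    return any(predicate_symbol(a) == predicate_symbol(subgoal)
               and size(a) <= size(subgoal)
               for a in ancestors)

# p(f(X)) with ancestor p(X): same predicate, size grew from 2 to 3 -> flagged
print(os_check(("p", ("f", "X")), [("p", "X")]))   # True
```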
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_11" ], "mid": [ "1975546131", "2949121808", "1981314990" ], "abstract": [ "The Equality check and the Subsumption check are weakly sound, but are not complete even for function-free logic programs. Although the OverSize (OS) check is complete for positive logic programs, it is too general in the sense that it prunes SLD-derivations merely based on the depth-bound of repeated predicate symbols and the size of atoms, regardless of the inner structure of the atoms, so it may make wrong conclusions even for some simple programs. In this paper, we explore complete loop checking mechanisms for positive logic programs. We develop an extended Variant of Atoms (VA) check that has the following features: (1) it generalizes the concept of “variant” from “the same up to variable renaming” to “the same up to variable renaming except possibly with some arguments whose size recursively increases”, (2) it makes use of the depth-bound of repeated variants of atoms instead of depth-bound of repeated predicate symbols, (3) it combines the Equality Subsumption check with the VA check, (4) it is complete w. r. t. the leftmost selection rule for positive logic programs, and (5) it is more sound than both the OS check and all the existing versions of the VA check.", "Let @math be a @math -ary predicate over a finite alphabet. Consider a random CSP @math instance @math over @math variables with @math constraints. When @math the instance @math will be unsatisfiable with high probability, and we want to find a refutation - i.e., a certificate of unsatisfiability. When @math is the @math -ary OR predicate, this is the well studied problem of refuting random @math -SAT formulas, and an efficient algorithm is known only when @math . Understanding the density required for refutation of other predicates is important in cryptography, proof complexity, and learning theory. Previously, it was known that for a @math -ary predicate, having @math constraints suffices for refutation. We give a criterion for predicates that often yields efficient refutation algorithms at much lower densities. Specifically, if @math fails to support a @math -wise uniform distribution, then there is an efficient algorithm that refutes random CSP @math instances @math whp when @math . Indeed, our algorithm will \"somewhat strongly\" refute @math , certifying @math , if @math then we get the strongest possible refutation, certifying @math . This last result is new even in the context of random @math -SAT. Regarding the optimality of our @math requirement, prior work on SDP hierarchies has given some evidence that efficient refutation of random CSP @math may be impossible when @math . Thus there is an indication our algorithm's dependence on @math is optimal for every @math , at least in the context of SDP hierarchies. Along these lines, we show that our refutation algorithm can be carried out by the @math -round SOS SDP hierarchy. Finally, as an application of our result, we falsify assumptions used to show hardness-of-learning results in recent work of Daniely, Linial, and Shalev-Shwartz.", "Two complete loop checking mechanisms have been presented in the literature for logic programs with functions: OS-check and EVA-check. OS-check is computationally efficient but quite unreliable in that it often misidentifies infinite loops, whereas EVA-check is reliable for a majority of cases but quite expensive. In this paper, we develop a series of new complete loop checking mechanisms, called VAF-checks. 
The key technique we introduce is the notion of expanded variants, which captures a key structural characteristic of in finite loops. We show that our approach is superior to both OS-check and EVA-check in that it is as efficient as OS-check and as reliable as EVA-check. Copyright 2001 Elsevier Science B.V." ] }
cs0212026
1666765554
Since the seminal work of J. A. Robinson on resolution, many lifting lemmas for simplifying proofs of completeness of resolution have been proposed in the literature. In the logic programming framework, they may also help to detect some infinite derivations while proving goals under the SLD-resolution. In this paper, we first generalize a version of the lifting lemma, by extending the relation "is more general than" so that it takes into account only some arguments of the atoms. The other arguments, which we call neutral arguments, are disregarded. Then we propose two syntactic conditions of increasing power for identifying neutral arguments from mere inspection of the text of a logic program.
EVA-check (for Extended Variant Atoms loop check) was introduced by Shen @cite_13 . It is based on the notion of generalized variants. EVA-check says that an SLD-derivation may be infinite if it generates an atomic subgoal @math that is a generalized variant of one of its ancestors @math , i.e., @math is a variant of @math except for some arguments whose size increases from @math to @math via a set of recursive clauses. Here the size function that is used applies to predicate arguments, i.e., to terms, and it is fixed: it is defined as the count of symbols that appear in the terms. EVA-check is more reliable than OS-check because it is less likely to misidentify infinite loops @cite_13 . This is mainly due to the fact that, unlike OS-check, EVA-check refers to the informative internal structure of subgoals.
{ "cite_N": [ "@cite_13" ], "mid": [ "1975546131" ], "abstract": [ "The Equality check and the Subsumption check are weakly sound, but are not complete even for function-free logic programs. Although the OverSize (OS) check is complete for positive logic programs, it is too general in the sense that it prunes SLD-derivations merely based on the depth-bound of repeated predicate symbols and the size of atoms, regardless of the inner structure of the atoms, so it may make wrong conclusions even for some simple programs. In this paper, we explore complete loop checking mechanisms for positive logic programs. We develop an extended Variant of Atoms (VA) check that has the following features: (1) it generalizes the concept of “variant” from “the same up to variable renaming” to “the same up to variable renaming except possibly with some arguments whose size recursively increases”, (2) it makes use of the depth-bound of repeated variants of atoms instead of depth-bound of repeated predicate symbols, (3) it combines the Equality Subsumption check with the VA check, (4) it is complete w. r. t. the leftmost selection rule for positive logic programs, and (5) it is more sound than both the OS check and all the existing versions of the VA check." ] }
cs0212026
1666765554
Since the seminal work of J. A. Robinson on resolution, many lifting lemmas for simplifying proofs of completeness of resolution have been proposed in the literature. In the logic programming framework, they may also help to detect some infinite derivations while proving goals under the SLD-resolution. In this paper, we first generalize a version of the lifting lemma, by extending the relation "is more general than" so that it takes into account only some arguments of the atoms. The other arguments, which we call neutral arguments, are disregarded. Then we propose two syntactic conditions of increasing power for identifying neutral arguments from mere inspection of the text of a logic program.
VAF-check (for Variant Atoms loop check for logic programs with Functions) was proposed by Shen @cite_14 . It is based on the notion of expanded variants. VAF-check says that an SLD-derivation may be infinite if it generates an atomic subgoal @math that is an expanded variant of one of its ancestors @math , i.e., @math is a variant of @math except for some arguments @math such that either @math grows from @math to @math into a function containing @math , or @math grows from @math to @math into a function containing @math . VAF-check is as reliable as, and more efficient than, EVA-check @cite_14 .
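The following Python fragment gives a deliberately loose version of the "expanded variant" idea: same predicate symbol, and each argument either unchanged or grown into a term that contains the old argument. Variable renaming and the finer conditions of the actual VAF definition are ignored, and the nested-tuple term encoding is my own illustrative choice, so this is only a sketch.

```python
def occurs_in(sub, term):
    """True if `sub` occurs as a subterm of `term` (terms as nested tuples)."""
    if term == sub:
        return True
    return isinstance(term, tuple) and any(occurs_in(sub, a) for a in term[1:])

def expanded_variant_sketch(subgoal, ancestor):
    """A much-simplified 'expanded variant' test: same predicate symbol, and
    every argument either is unchanged or has grown into a term containing
    the ancestor's argument.  The finer conditions of the actual VAF
    definition (in particular variable renaming) are deliberately ignored."""
    if not (isinstance(subgoal, tuple) and isinstance(ancestor, tuple)):
        return subgoal == ancestor
    if subgoal[0] != ancestor[0] or len(subgoal) != len(ancestor):
        return False
    return all(new == old or occurs_in(old, new)
               for old, new in zip(ancestor[1:], subgoal[1:]))

# p(f(X), Y) is an expanded variant (in this loose sense) of p(X, Y)
print(expanded_variant_sketch(("p", ("f", "X"), "Y"), ("p", "X", "Y")))  # True
```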
{ "cite_N": [ "@cite_14" ], "mid": [ "1981314990" ], "abstract": [ "Two complete loop checking mechanisms have been presented in the literature for logic programs with functions: OS-check and EVA-check. OS-check is computationally efficient but quite unreliable in that it often misidentifies infinite loops, whereas EVA-check is reliable for a majority of cases but quite expensive. In this paper, we develop a series of new complete loop checking mechanisms, called VAF-checks. The key technique we introduce is the notion of expanded variants, which captures a key structural characteristic of in finite loops. We show that our approach is superior to both OS-check and EVA-check in that it is as efficient as OS-check and as reliable as EVA-check. Copyright 2001 Elsevier Science B.V." ] }
cs0212045
1877185759
Community identification algorithms have been used to enhance the quality of the services perceived by their users. Although algorithms for community identification have widespread use on the Web, their application to portals or specific subsets of the Web has not been much studied. In this paper, we propose a technique for local community identification that takes into account user access behavior derived from access logs of servers in the Web. The technique departs from existing community algorithms since it changes the focus of interest, moving from authors to users. Our approach does not use relations imposed by authors (e.g. hyperlinks in the case of Web pages). It uses information derived from user accesses to a service in order to infer relationships. The communities identified are of great interest to content providers since they can be used to improve the quality of their services. We also propose an evaluation methodology for analyzing the results obtained by the algorithm. We present two case studies based on actual data from two services: an online bookstore and an online radio. The case of the online radio is particularly relevant, because it emphasizes the contribution of the proposed algorithm in finding communities in an environment (i.e., a streaming media service) without links that represent the relations imposed by authors (e.g. hyperlinks in the case of Web pages).
A considerable amount of research has been devoted to community identification on the Web. Most approaches focus on analyzing text content, considering vector-space models for the objects, as usually done in Information Retrieval @cite_7 , the hyperlink structure connecting the pages @cite_5 @cite_12 @cite_4 , the markup tags associated with the hyperlinks, or a combination of the previously cited sources of information @cite_6 @cite_3 . Therefore, they are restricted to objects that contain implicit information provided by the authors. Our work, on the other hand, is based solely on user access behavior.
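One plausible, purely illustrative way to infer relationships from user accesses rather than from author-imposed links is to build a co-access graph from server logs and hand it to a community algorithm. The log format, the weighting by shared users, and the threshold in the sketch below are all hypothetical; this is not the construction used in the paper.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical access log: (user_id, object_id) pairs, e.g. parsed from a
# server log.  Both the format and the threshold below are illustrative only.
ACCESS_LOG = [
    ("u1", "bookA"), ("u1", "bookB"),
    ("u2", "bookA"), ("u2", "bookB"),
    ("u3", "bookC"),
]

def co_access_graph(log, min_shared_users=2):
    """Connect two objects when at least `min_shared_users` distinct users
    accessed both; a community algorithm can then run on this graph instead
    of on a hyperlink graph."""
    users_per_object = defaultdict(set)
    for user, obj in log:
        users_per_object[obj].add(user)
    edges = {}
    for a, b in combinations(sorted(users_per_object), 2):
        shared = len(users_per_object[a] & users_per_object[b])
        if shared >= min_shared_users:
            edges[(a, b)] = shared
    return edges

print(co_access_graph(ACCESS_LOG))   # {('bookA', 'bookB'): 2}
```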
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_6", "@cite_3", "@cite_5", "@cite_12" ], "mid": [ "2619328388", "2076008912", "2109757083", "2962984063", "2079149066", "1784290353" ], "abstract": [ "This paper focuses on fine-grained object classification using recognized scene text in natural images. While the state-of-the-art relies on visual cues only, this paper is the first work which proposes to combine textual and visual cues. Another novelty is the textual cue extraction. Unlike the state-of-the-art text detection methods, we focus more on the background instead of text regions. Once text regions are detected, they are further processed by two methods to perform text recognition, i.e., ABBYY commercial OCR engine and a state-of-the-art character recognition algorithm. Then, to perform textual cue encoding, bi- and trigrams are formed between the recognized characters by considering the proposed spatial pairwise constraints. Finally, extracted visual and textual cues are combined for fine-grained classification. The proposed method is validated on four publicly available data sets: ICDAR03, ICDAR13, Con-Text , and Flickr-logo . We improve the state-of-the-art end-to-end character recognition by a large margin of 15 on ICDAR03. We show that textual cues are useful in addition to visual cues for fine-grained classification. We show that textual cues are also useful for logo retrieval. Adding textual cues outperforms visual- and textual-only in fine-grained classification (70.7 to 60.3 ) and logo retrieval (57.4 to 54.8 ).", "A major challenge in indexing unstructured hypertext databases is to automatically extract meta-data that enables structured search using topic taxonomies, circumvents keyword ambiguity, and improves the quality of search and profile-based routing and filtering. Therefore, an accurate classifier is an essential component of a hypertext database. Hyperlinks pose new problems not addressed in the extensive text classification literature. Links clearly contain high-quality semantic clues that are lost upon a purely term-based classifier, but exploiting link information is non-trivial because it is noisy. Naive use of terms in the link neighborhood of a document can even degrade accuracy. Our contribution is to propose robust statistical models and a relaxation labeling technique for better classification by exploiting link information in a small neighborhood around documents. Our technique also adapts gracefully to the fraction of neighboring documents having known topics. We experimented with pre-classified samples from Yahoo! 1 and the US Patent Database 2 . In previous work, we developed a text classifier that misclassified only 13 of the documents in the well-known Reuters benchmark; this was comparable to the best results ever obtained. This classifier misclassified 36 of the patents, indicating that classifying hypertext can be more difficult than classifying text. Naively using terms in neighboring documents increased error to 38 ; our hypertext classifier reduced it to 21 . Results with the Yahoo! sample were more dramatic: the text classifier showed 68 error, whereas our hypertext classifier reduced this to only 21 .", "In this paper, a method based on part-of-speech tagging (PoS) is used for bibliographic reference structure. This method operates on a roughly structured ASCII file, produced by OCR. 
Because of the heterogeneity of the reference structure, the method acts in a bottom-up way, without an a priori model, gathering structural elements from basic tags to sub-fields and fields. Significant tags are first grouped in homogeneous classes according to their grammar categories and then reduced in canonical forms corresponding to record fields: \"authors\", \"title\", \"conference name\", \"date\", etc. Non labelled tokens are integrated in one or another field by either applying PoS correction rules or using a structure model generated from well-detected records. The designed prototype operates with a great satisfaction on different record layouts and character recognition qualities. Without manual intervention, 96.6 words are correctly attributed, and about 75.9 references are completely segmented from 2500 references.", "Abstract Motivated by the success of powerful while expensive techniques to recognize words in a holistic way (, 2013; , 2014; , 2016) object proposals techniques emerge as an alternative to the traditional text detectors. In this paper we introduce a novel object proposals method that is specifically designed for text. We rely on a similarity based region grouping algorithm that generates a hierarchy of word hypotheses. Over the nodes of this hierarchy it is possible to apply a holistic word recognition method in an efficient way. Our experiments demonstrate that the presented method is superior in its ability of producing good quality word proposals when compared with class-independent algorithms. We show impressive recall rates with a few thousand proposals in different standard benchmarks, including focused or incidental text datasets, and multi-language scenarios. Moreover, the combination of our object proposals with existing whole-word recognizers (, 2014; , 2016) shows competitive performance in end-to-end word spotting, and, in some benchmarks, outperforms previously published results. Concretely, in the challenging ICDAR2015 Incidental Text dataset, we overcome in more than 10 F -score the best-performing method in the last ICDAR Robust Reading Competition (Karatzas, 2015). Source code of the complete end-to-end system is available at https: github.com lluisgomez TextProposals .", "Four major factors govern the intricacies of community extraction in networks: (1) the literature offers a multitude of disparate community detection algorithms whose output exhibits high structural variability across the collection, (2) communities identified by algorithms may differ structurally from real communities that arise in practice, (3) there is no consensus characterizing how to discriminate communities from noncommunities, and (4) the application domain includes a wide variety of networks of fundamentally different natures. In this article, we present a class separability framework to tackle these challenges through a comprehensive analysis of community properties. Our approach enables the assessment of the structural dissimilarity among the output of multiple community detection algorithms and between the output of algorithms and communities that arise in practice. In addition, our method provides us with a way to organize the vast collection of community detection algorithms by grouping those that behave similarly. Finally, we identify the most discriminative graph-theoretical properties of community signature and the small subset of properties that account for most of the biases of the different community detection algorithms. 
We illustrate our approach with an experimental analysis, which reveals nuances of the structure of real and extracted communities. In our experiments, we furnish our framework with the output of 10 different community detection procedures, representative of categories of popular algorithms available in the literature, applied to a diverse collection of large-scale real network datasets whose domains span biology, online shopping, and social systems. We also analyze communities identified by annotations that accompany the data, which reflect exemplar communities in various domain. We characterize these communities using a broad spectrum of community properties to produce the different structural classes. As our experiments show that community structure is not a universal concept, our framework enables an informed choice of the most suitable community detection method for identifying communities of a specific type in a given network and allows for a comparison of existing community detection algorithms while guiding the design of new ones.", "Over the last decades, several billion Web pages have been made available on the Web. The ongoing transition from the current Web of unstructured data to the Web of Data yet requires scalable and accurate approaches for the extraction of structured data in RDF (Resource Description Framework) from these websites. One of the key steps towards extracting RDF from text is the disambiguation of named entities. While several approaches aim to tackle this problem, they still achieve poor accuracy. We address this drawback by presenting AGDISTIS, a novel knowledge-base-agnostic approach for named entity disambiguation. Our approach combines the Hypertext-Induced Topic Search (HITS) algorithm with label expansion strategies and string similarity measures. Based on this combination, AGDISTIS can efficiently detect the correct URIs for a given set of named entities within an input text. We evaluate our approach on eight different datasets against state-of-the-art named entity disambiguation frameworks. Our results indicate that we outperform the state-of-the-art approach by up to 29 F-measure." ] }
cs0212045
1877185759
Community identification algorithms have been used to enhance the quality of the services perceived by their users. Although algorithms for community identification have widespread use on the Web, their application to portals or specific subsets of the Web has not been much studied. In this paper, we propose a technique for local community identification that takes into account user access behavior derived from access logs of servers in the Web. The technique departs from existing community algorithms since it changes the focus of interest, moving from authors to users. Our approach does not use relations imposed by authors (e.g. hyperlinks in the case of Web pages). It uses information derived from user accesses to a service in order to infer relationships. The communities identified are of great interest to content providers since they can be used to improve the quality of their services. We also propose an evaluation methodology for analyzing the results obtained by the algorithm. We present two case studies based on actual data from two services: an online bookstore and an online radio. The case of the online radio is particularly relevant, because it emphasizes the contribution of the proposed algorithm in finding communities in an environment (i.e., a streaming media service) without links that represent the relations imposed by authors (e.g. hyperlinks in the case of Web pages).
Moreover, we consider community identification applied to a local context instead of the whole Web. Our approach aims to adapt the graph-based community identification algorithm described in @cite_5 . Some modifications to @cite_5 that take user information into account have already been proposed in @cite_10 . However, that work was not focused on the community identification capabilities of @cite_5 and also considered a different representation of user patterns.
{ "cite_N": [ "@cite_5", "@cite_10" ], "mid": [ "2079149066", "2552367669" ], "abstract": [ "Four major factors govern the intricacies of community extraction in networks: (1) the literature offers a multitude of disparate community detection algorithms whose output exhibits high structural variability across the collection, (2) communities identified by algorithms may differ structurally from real communities that arise in practice, (3) there is no consensus characterizing how to discriminate communities from noncommunities, and (4) the application domain includes a wide variety of networks of fundamentally different natures. In this article, we present a class separability framework to tackle these challenges through a comprehensive analysis of community properties. Our approach enables the assessment of the structural dissimilarity among the output of multiple community detection algorithms and between the output of algorithms and communities that arise in practice. In addition, our method provides us with a way to organize the vast collection of community detection algorithms by grouping those that behave similarly. Finally, we identify the most discriminative graph-theoretical properties of community signature and the small subset of properties that account for most of the biases of the different community detection algorithms. We illustrate our approach with an experimental analysis, which reveals nuances of the structure of real and extracted communities. In our experiments, we furnish our framework with the output of 10 different community detection procedures, representative of categories of popular algorithms available in the literature, applied to a diverse collection of large-scale real network datasets whose domains span biology, online shopping, and social systems. We also analyze communities identified by annotations that accompany the data, which reflect exemplar communities in various domain. We characterize these communities using a broad spectrum of community properties to produce the different structural classes. As our experiments show that community structure is not a universal concept, our framework enables an informed choice of the most suitable community detection method for identifying communities of a specific type in a given network and allows for a comparison of existing community detection algorithms while guiding the design of new ones.", "Detecting communities in graphs is a fundamental tool to understand the structure of Web-based systems and predict their evolution. Many community detection algorithms are designed to process undirected graphs (i.e., graphs with bidirectional edges) but many graphs on the Web-e.g., microblogging Web sites, trust networks or the Web graph itself-are often directed . Few community detection algorithms deal with directed graphs but we lack their experimental comparison. In this paper we evaluated some community detection algorithms across accuracy and scalability. A first group of algorithms (Label Propagation and Infomap) are explicitly designed to manage directed graphs while a second group (e.g., WalkTrap) simply ignores edge directionality; finally, a third group of algorithms (e.g., Eigenvector) maps input graphs onto undirected ones and extracts communities from the symmetrized version of the input graph. 
We ran our tests on both artificial and real graphs and, on artificial graphs, WalkTrap achieved the highest accuracy, closely followed by other algorithms; Label Propagation has outstanding performance in scalability on both artificial and real graphs. The Infomap algorithm showcased the best trade-off between accuracy and computational performance and, therefore, it has to be considered as a promising tool for Web Data Analytics purposes." ] }
quant-ph0212071
1522728444
We apply algorithmic information theory to quantum mechanics in order to shed light on an algorithmic structure which inheres in quantum mechanics. There are two equivalent ways to define the (classical) Kolmogorov complexity K(s) of a given classical finite binary string s. In the standard way, K(s) is defined as the length of the shortest input string for the universal self-delimiting Turing machine to output s. In the other way, we first introduce the so-called universal probability m, and then define K(s) as -log_2 m(s) without using the concept of program-size. We generalize the universal probability to a matrix-valued function, and identify this function with a POVM (positive operator-valued measure). On the basis of this identification, we study a computable POVM measurement with countable measurement outcomes performed upon a finite dimensional quantum system. We show that, up to a multiplicative constant, 2^ -K(s) is the upper bound for the probability of each measurement outcome s in such a POVM measurement. In what follows, the upper bound 2^ -K(s) is shown to be optimal in a certain sense.
Our aim is to generalize algorithmic information theory in order to understand the algorithmic features of quantum mechanics. There are related works whose purpose is mainly to define the information content of an individual pure quantum state, i.e., to define the complexity of the quantum state @cite_4 @cite_3 @cite_2 , whereas we do not make such an attempt in this paper.
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_2" ], "mid": [ "1999580286", "2137637395", "2106534004" ], "abstract": [ "We extend algorithmic information theory to quantum mechanics, taking a universal semicomputable density matrix ( universal probability') as a starting point, and define complexity (an operator) as its negative logarithm. A number of properties of Kolmogorov complexity extend naturally to the new domain. Approximately, a quantum state is simple if it is within a small distance from a low-dimensional subspace of low Kolmogorov complexity. The von Neumann entropy of a computable density matrix is within an additive constant from the average complexity. Some of the theory of randomness translates to the new domain. We explore the relations of the new quantity to the quantum Kolmogorov complexity defined by Vitanyi (we show that the latter is sometimes as large as 2n − 2 log n) and the qubit complexity defined by Berthiaume, Dam and Laplante. The cloning' properties of our complexity measure are similar to those of qubit complexity.", "This paper employs a powerful argument, called an algorithmic argument, to prove lower bounds of the quantum query complexity of a multiple-block ordered search problem, which is a natural generalization of the ordered search problem. Apart from much studied polynomial and adversary methods for quantum query complexity lower bounds, our argument shows that the multiple-block ordered search needs a large number of nonadaptive oracle queries on a black-box model of quantum computation that is also supplemented with advice. Our argument is also applied to the notions of computational complexity theory: quantum truth-table reducibility and quantum truth-table autoreducibility.", "We develop a theory of the algorithmic information in bits contained in an individual pure quantum state. This extends classical Kolmogorov complexity to the quantum domain retaining classical descriptions. Quantum Kolmogorov complexity coincides with the classical Kolmogorov complexity on the classical domain. Quantum Kolmogorov complexity is upper-bounded and can be effectively approximated from above under certain conditions. With high probability, a quantum object is incompressible. Upper and lower bounds of the quantum complexity of multiple copies of individual pure quantum states are derived and may shed some light on the no-cloning properties of quantum states. In the quantum situation complexity is not subadditive. We discuss some relations with \"no-cloning\" and \"approximate cloning\" properties." ] }
quant-ph0212071
1522728444
We apply algorithmic information theory to quantum mechanics in order to shed light on an algorithmic structure which inheres in quantum mechanics. There are two equivalent ways to define the (classical) Kolmogorov complexity K(s) of a given classical finite binary string s. In the standard way, K(s) is defined as the length of the shortest input string for the universal self-delimiting Turing machine to output s. In the other way, we first introduce the so-called universal probability m, and then define K(s) as -log_2 m(s) without using the concept of program-size. We generalize the universal probability to a matrix-valued function, and identify this function with a POVM (positive operator-valued measure). On the basis of this identification, we study a computable POVM measurement with countable measurement outcomes performed upon a finite dimensional quantum system. We show that, up to a multiplicative constant, 2^ -K(s) is the upper bound for the probability of each measurement outcome s in such a POVM measurement. In what follows, the upper bound 2^ -K(s) is shown to be optimal in a certain sense.
In quantum mechanics, what is represented by a matrix is either a quantum state or a measurement operator. In this paper we generalize the universal probability to a matrix-valued function in a different way from @cite_2 , and identify it with an analogue of a POVM. We do not insist on defining the information content of a quantum state. Instead, we focus on applying algorithmic information theory to quantum mechanics in order to shed light on an algorithmic structure of quantum mechanics. In this line we have the above inequalities and .
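Schematically, and using E_s and ρ as hypothetical names for the POVM element of outcome s and the measured state, the bound stated in the abstract above can be written as follows, with C a constant independent of s (one reading of "up to a multiplicative constant"):

```latex
% Schematic restatement of the bound from the abstract; E_s and \rho are
% hypothetical names, and C is a constant that does not depend on the outcome s.
\[
  \Pr[\text{outcome } s] \;=\; \operatorname{Tr}(E_s\,\rho)
  \;\le\; C \cdot 2^{-K(s)},
  \qquad K(s) \;=\; -\log_2 m(s).
\]
```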
{ "cite_N": [ "@cite_2" ], "mid": [ "1999580286" ], "abstract": [ "We extend algorithmic information theory to quantum mechanics, taking a universal semicomputable density matrix ( universal probability') as a starting point, and define complexity (an operator) as its negative logarithm. A number of properties of Kolmogorov complexity extend naturally to the new domain. Approximately, a quantum state is simple if it is within a small distance from a low-dimensional subspace of low Kolmogorov complexity. The von Neumann entropy of a computable density matrix is within an additive constant from the average complexity. Some of the theory of randomness translates to the new domain. We explore the relations of the new quantity to the quantum Kolmogorov complexity defined by Vitanyi (we show that the latter is sometimes as large as 2n − 2 log n) and the qubit complexity defined by Berthiaume, Dam and Laplante. The cloning' properties of our complexity measure are similar to those of qubit complexity." ] }
quant-ph0212071
1522728444
We apply algorithmic information theory to quantum mechanics in order to shed light on an algorithmic structure which inheres in quantum mechanics. There are two equivalent ways to define the (classical) Kolmogorov complexity K(s) of a given classical finite binary string s. In the standard way, K(s) is defined as the length of the shortest input string for the universal self-delimiting Turing machine to output s. In the other way, we first introduce the so-called universal probability m, and then define K(s) as -log_2 m(s) without using the concept of program-size. We generalize the universal probability to a matrix-valued function, and identify this function with a POVM (positive operator-valued measure). On the basis of this identification, we study a computable POVM measurement with countable measurement outcomes performed upon a finite dimensional quantum system. We show that, up to a multiplicative constant, 2^ -K(s) is the upper bound for the probability of each measurement outcome s in such a POVM measurement. In what follows, the upper bound 2^ -K(s) is shown to be optimal in a certain sense.
In each of @cite_4 and @cite_3 , the quantum Kolmogorov complexity of a qubit string was defined as a quantum generalization of the standard definition of classical Kolmogorov complexity: the length of the shortest input for the universal decoding algorithm @math to output a finite binary string. Both @cite_4 and @cite_3 adopt a universal quantum Turing machine as the universal decoding algorithm @math that outputs a quantum state in their definitions. However, they differ with respect to the object allowed as an input to @math : @cite_4 allows only a classical binary string as an input, whereas @cite_3 allows any qubit string. The works @cite_4 , @cite_3 , and @cite_2 are closely related to one another, as shown in each of these works. Since our work is, in essence, based on a generalization of the universal probability, @cite_2 is more closely related to our work than @cite_4 and @cite_3 ; the latter two may be related to ours via @cite_2 .
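Schematically, and leaving out the fidelity conditions that both definitions impose, the two notions compared here differ only in the set of admissible inputs to @math ; the notation below is introduced purely for illustration and is not taken from either paper.

\[ K_{\mathrm{classical\text{-}input}}(x) \;=\; \min\{\, |p| \;:\; p \in \{0,1\}^{*},\ U(p) \approx x \,\}, \]
\[ K_{\mathrm{quantum\text{-}input}}(x) \;=\; \min\{\, \ell(P) \;:\; P \text{ a qubit string},\ U(P) \approx x \,\}, \]

where \approx stands for the respective high-fidelity reproduction requirement and \ell(P) for the length of the qubit string P.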
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_2" ], "mid": [ "2762038352", "1999580286", "2072474105" ], "abstract": [ "In this paper we give a definition for quantum Kolmogorov complexity. In the classical setting, the Kolmogorov complexity of a string is the length of the shortest program that can produce this string as its output. It is a measure of the amount of innate randomness (or information) contained in the string. We define the quantum Kolmogorov complexity of a qubit string as the length of the shortest quantum input to a universal quantum Turing machine that produces the initial qubit string with high fidelity. The definition of P. Vitanyi (2001, IEEE Trans. Inform. Theory47, 2464?2479) measures the amount of classical information, whereas we consider the amount of quantum information in a qubit string. We argue that our definition is a natural and accurate representation of the amount of quantum information contained in a quantum state. Recently, P. Gacs (2001, J. Phys. A: Mathematical and General34, 6859?6880) also proposed two measures of quantum algorithmic entropy which are based on the existence of a universal semidensity matrix. The latter definitions are related to Vitanyi's and the one presented in this article, respectively.", "We extend algorithmic information theory to quantum mechanics, taking a universal semicomputable density matrix ( universal probability') as a starting point, and define complexity (an operator) as its negative logarithm. A number of properties of Kolmogorov complexity extend naturally to the new domain. Approximately, a quantum state is simple if it is within a small distance from a low-dimensional subspace of low Kolmogorov complexity. The von Neumann entropy of a computable density matrix is within an additive constant from the average complexity. Some of the theory of randomness translates to the new domain. We explore the relations of the new quantity to the quantum Kolmogorov complexity defined by Vitanyi (we show that the latter is sometimes as large as 2n − 2 log n) and the qubit complexity defined by Berthiaume, Dam and Laplante. The cloning' properties of our complexity measure are similar to those of qubit complexity.", "Abstract Given two infinite binary sequences A , B we say that B can compress at least as well as A if the prefix-free Kolmogorov complexity relative to B of any binary string is at most as much as the prefix-free Kolmogorov complexity relative to A , modulo a constant. This relation, introduced in Nies (2005) [14] and denoted by A ≤ L K B , is a measure of relative compressing power of oracles, in the same way that Turing reducibility is a measure of relative information. The equivalence classes induced by ≤ L K are called L K degrees (or degrees of compressibility) and there is a least degree containing the oracles which can only compress as much as a computable oracle, also called the ‘low for K ’ sets. A well-known result from Nies (2005) [14] states that these coincide with the K-trivial sets, which are the ones whose initial segments have minimal prefix-free Kolmogorov complexity. We show that with respect to ≤ L K , given any non-trivial Δ 2 0 sets X , Y there is a computably enumerable set A which is not K-trivial and it is below X , Y . This shows that the local structures of Σ 1 0 and Δ 2 0 Turing degrees are not elementarily equivalent to the corresponding local structures in the LK degrees. 
It also shows that there is no pair of sets computable from the halting problem which forms a minimal pair in the LK degrees; this is sharp in terms of the jump, as it is known that there are sets computable from 0 ″ which form a minimal pair in the LK degrees. We also show that the structure of LK degrees below the LK degree of the halting problem is not elementarily equivalent to the Δ 2 0 or Σ 1 0 structures of LK degrees. The proofs introduce a new technique of permitting below a Δ 2 0 set that is not K -trivial, which is likely to have wider applications." ] }
0705.4604
1556312007
In this paper we present an algorithm for performing runtime verification of a bounded temporal logic over timed runs. The algorithm consists of three elements. First, the bounded temporal formula to be verified is translated into a monadic first-order logic over difference inequalities, which we call monadic difference logic. Second, at each step of the timed run, the monadic difference formula is modified by computing a quotient with the state and time of that step. Third, the resulting formula is checked for being a tautology or being unsatisfiable by a decision procedure for monadic difference logic. We further provide a simple decision procedure for monadic difference logic based on the data structure Difference Decision Diagrams. The algorithm is complete in a very strong sense on a subclass of temporal formulae characterized as homogeneously monadic and it is approximate on other formulae. The approximation comes from the fact that not all unsatisfiable or tautological formulae are recognised at the earliest possible time of the runtime verification. Contrary to existing approaches, the presented algorithms do not work by syntactic rewriting but employ efficient decision structures which make them applicable in real applications within for instance business software.
We take a different approach. By encoding the runtime verification problem as a satisfiability problem for a monadic first-order logic, we arrive at a different type of algorithm. This algorithm can exploit a powerful decision structure for difference logic that inherits some of the strengths of binary decision diagrams @cite_20 .
{ "cite_N": [ "@cite_20" ], "mid": [ "1515930456" ], "abstract": [ "The problem of deciding the satisfiability of a quantifier-free formula with respect to a background theory, also known as Satisfiability Modulo Theories (SMT), is gaining increasing relevance in verification: representation capabilities beyond propositional logic allow for a natural modeling of real-world problems (e.g., pipeline and RTL circuits verification, proof obligations in software systems). In this paper, we focus on the case where the background theory is the combination T1∪T2 of two simpler theories. Many SMT procedures combine a boolean model enumeration with a decision procedure for T1∪T2, where conjunctions of literals can be decided by an integration schema such as Nelson-Oppen, via a structured exchange of interface formulae (e.g., equalities in the case of convex theories, disjunctions of equalities otherwise). We propose a new approach for SMT(T1∪T2), called Delayed Theory Combination, which does not require a decision procedure for T1∪T2, but only individual decision procedures for T1 and T2, which are directly integrated into the boolean model enumerator. This approach is much simpler and natural, allows each of the solvers to be implemented and optimized without taking into account the others, and it nicely encompasses the case of non-convex theories. We show the effectiveness of the approach by a thorough experimental comparison." ] }
0705.3468
2950373299
Recently, the iterative approach named linear tabling has received considerable attention because of its simplicity, ease of implementation, and good space efficiency. Linear tabling is a framework from which different methods can be derived based on the strategies used in handling looping subgoals. One decision concerns when answers are consumed and returned. This paper describes two strategies, namely, lazy and eager strategies, and compares them both qualitatively and quantitatively. The results indicate that, while the lazy strategy has good locality and is well suited for finding all solutions, the eager strategy is comparable in speed with the lazy strategy and is well suited for programs with cuts. Linear tabling relies on depth-first iterative deepening rather than suspension to compute fixpoints. Each cluster of inter-dependent subgoals as represented by a top-most looping subgoal is iteratively evaluated until no subgoal in it can produce any new answers. Naive re-evaluation of all looping subgoals, albeit simple, may be computationally unacceptable. In this paper, we also introduce semi-naive optimization, an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers, into linear tabling. We give the conditions for the technique to be safe (i.e. sound and complete) and propose an optimization technique called early answer promotion to enhance its effectiveness. Benchmarking in B-Prolog demonstrates that with this optimization linear tabling compares favorably well in speed with the state-of-the-art implementation of SLG.
There are three different tabling schemes, namely OLDT and SLG @cite_27 @cite_10 , CAT @cite_20 @cite_35 , and iteration-based tabling including linear tabling @cite_23 @cite_18 @cite_33 @cite_7 @cite_17 and DRA @cite_15 . SLG @cite_16 is a formalization based on OLDT for computing well-founded semantics for general programs with negation. The basic idea of using iterative deepening to compute fixpoints dates back to the ET* algorithm @cite_2 .
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_33", "@cite_7", "@cite_27", "@cite_23", "@cite_2", "@cite_15", "@cite_16", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "2042416159", "2118889869", "2070598037", "2169591289", "1522225310", "190622319", "1997210046", "2495949634", "2130683061", "2108205915", "2096979400", "2115896319" ], "abstract": [ "Early resolution mechanisms proposed for tabling such as OLDT rely on suspension and resumption of subgoals to compute fixpoints. Recently, a new resolution framework called linear tabling has emerged as an alternative tabling method. The idea of linear tabling is to use iterative computation rather than suspension to compute fixpoints. Although linear tabling is simple, easy to implement, and superior in space efficiency, the current implementations are several times slower than XSB, the state-of-the-art implementation of OLDT, due to re-evaluation of looping subgoals. In this paper, we present a new linear tabling method and propose several optimization techniques for fast computation of fixpoints. The optimization techniques significantly improve the performance by avoiding redundant evaluation of subgoals, re-application of clauses, and reproduction of answers in iterative computation. Our implementation of the method in B-Prolog not only consumes an order of magnitude less stack space than XSB for some programs but also compares favorably well with XSB in speed.", "The Copying Approach to Tabling, abbrv. CAT, is an alternative to SLG-WAM and based on total copying of the areas that the SLG-WAM freezes to preserve execution states of suspended computations. The disadvantage of CAT as pointed out in a previous paper is that in the worst case, CAT must copy so much that it becomes arbitrarily worse than the SLG-WAM. Remedies to this problem have been studied, but a completely satisfactory solution has not emerged. Here, a hybrid approach is presented: CHAT. Its design was guided by the requirement that for non-tabled (i.e. Prolog) execution no changes to the underlying WAM engine need to be made. CHAT combines certain features of the SLG-WAM with features of CAT, but also introduces a technique for freezing WAM stacks without the use of the SLG-WAM's freeze registers that is of independent interest. Empirical results indicate that CHAT is a better choice for implementing the control of tabling than SLG-WAM or CAT. However, programs with arbitrarily worse behaviour exist.", "SLD resolution with negation as finite failure (SLDNF) reflects the procedural interpretation of predicate calculus as a programming language and forms the computational basis for Prolog systems. Despite its advantages for stack-based memory management, SLDNF is often not appropriate for query evaluation for three reasons: (a) it may not terminate due to infinite positive recursion; (b) it may be terminate due to infinite recursion through negation; and (c) it may repeatedly evaluate the same literal in a rule body, leading to unacceptable performance. We address all three problems for goal-oriented query evaluation of general logic programs by presenting tabled evaluation with delaying, called SLG resolution. It has three distinctive features: (i) SLG resolution is a partial deduction procedure, consisting of seven fundamental transformations. A query is transformed step by step into a set of answers. The use of transformations separates logical issues of query evaluation from procedural ones. 
SLG allows an arbitrary computation rule for selecting a literal from a rule body and an arbitrary control strategy for selecting transformations to apply. (ii) SLG resolution is sound and search space complete with respect to the well-founded partial model for all non-floundering queries, and preserves all three-valued stable models. To evaluate a query under differenc three-valued stable models, SLG resolution can be enhanced by further processing of the answers of subgoals relevant to a query. (iii) SLG resolution avoids both positive and negative loops and always terminates for programs with the bounded-term-size property. It has a polynomial time data complexity for well-founded negation of function-free programs. Through a delaying mechanism for handling ground negative literals involved in loops, SLG resolution avoids the repetition of any of its derivation steps. Restricted forms of SLG resolution are identified for definite, locally stratified, and modularly stratified programs, shedding light on the role each transformation plays.", "Global SLS-resolution and SLG-resolution are two representative mechanisms for top-down evaluation of the well-founded semantics of general logic programs. Global SLS-resolution is linear but suffers from infinite loops and redundant computations. In contrast, SLG-resolution resolves infinite loops and redundant computations by means of tabling, but it is not linear. The distinctive advantage of a linear approach is that it can be implemented using a simple, efficient stack-based memory structure like that in Prolog. In this paper we present a linear tabulated resolution for the well-founded semantics, which resolves the problems of infinite loops and redundant computations while preserving the linearity. For non-floundering queries, the proposed method is sound and complete for general logic programs with the bounded-term-size property.", "To resolve the search-incompleteness of depth-first logic program interpreters, a new interpretation method based on the tabulation technique is developed and modeled as a refinement to SLD resolution. Its search space completeness is proved, and a complete search strategy consisting of iterated stages of depth-first search is presented. It is also proved that for programs defining finite relations only, the method under an arbitrary search strategy is terminating and complete.", "Abstract We propose an extension of the SLDNF proof procedure for checking integrity constraints in deductive databases. To achieve the effect of the simplification methods investigated by Nicolas [1982], Lloyd, Sonenberg, and Topor [1986], and Decker [1986], we use clauses corresponding to the updates as top clauses for the search space. This builds in the assumption that the database satisfied its integrity constraints prior to the transaction, and that, therefore, any violation of the constraints in the updated database must involve at least one of the updates in the transaction. Different simplification methods can be simulated by different strategies for literal selection and search. The SLDNF proof procedure needs to be extended to use as top clause any arbitrary deductive rule, denial, or negated fact, to incorporate inference rules for reasoning about implicit deletions resulting from other deletions and additions, and to allow an extended resolution step for reasoning forward from negated facts.", "SLG resolution uses tabling to evaluate nonfloundering normal logic pr ograms according to the well-founded semantics. 
The SLG-WAM, which forms the engine of the XSB system, can compute in-memory recursive queries an order of magnitute faster than current deductive databases. At the same time, the SLG-WAM tightly intergrates Prolog code with tabled SLG code, and executes Prolog code with minimal overhead compared to the WAM. As a result, the SLG-WAM brings to logic programming important termination and complexity properties of deductive databases. This article describes the architecture of the SLG-WAM for a powerful class of programs, the class of fixed-order dynamically stratified programs . We offer a detailed description of the algorithms, data structures, and instructions that the SLG-WAM adds to the WAM, and a performance analysis of engine overhead due to the extensions.", "SLG resolution, a type of tabled resolution and a technique of logic programming (LP), has polynomial data complexity for ground Datalog queries with negation, making it suitable for deductive database (DDB). It evaluates non-stratified negation according to the three-valued Well-Founded Semantics, making it a suitable starting point for non-monotonic reasoning (NMR). Furthermore, SLG has an efficient partial implementation in the SLG-WAM which, in the XSB logic programming system, has proven an order of magnitude faster than current DDR systems for in-memory queries. Building on SLG resolution, we formulate a method for distributed tabled resolution termed Multi-Processor SLG (SLGMP). Since SLG is modeled as a forest of trees, it then becomes natural to think of these trees as executing at various places over a distributed network in SLGMP. Incremental completion, which is necessary for efficient sequential evaluation, can be modeled through the use of a subgoal dependency graph (SDG), or its approximation. However the subgoal dependency graph is a global property of a forest; in a distributed environment each processor should maintain as small a view of the SDG as possible. The formulation of what and when dependency information must be maintained and propagated in order for distributed completion to be performed safely is the central contribution of SLGMP. Specifically, subgoals in SLGMP are properly numbered such that most of the dependencies among subgoals are represented by the subgoal numbers. Dependency information that is not represented by subgoal numbers is maintained explicitly at each processor and propagated by each processor. SLGMP resolution aims at efficiently evaluating normal logic programs in a distributed environment. SLGMP operations are explicitly defined and soundness and completeness is proven for SLGMP with respect to SLG for programs which terminate for SLG evaluation. The resulting framework can serve as a basis for query processing and non-monotonic reasoning within a distributed environment. We also implemented Distributed XSB, a prototype implementation of SLGMP. Distributed XSB, as a distributed tabled evaluation model, is really a distributed problem solving system, where the data to solve the problem is distributed and each participating process cooperates with other participants (perhaps including itself), by sending and receiving data. Distributed XSB proposes a distributed data computing model, where there may be cyclic dependencies among participating processes and the dependencies can be both negative and positive.", "Tabling is an implementation technique that improves the declarativeness and expressiveness of Prolog by reusing answers to subgoals. 
During tabled execution, several decisions have to be made. These are determined by the scheduling strategy. Whereas a strategy can achieve very good performance for certain applications, for others it might add overheads and even lead to unacceptable inefficiency. The ability of using multiple strategies within the same evaluation can be a means of achieving the best possible performance. In this work, we present how the YapTab system was designed to support dynamic mixed-strategy evaluation of the two most successful tabling scheduling strategies: batched scheduling and local scheduling.", "As evaluation methods for logic programs have become more sophisticated, the classes of programs for which termination can be guaranteed have expanded. From the perspective of ar set programs that include function symbols, recent work has identified classes for which grounding routines can terminate either on the entire program [ 2008] or on suitable queries [ 2009]. From the perspective of tabling, it has long been known that a tabling technique called subgoal abstraction provides good termination properties for definite programs [Tamaki and Sato 1986], and this result was recently extended to stratified programs via the class of bounded term-size programs [Riguzzi and Swift 2013]. In this article, we provide a formal definition of tabling with subgoal abstraction resulting in the SLGSA algorithm. Moreover, we discuss a declarative characterization of the queries and programs for which SLGSA terminates. We call this class strongly bounded term-size programs and show its equivalence to programs with finite well-founded models. For normal programs, strongly bounded term-size programs strictly includes the finitely ground programs of [2008]. SLGSA has an asymptotic complexity on strongly bounded term-size programs equal to the best known and produces a residual program that can be sent to an answer set programming system. Finally, we describe the implementation of subgoal abstraction within the SLG-WAM of XSB and provide performance results.", "Delaying-based tabling mechanisms, such as the one adopted in XSB, are non-linear in the sense that the computation state of delayed calls has to be preserved. In this paper, we present the implementation of a linear tabling mechanism. The key idea is to let a call execute from the backtracking point of a former variant call if such a call exists. The linear tabling mechanism has the following advantages over non-linear ones: (1) it is relatively easy to implement; (2) it imposes no overhead on standard Prolog programs; and (3) the cut operator works as for standard Prolog programs and thus it is possible to use the cut operator to express negation-as-failure and conditionals in tabled programs. The weakness of the linear mechanism is the necessity of re-computation for computing fix-points. However, we have found that re-computation can be avoided for a large portion of calls of directly-recursive tabled predicates. We have implemented the linear tabling mechanism in B-Prolog. Experimental comparison shows that B-Prolog is close in speed to XSB and outperforms XSB when re-computation can be avoided. Concerning space efficiency, B-Prolog is an order of magnitude better than XSB for some programs.", "This article describes the resource-and system-building efforts of an 8-week Johns Hopkins University Human Language Technology Center of Excellence Summer Camp for Applied Language Exploration (SCALE-2009) on Semantically Informed Machine Translation (SIMT). 
We describe a new modality negation (MN) annotation scheme, the creation of a (publicly available) MN lexicon, and two automated MN taggers that we built using the annotation scheme and lexicon. Our annotation scheme isolates three components of modality and negation: a trigger (a word that conveys modality or negation), a target (an action associated with modality or negation), and a holder (an experiencer of modality). We describe how our MN lexicon was semi-automatically produced and we demonstrate that a structure-based MN tagger results in precision around 86 (depending on genre) for tagging of a standard LDC data set. We apply our MN annotation scheme to statistical machine translation using a syntactic framework that supports the inclusion of semantic annotations. Syntactic tags enriched with semantic annotations are assigned to parse trees in the target-language training texts through a process of tree grafting. Although the focus of our work is modality and negation, the tree grafting procedure is general and supports other types of semantic information. We exploit this capability by including named entities, produced by a pre-existing tagger, in addition to the MN elements produced by the taggers described here. The resulting system significantly outperformed a linguistically naive baseline model (Hiero), and reached the highest scores yet reported on the NIST 2009 Urdu-English test set. This finding supports the hypothesis that both syntactic and semantic information can improve translation quality." ] }
0705.3468
2950373299
Recently, the iterative approach named linear tabling has received considerable attention because of its simplicity, ease of implementation, and good space efficiency. Linear tabling is a framework from which different methods can be derived based on the strategies used in handling looping subgoals. One decision concerns when answers are consumed and returned. This paper describes two strategies, namely, lazy and eager strategies, and compares them both qualitatively and quantitatively. The results indicate that, while the lazy strategy has good locality and is well suited for finding all solutions, the eager strategy is comparable in speed with the lazy strategy and is well suited for programs with cuts. Linear tabling relies on depth-first iterative deepening rather than suspension to compute fixpoints. Each cluster of inter-dependent subgoals as represented by a top-most looping subgoal is iteratively evaluated until no subgoal in it can produce any new answers. Naive re-evaluation of all looping subgoals, albeit simple, may be computationally unacceptable. In this paper, we also introduce semi-naive optimization, an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers, into linear tabling. We give the conditions for the technique to be safe (i.e. sound and complete) and propose an optimization technique called early answer promotion to enhance its effectiveness. Benchmarking in B-Prolog demonstrates that with this optimization linear tabling compares favorably well in speed with the state-of-the-art implementation of SLG.
In SLG-WAM, a consumer fails after it exhausts all the existing answers and its state is preserved by freezing the stack so that it can be reactivated after new answers are generated. The CAT approach does not freeze the stack but instead copies the stack segments between the consumer and its producer into a separate area so that backtracking can be done normally. The saved state is reinstalled after a new answer is generated. CHAT @cite_26 is a hybrid approach that combines SLG-WAM and CAT.
{ "cite_N": [ "@cite_26" ], "mid": [ "2118889869" ], "abstract": [ "The Copying Approach to Tabling, abbrv. CAT, is an alternative to SLG-WAM and based on total copying of the areas that the SLG-WAM freezes to preserve execution states of suspended computations. The disadvantage of CAT as pointed out in a previous paper is that in the worst case, CAT must copy so much that it becomes arbitrarily worse than the SLG-WAM. Remedies to this problem have been studied, but a completely satisfactory solution has not emerged. Here, a hybrid approach is presented: CHAT. Its design was guided by the requirement that for non-tabled (i.e. Prolog) execution no changes to the underlying WAM engine need to be made. CHAT combines certain features of the SLG-WAM with features of CAT, but also introduces a technique for freezing WAM stacks without the use of the SLG-WAM's freeze registers that is of independent interest. Empirical results indicate that CHAT is a better choice for implementing the control of tabling than SLG-WAM or CAT. However, programs with arbitrarily worse behaviour exist." ] }
0705.3468
2950373299
Recently, the iterative approach named linear tabling has received considerable attention because of its simplicity, ease of implementation, and good space efficiency. Linear tabling is a framework from which different methods can be derived based on the strategies used in handling looping subgoals. One decision concerns when answers are consumed and returned. This paper describes two strategies, namely, lazy and eager strategies, and compares them both qualitatively and quantitatively. The results indicate that, while the lazy strategy has good locality and is well suited for finding all solutions, the eager strategy is comparable in speed with the lazy strategy and is well suited for programs with cuts. Linear tabling relies on depth-first iterative deepening rather than suspension to compute fixpoints. Each cluster of inter-dependent subgoals as represented by a top-most looping subgoal is iteratively evaluated until no subgoal in it can produce any new answers. Naive re-evaluation of all looping subgoals, albeit simple, may be computationally unacceptable. In this paper, we also introduce semi-naive optimization, an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers, into linear tabling. We give the conditions for the technique to be safe (i.e. sound and complete) and propose an optimization technique called early answer promotion to enhance its effectiveness. Benchmarking in B-Prolog demonstrates that with this optimization linear tabling compares favorably well in speed with the state-of-the-art implementation of SLG.
The DRA method @cite_15 is also iteration-based, but it identifies looping clauses dynamically and iterates the execution of those clauses to compute fixpoints. While in linear tabling iteration is performed only on top-most looping subgoals, in DRA iteration is performed on every looping subgoal. In ET* @cite_2 , every tabled subgoal is iterated even if it does not occur in a loop. Besides the difference in answer consumption strategies and optimizations, the linear tabling scheme described in this paper differs from the original version @cite_33 @cite_18 in that followers fail after they exhaust their answers rather than steal their pioneers' choice points. This strategy was originally adopted in the DRA method.
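As a rough illustration of the iterative fixpoint idea shared by these schemes (and not of any particular engine), the following Python sketch naively re-evaluates a toy tabled reachability predicate until its answer table stops growing; the path/edge program and all names are illustrative.

```python
# Toy edge/2 facts; path/2 plays the role of the tabled (looping) predicate.
edges = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "b")}   # contains the loop b -> c -> d -> b

def evaluate_path():
    table = set()                      # answer table for path/2
    while True:
        new = set(edges)               # clause 1: path(X, Y) :- edge(X, Y).
        for (x, z) in table:           # clause 2: path(X, Y) :- path(X, Z), edge(Z, Y).
            for (z2, y) in edges:
                if z == z2:
                    new.add((x, y))
        if new <= table:               # fixpoint: this round produced no new answers
            return table
        table |= new                   # otherwise iterate, i.e. re-evaluate the looping subgoal

print(sorted(evaluate_path()))
```

Every round re-derives all previously found answers; this is exactly the naive re-evaluation that the semi-naive optimization discussed below is meant to avoid.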
{ "cite_N": [ "@cite_15", "@cite_18", "@cite_33", "@cite_2" ], "mid": [ "2042416159", "1518621415", "2096979400", "2031397756" ], "abstract": [ "Early resolution mechanisms proposed for tabling such as OLDT rely on suspension and resumption of subgoals to compute fixpoints. Recently, a new resolution framework called linear tabling has emerged as an alternative tabling method. The idea of linear tabling is to use iterative computation rather than suspension to compute fixpoints. Although linear tabling is simple, easy to implement, and superior in space efficiency, the current implementations are several times slower than XSB, the state-of-the-art implementation of OLDT, due to re-evaluation of looping subgoals. In this paper, we present a new linear tabling method and propose several optimization techniques for fast computation of fixpoints. The optimization techniques significantly improve the performance by avoiding redundant evaluation of subgoals, re-application of clauses, and reproduction of answers in iterative computation. Our implementation of the method in B-Prolog not only consumes an order of magnitude less stack space than XSB for some programs but also compares favorably well with XSB in speed.", "Tabled evaluations ensure termination of logic programs with finite models by keeping track of which subgoals have been called. Given several variant subgoals in an evaluation, only the first one encountered will use program clause resolution; the rest uses answer resolution. This use of answer resolution prevents infinite looping which happens in SLD. Given the asynchronicity of answer generation and answer return, tabling systems face an important scheduling choice not present in traditional top-down evaluation: How does the order of returning answers to consuming subgoals affect program efficiency.", "Delaying-based tabling mechanisms, such as the one adopted in XSB, are non-linear in the sense that the computation state of delayed calls has to be preserved. In this paper, we present the implementation of a linear tabling mechanism. The key idea is to let a call execute from the backtracking point of a former variant call if such a call exists. The linear tabling mechanism has the following advantages over non-linear ones: (1) it is relatively easy to implement; (2) it imposes no overhead on standard Prolog programs; and (3) the cut operator works as for standard Prolog programs and thus it is possible to use the cut operator to express negation-as-failure and conditionals in tabled programs. The weakness of the linear mechanism is the necessity of re-computation for computing fix-points. However, we have found that re-computation can be avoided for a large portion of calls of directly-recursive tabled predicates. We have implemented the linear tabling mechanism in B-Prolog. Experimental comparison shows that B-Prolog is close in speed to XSB and outperforms XSB when re-computation can be avoided. Concerning space efficiency, B-Prolog is an order of magnitude better than XSB for some programs.", "This paper examines techniques for finding falsifying trajectories of hybrid systems using an approach that we call trajectory splicing. Many formal verification techniques for hybrid systems, including flowpipe construction, can identify plausible abstract counterexamples for property violations. However, there is often a gap between the reported abstract counterexamples and the concrete system trajectories. 
Our approach starts with a candidate sequence of disconnected trajectory segments, each segment lying inside a discrete mode. However, such disconnected segments do not form concrete violations due to the gaps that exist between the ending state of one segment and the starting state of the subsequent segment. Therefore, trajectory splicing uses local optimization to minimize the gap between these segments, effectively splicing them together to form a concrete trajectory. We demonstrate the use of our approach for falsifying safety properties of hybrid systems using standard optimization techniques. As such, our approach is not restricted to linear systems. We compare our approach with other falsification approaches including uniform random sampling and a robustness guided falsification approach used in the tool S-Taliro. Our preliminary evaluation clearly shows the potential of our approach to search for candidate trajectory segments and use them to find concrete property violations." ] }
0705.3468
2950373299
Recently, the iterative approach named linear tabling has received considerable attention because of its simplicity, ease of implementation, and good space efficiency. Linear tabling is a framework from which different methods can be derived based on the strategies used in handling looping subgoals. One decision concerns when answers are consumed and returned. This paper describes two strategies, namely, lazy and eager strategies, and compares them both qualitatively and quantitatively. The results indicate that, while the lazy strategy has good locality and is well suited for finding all solutions, the eager strategy is comparable in speed with the lazy strategy and is well suited for programs with cuts. Linear tabling relies on depth-first iterative deepening rather than suspension to compute fixpoints. Each cluster of inter-dependent subgoals as represented by a top-most looping subgoal is iteratively evaluated until no subgoal in it can produce any new answers. Naive re-evaluation of all looping subgoals, albeit simple, may be computationally unacceptable. In this paper, we also introduce semi-naive optimization, an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers, into linear tabling. We give the conditions for the technique to be safe (i.e. sound and complete) and propose an optimization technique called early answer promotion to enhance its effectiveness. Benchmarking in B-Prolog demonstrates that with this optimization linear tabling compares favorably well in speed with the state-of-the-art implementation of SLG.
The two consumption strategies have been compared in XSB @cite_5 as two scheduling strategies. The lazy strategy is called local scheduling and the eager strategy is called single-stack scheduling . Another strategy, called batched scheduling , is similar to local scheduling except that top-most looping subgoals do not have to wait until their clusters become complete to return answers. Their experimental results indicate that local scheduling consistently outperforms the other two strategies on stack space and can perform asymptotically better than them on speed. The superior performance of local scheduling is attributed to the savings in freezing stack segments. Although our experiment confirms the good space performance of the lazy strategy, it gives the counterintuitive result that the eager strategy is as fast as the lazy strategy. This result implies that the cost of iterative evaluation is considerably smaller than that of freezing stack segments, and that for predicates with cuts the eager strategy can be used without significant slow-down. In our tabling system, different answer consumption strategies can be used for different predicates. The tabling system described in @cite_31 also supports mixed strategies.
{ "cite_N": [ "@cite_5", "@cite_31" ], "mid": [ "2130683061", "1510598052" ], "abstract": [ "Tabling is an implementation technique that improves the declarativeness and expressiveness of Prolog by reusing answers to subgoals. During tabled execution, several decisions have to be made. These are determined by the scheduling strategy. Whereas a strategy can achieve very good performance for certain applications, for others it might add overheads and even lead to unacceptable inefficiency. The ability of using multiple strategies within the same evaluation can be a means of achieving the best possible performance. In this work, we present how the YapTab system was designed to support dynamic mixed-strategy evaluation of the two most successful tabling scheduling strategies: batched scheduling and local scheduling.", "We study a processing system comprised of parallel queues, whose individual service rates are specified by a global service mode (configuration). The issue is how to switch the system between various possible service modes, so as to maximize its throughput and maintain stability under the most workload-intensive input traffic traces (arrival processes). Stability preserves the job inflow–outflow balance at each queue on the traffic traces. Two key families of service policies are shown to maximize throughput, under the mild condition that traffic traces have long-term average workload rates. In the first family of cone policies, the service mode is chosen based on the system backlog state belonging to a corresponding cone. Two distinct policy classes of that nature are investigated, MaxProduct and FastEmpty. In the second family of batch policies (BatchAdapt), jobs are collectively scheduled over adaptively chosen horizons, according to an asymptotically optimal, robust schedule. The issues of nonpreemptive job processing and non-negligible switching times between service modes are addressed. The analysis is extended to cover feed-forward networks of such processing systems nodes. The approach taken unifies and generalizes prior studies, by developing a general trace-based modeling framework (sample-path approach) for addressing the queueing stability problem. It treats the queueing structure as a deterministic dynamical system and analyzes directly its evolution trajectories. It does not require any probabilistic superstructure, which is typically used in previous approaches. Probability can be superposed later to address finer performance questions (e.g., delay). The throughput maximization problem is seen to be primarily of structural nature. The developed methodology appears to have broader applicability to other queueing systems." ] }
0705.3468
2950373299
Recently, the iterative approach named linear tabling has received considerable attention because of its simplicity, ease of implementation, and good space efficiency. Linear tabling is a framework from which different methods can be derived based on the strategies used in handling looping subgoals. One decision concerns when answers are consumed and returned. This paper describes two strategies, namely, lazy and eager strategies, and compares them both qualitatively and quantitatively. The results indicate that, while the lazy strategy has good locality and is well suited for finding all solutions, the eager strategy is comparable in speed with the lazy strategy and is well suited for programs with cuts. Linear tabling relies on depth-first iterative deepening rather than suspension to compute fixpoints. Each cluster of inter-dependent subgoals as represented by a top-most looping subgoal is iteratively evaluated until no subgoal in it can produce any new answers. Naive re-evaluation of all looping subgoals, albeit simple, may be computationally unacceptable. In this paper, we also introduce semi-naive optimization, an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers, into linear tabling. We give the conditions for the technique to be safe (i.e. sound and complete) and propose an optimization technique called early answer promotion to enhance its effectiveness. Benchmarking in B-Prolog demonstrates that with this optimization linear tabling compares favorably well in speed with the state-of-the-art implementation of SLG.
Semi-naive optimization is a fundamental idea for reducing redundancy in bottom-up evaluation of logic database queries @cite_21 @cite_32 . As far as we know, its impact on top-down evaluation had been unknown before @cite_17 . OLDT @cite_27 and SLG @cite_10 do not need this technique since they are not iterative and the underlying delaying mechanism successfully avoids the repetition of any derivation step. An attempt has been made by Guo and Gupta @cite_15 to make incremental consumption of tabled answers possible in DRA. In their scheme, answers are also divided into three regions, but they are consumed incrementally as in bottom-up evaluation. Since no condition is given for completeness and no experimental result is reported on the impact of the technique, we are unable to give a detailed comparison.
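As a generic illustration of the technique (in its bottom-up form, not the concrete linear-tabling algorithm with its three answer regions), the following sketch evaluates the same toy reachability program semi-naively: only the answers produced in the previous round, the delta, take part in the recursive join.

```python
edges = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "b")}

def semi_naive_path():
    table = set(edges)                 # round 0: answers from the non-recursive clause
    delta = set(edges)                 # answers that are new since the previous round
    while delta:
        new = set()
        for (x, z) in delta:           # join only the *new* answers with edge/2 ...
            for (z2, y) in edges:
                if z == z2 and (x, y) not in table:
                    new.add((x, y))    # ... instead of re-joining the whole table
        table |= new
        delta = new                    # the next round consumes only this round's answers
    return table

print(sorted(semi_naive_path()))
```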
{ "cite_N": [ "@cite_21", "@cite_32", "@cite_27", "@cite_15", "@cite_10", "@cite_17" ], "mid": [ "2151952683", "2070598037", "2341508215", "2571600243", "2120624822", "2068852191" ], "abstract": [ "Semi-naive evaluation is an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers. The impact of this technique on top-down evaluation had been unknown. In this paper, we introduce semi-naive evaluation into linear tabling, a top-down resolution mechanism for tabled logic programs. We give the conditions for the technique to be safe and propose an optimization technique called early answer promotion to enhance its effectiveness. While semi-naive evaluation is not as effective in linear tabling as in bottom-up evaluation, it is worthwhile to be adopted. Our benchmarking shows that this technique gives significant speed-ups to some programs.", "SLD resolution with negation as finite failure (SLDNF) reflects the procedural interpretation of predicate calculus as a programming language and forms the computational basis for Prolog systems. Despite its advantages for stack-based memory management, SLDNF is often not appropriate for query evaluation for three reasons: (a) it may not terminate due to infinite positive recursion; (b) it may be terminate due to infinite recursion through negation; and (c) it may repeatedly evaluate the same literal in a rule body, leading to unacceptable performance. We address all three problems for goal-oriented query evaluation of general logic programs by presenting tabled evaluation with delaying, called SLG resolution. It has three distinctive features: (i) SLG resolution is a partial deduction procedure, consisting of seven fundamental transformations. A query is transformed step by step into a set of answers. The use of transformations separates logical issues of query evaluation from procedural ones. SLG allows an arbitrary computation rule for selecting a literal from a rule body and an arbitrary control strategy for selecting transformations to apply. (ii) SLG resolution is sound and search space complete with respect to the well-founded partial model for all non-floundering queries, and preserves all three-valued stable models. To evaluate a query under differenc three-valued stable models, SLG resolution can be enhanced by further processing of the answers of subgoals relevant to a query. (iii) SLG resolution avoids both positive and negative loops and always terminates for programs with the bounded-term-size property. It has a polynomial time data complexity for well-founded negation of function-free programs. Through a delaying mechanism for handling ground negative literals involved in loops, SLG resolution avoids the repetition of any of its derivation steps. Restricted forms of SLG resolution are identified for definite, locally stratified, and modularly stratified programs, shedding light on the role each transformation plays.", "In this paper, we propose a new decomposition approach named the proximal primal dual algorithm (Prox-PDA) for smooth nonconvex linearly constrained optimization problems. The proposed approach is primal-dual based, where the primal step minimizes certain approximation of the augmented Lagrangian of the problem, and the dual step performs an approximate dual ascent. The approximation used in the primal step is able to decompose the variable blocks, making it possible to obtain simple subproblems by leveraging the problem structures. 
Theoretically, we show that whenever the penalty parameter in the augmented Lagrangian is larger than a given threshold, the Prox-PDA converges to the set of stationary solutions, globally and in a sublinear manner (i.e., certain measure of stationarity decreases in the rate of @math , where @math is the iteration counter). Interestingly, when applying a variant of the Prox-PDA to the problem of distributed nonconvex optimization (over a connected undirected graph), the resulting algorithm coincides with the popular EXTRA algorithm [ 2014], which is only known to work in convex cases. Our analysis implies that EXTRA and its variants converge globally sublinearly to stationary solutions of certain nonconvex distributed optimization problem. There are many possible extensions of the Prox-PDA, and we present one particular extension to certain nonconvex distributed matrix factorization problem.", "Consider the convex optimization problem min x ƒ (g(x)) where both ƒ and g are unknown but can be estimated through sampling. We consider the stochastic compositional gradient descent method (SCGD) that updates based on random function and subgradient evaluations, which are generated by a conditional sampling oracle. We focus on the case where samples are corrupted with Markov noise. Under certain diminishing stepsize assumptions, we prove that the iterate of SCGD converges almost surely to an optimal solution if such a solution exists. Under specific constant stepsize assumptions, we obtain finite-sample error bounds for the averaged iterates of the algorithm. We illustrate an application to online value evaluation in dynamic programming.", "We describe an optimization method for combinational and sequential logic networks, with emphasis on scalability and the scope of optimization. The proposed resynthesis (a) is capable of substantial logic restructuring, (b) is customizable to solve a variety of optimization tasks, and (c) has reasonable runtime on industrial designs. The approach uses don't cares computed for a window surrounding a node and can take into account external don't cares (e.g. unreachable states). It uses a SAT solver and interpolation to find a new representation for a node. This representation can be in terms of inputs from other nodes in the window thus effecting Boolean re-substitution. Experimental results on 6-input LUT networks after high effort synthesis show substantial reductions in area and delay. When applied to 20 large academic benchmarks, the LUT count and logic level is reduced by 45.0 and 12.2 , respectively. The longest runtime for synthesis and mapping is about two minutes. When applied to a set of 14 industrial benchmarks ranging up to 83K 6-LUTs, the LUT count and logic level is reduced by 11.8 and 16.5 , respectively. Experimental results on 6-input LUT networks after high-effort synthesis show substantial reductions in area and delay. The longest runtime is about 30 minutes.", "We consider the makespan-minimization problem on unrelated machines in the context of algorithmic mechanism design. No truthful mechanisms with non-trivial approximation guarantees are known for this multidimensional domain. We study a well-motivated special case (also a multidimensional domain), where the processing time of a job on each machine is either \"low\" or \"high.\" We give a general technique to convert any c-approximation algorithm (in a black-box fashion) to a 3c-approximation truthful-in-expectation mechanism. 
Our construction uses fractional truthful mechanisms as a building block, and builds upon a technique of Lavi and Swamy [Lavi, R., Swamy, C., 2005. Truthful and near-optimal mechanism design via linear programming. In: Proc. 46th FOCS, pp. 595-604]. When all jobs have identical low and high values, we devise a deterministic 2-approximation truthful mechanism. The chief novelty of our results is that we do not utilize explicit price definitions to prove truthfulness. Instead we design algorithms that satisfy cycle monotonicity [Rochet, J., 1987. A necessary and sufficient condition for rationalizability in a quasilinear context. J. Math. Econ. 16, 191-200], a necessary and sufficient condition for truthfulness in multidimensional settings; this is the first work that leverages this characterization." ] }
0705.3468
2950373299
Recently, the iterative approach named linear tabling has received considerable attention because of its simplicity, ease of implementation, and good space efficiency. Linear tabling is a framework from which different methods can be derived based on the strategies used in handling looping subgoals. One decision concerns when answers are consumed and returned. This paper describes two strategies, namely, lazy and eager strategies, and compares them both qualitatively and quantitatively. The results indicate that, while the lazy strategy has good locality and is well suited for finding all solutions, the eager strategy is comparable in speed with the lazy strategy and is well suited for programs with cuts. Linear tabling relies on depth-first iterative deepening rather than suspension to compute fixpoints. Each cluster of inter-dependent subgoals as represented by a top-most looping subgoal is iteratively evaluated until no subgoal in it can produce any new answers. Naive re-evaluation of all looping subgoals, albeit simple, may be computationally unacceptable. In this paper, we also introduce semi-naive optimization, an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers, into linear tabling. We give the conditions for the technique to be safe (i.e. sound and complete) and propose an optimization technique called early answer promotion to enhance its effectiveness. Benchmarking in B-Prolog demonstrates that with this optimization linear tabling compares favorably well in speed with the state-of-the-art implementation of SLG.
Semi-naive optimization does not solve all the problems of recomputation in linear tabling. Recall Warren's example: assume there is a very costly non-tabled subgoal preceding p(X,Z) ; then that subgoal has to be re-executed in each iteration even with semi-naive optimization. This example demonstrates the acuteness of the recomputation problem because the number of iterations needed to reach the fixpoint is not constant. One treatment would be to table the subgoal to avoid recomputation, as suggested in @cite_15 , but tabling extra predicates can cause other problems such as over-consumption of table space.
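To make the recomputation problem concrete, the sketch below places a deliberately expensive non-tabled call in front of the clause bodies of the toy iterative evaluator used above and counts how often it runs. Warren's actual program is not reproduced in this section, so the clause shape, the predicate costly/1, and the cost model are illustrative stand-ins.

```python
from functools import lru_cache

calls = 0

def costly(x):
    """Stand-in for a very costly non-tabled subgoal preceding p(X, Z)."""
    global calls
    calls += 1
    return True                        # pretend it always succeeds

edges = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "b")}

def evaluate(costly_goal):
    table = set()
    while True:
        # The costly subgoal is re-executed ahead of the clause bodies in every iteration.
        new = {(x, y) for (x, y) in edges if costly_goal(x)}
        for (x, z) in table:
            for (z2, y) in edges:
                if z == z2:
                    new.add((x, y))
        if new <= table:
            return table
        table |= new

evaluate(costly)
print("calls to costly/1 without tabling it:", calls)   # 16 on this toy data

calls = 0
evaluate(lru_cache(maxsize=None)(costly))               # memoizing = "tabling" the extra predicate
print("calls to costly/1 after tabling it: ", calls)    # 4 on this toy data
```

Memoizing costly/1 plays the role of tabling the extra predicate: the call count drops from once per edge fact per iteration to once per distinct argument, at the price of additional table space.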
{ "cite_N": [ "@cite_15" ], "mid": [ "2042416159" ], "abstract": [ "Early resolution mechanisms proposed for tabling such as OLDT rely on suspension and resumption of subgoals to compute fixpoints. Recently, a new resolution framework called linear tabling has emerged as an alternative tabling method. The idea of linear tabling is to use iterative computation rather than suspension to compute fixpoints. Although linear tabling is simple, easy to implement, and superior in space efficiency, the current implementations are several times slower than XSB, the state-of-the-art implementation of OLDT, due to re-evaluation of looping subgoals. In this paper, we present a new linear tabling method and propose several optimization techniques for fast computation of fixpoints. The optimization techniques significantly improve the performance by avoiding redundant evaluation of subgoals, re-application of clauses, and reproduction of answers in iterative computation. Our implementation of the method in B-Prolog not only consumes an order of magnitude less stack space than XSB for some programs but also compares favorably well with XSB in speed." ] }
0705.3243
2102136055
Traceroute sampling is an important technique in exploring the internet router graph and the autonomous system graph. Although it is one of the primary techniques used in calculating statistics about the internet, it can introduce bias that corrupts these estimates. This paper reports on a theoretical and experimental investigation of a new technique to reduce the bias of traceroute sampling when estimating the degree distribution. We develop a new estimator for the degree of a node in a traceroute-sampled graph; validate the estimator theoretically in Erdos-Renyi graphs and, through computer experiments, for a wider range of graphs; and apply it to produce a new picture of the degree distribution of the autonomous system graph.
Internet mapping by traceroute sampling was pioneered by Pansiot and Grad in @cite_10 , and the scale-free nature of the degree distribution was observed by Faloutsos, Faloutsos, and Faloutsos in @cite_21 . Since 1998, the Cooperative Association for Internet Data Analysis (CAIDA) project has archived traceroute data that is collected daily @cite_17 . The bias introduced by traceroute sampling was identified in computer experiments by Lakhina, Byers, Crovella, and Xie in @cite_4 and by Petermann and De Los Rios @cite_16 , and formally proven to hold in a model of one-monitor, all-target traceroute sampling by Clauset and Moore @cite_2 and, in further generality, by Achlioptas, Clauset, Kempe, and Moore @cite_1 . Computer experiments by Guillaume, Latapy, and Magoni @cite_14 and an analysis using the mean-field approximation of statistical physics due to Dall'Asta, Alvarez-Hamelin, Barrat, Vázquez, and Vespignani @cite_11 argue that, despite the bias introduced by traceroute sampling, some sort of scale-free behavior can be inferred from the union of traceroute-sampled paths.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_11", "@cite_21", "@cite_1", "@cite_2", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "1519020025", "1540064387", "1693225151", "2120511087", "2107648668", "1971795786", "1479935453", "2159326294", "2156531019" ], "abstract": [ "Department of Physics and Astronomy,University of New Mexico, Albuquerque NM 87131(aaron,moore)@cs.unm.edu(Dated: February 2, 2008)A great deal of effort has been spent measuring topological features of the Internet. However, itwas recently argued that sampling based on taking paths or traceroutes through the network froma small number of sources introduces a fundamental bias in the observed degree distribution. Weexamine this bias analytically and experimentally. For Erd˝os-R´enyirandom graphs with mean degreec, we show analytically that traceroute sampling gives an observed degree distribution P(k) ∼ k", "Mapping the Internet generally consists in sampling the network from a limited set of sources by using \"traceroute\"-like probes. This methodology, akin to the merging of different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. Here we explore these biases and provide a statistical analysis of their origin. We derive a mean-field analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular we find that the edge and vertex detection probability is depending on the betweenness centrality of each element. This allows us to show that shortest path routed sampling provides a better characterization of underlying graphs with scale-free topology. We complement the analytical discussion with a throughout numerical investigation of simulated mapping strategies in different network models. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks in a fair range of different strategies and exploration parameters. The numerical study also allows the identification of intervals of the exploration parameters that optimize the fraction of nodes and edges discovered in the sampled graph. This finding might hint the steps toward more efficient mapping strategies.", "Despite great effort spent measuring topological features of large networks like the Internet, it was recently argued that sampling based on taking paths through the network (e.g., traceroutes) introduces a fundamental bias in the observed degree distribution. We examine this bias analytically and experimentally. For classic random graphs with mean degree c, we show analytically that traceroute sampling gives an observed degree distribution P(k) 1 k for k < c, even though the underlying degree distribution is Poisson. For graphs whose degree distributions have power-law tails P(k) k^-alpha, the accuracy of traceroute sampling is highly sensitive to the population of low-degree vertices. In particular, when the graph has a large excess (i.e., many more edges than vertices), traceroute sampling can significantly misestimate alpha.", "Understanding the structure of the Internet graph is a crucial step for building accurate network models and designing efficient algorithms for Internet applications. 
Yet, obtaining its graph structure is a surprisingly difficult task, as edges cannot be explicitly queried. Instead, empirical studies rely on traceroutes to build what are essentially single-source, all-destinations, shortest-path trees. These trees only sample a fraction of the network's edges, and a recent paper by found empirically that the resuting sample is intrinsically biased. For instance, the observed degree distribution under traceroute sampling exhibits a power law even when the underlying degree distribution is Poisson.In this paper, we study the bias of traceroute sampling systematically, and, for a very general class of underlying degree distributions, calculate the likely observed distributions explicitly. To do this, we use a continuous-time realization of the process of exposing the BFS tree of a random graph with a given degree distribution, calculate the expected degree distribution of the tree, and show that it is sharply concentrated. As example applications of our machinery, we show how traceroute sampling finds power-law degree distributions in both δ-regular and Poisson-distributed random graphs. Thus, our work puts the observations of on a rigorous footing, and extends them to nearly arbitrary degree distributions.", "Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute-like methods have led some to conclude that the router graph of the Internet is well modeled as a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. We argue that the evidence to date for this conclusion is at best insufficient We show that when graphs are sampled using traceroute-like methods, the resulting degree distribution can differ sharply from that of the underlying graph. For example, given a sparse Erdos-Renyi random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can exhibit a degree distribution remarkably like a power-law. We explore the reasons for how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to formulate tests for determining when sampling bias is present. When we apply these tests to a number of well-known datasets, we find strong evidence for sampling bias.", "Internet maps are generally constructed using the traceroute tool from a few sources to many destinations. It appeared recently that this exploration process gives a partial and biased view of the real topology, which leads to the idea of increasing the number of sources to improve the quality of the maps. In this paper, we present a set of experiments we have conducted to evaluate the relevance of this approach. It appears that the statistical properties of the underlying network have a strong influence on the quality of the obtained maps, which can be improved using massively distributed explorations. Conversely, some statistical properties are very robust, and so the known values for the Internet may be considered as reliable. We validate our analysis using real-world data and experiments, and we discuss its implications.", "Internet maps are generally constructed using the traceroute tool from a few sources to many destinations. 
It appeared recently that this exploration process gives a partial and biased view of the real topology, which leads to the idea of increasing the number of sources to improve the quality of the maps. In this paper, we present a set of experiments we have conduced to evaluate the relevance of this approach. It appears that the statistical properties of the underlying network have a strong influence on the quality of the obtained maps, which can be improved using massively distributed explorations. Conversely, we show that the exploration process induces some properties on the maps. We validate our analysis using real-world data and experiments and we discuss its implications.", "Despite many efforts over the past decade, the ability to generate topological maps of the Internet at the router-level accurately and in a timely fashion remains elusive. Mapping campaigns commonly involve traceroute-like probing that are usually non-adaptive and incomplete, thus revealing only a portion of the underlying topology. In this paper we demonstrate that standard probing methods yield datasets that implicitly contain information about much more than just the directly observed links and routers. Each probe, in addition to the underlying domain knowledge, returns information that places constraints on the underlying topology, and by integrating a large number of such constraints it is possible to accurately infer the existence of unseen components of the Internet. We describe DomainImpute, a novel data analysis methodology designed to accurately infer the unseen hop-count distances between observed routers. We use both synthetic and a large empirical dataset to validate the proposed methods. On our empirical real world dataset, we show that our methods can estimate over 55 of the unseen distances between observed routers to within a one-hop error.", "This paper presents a method for automatically converting raw GPS traces from everyday vehicles into a routable road network. The method begins by smoothing raw GPS traces using a novel aggregation technique. This technique pulls together traces that belong on the same road in response to simulated potential energy wells created around each trace. After the traces are moved in response to the potential fields, they tend to coalesce into smooth paths. To help adjust the parameters of the constituent potential fields, we present a theoretical analysis of the behavior of our algorithm on a few different road configurations. With the resulting smooth traces, we apply a custom clustering algorithm to create a graph of nodes and edges representing the road network. We show how this network can be used to plan reasonable driving routes, much like consumer-oriented mapping Web sites. We demonstrate our algorithms using real GPS data collected on public roads, and we evaluate the effectiveness of our approach on public roads, and we evaluate the effectiveness of our approach by comparing the route planning results suggested by our generated graph to a commercial route planner." ] }
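As a small companion to the row above, the sketch below simulates the one-monitor, all-targets flavor of traceroute sampling on an Erdos-Renyi random graph: routes are approximated by a BFS shortest-path tree from a single source, and the degree a node appears to have in the union of sampled paths is compared with its true degree. The graph size, edge probability, and the BFS-as-traceroute simplification are assumptions made for illustration, not parameters taken from the cited studies.

```python
# Sketch: bias of traceroute-like sampling on an Erdos-Renyi graph G(n, p).
# Routes are approximated by a BFS shortest-path tree from one monitor,
# so each visited node's "observed" degree is its degree in that tree.
import random
from collections import deque

def erdos_renyi(n, p, rng):
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def bfs_tree_degrees(adj, source):
    observed = {source: 0}
    parent = {source: None}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:          # tree edge u -> v
                parent[v] = u
                observed[v] = observed.get(v, 0) + 1
                observed[u] += 1
                queue.append(v)
    return observed

if __name__ == "__main__":
    rng = random.Random(0)
    n, p = 2000, 5.0 / 2000              # mean degree about 5
    adj = erdos_renyi(n, p, rng)
    obs = bfs_tree_degrees(adj, source=0)
    true_avg = sum(len(adj[v]) for v in obs) / len(obs)
    obs_avg = sum(obs.values()) / len(obs)
    print(f"visited {len(obs)} nodes; true mean degree {true_avg:.2f}, "
          f"observed mean degree {obs_avg:.2f}")
```

Running this shows the observed mean degree collapsing toward 2 (a tree has one fewer edge than nodes) even though the underlying mean degree is about 5, which is the kind of sampling bias the cited works analyze.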
0705.3243
2102136055
Traceroute sampling is an important technique in exploring the internet router graph and the autonomous system graph. Although it is one of the primary techniques used in calculating statistics about the internet, it can introduce bias that corrupts these estimates. This paper reports on a theoretical and experimental investigation of a new technique to reduce the bias of traceroute sampling when estimating the degree distribution. We develop a new estimator for the degree of a node in a traceroute-sampled graph; validate the estimator theoretically in Erdos-Renyi graphs and, through computer experiments, for a wider range of graphs; and apply it to produce a new picture of the degree distribution of the autonomous system graph.
In addition to traceroute sampling, maps of the AS graph have been generated in two different ways, using BGP tables and using the WHOIS database. A recent paper by Mahadevan, Krioukov, Fomenkov, Dimitropoulos, claffy, and Vahdat provides a detailed comparison of the graphs that result from each of these measurement techniques @cite_6 .
{ "cite_N": [ "@cite_6" ], "mid": [ "1693225151" ], "abstract": [ "Despite great effort spent measuring topological features of large networks like the Internet, it was recently argued that sampling based on taking paths through the network (e.g., traceroutes) introduces a fundamental bias in the observed degree distribution. We examine this bias analytically and experimentally. For classic random graphs with mean degree c, we show analytically that traceroute sampling gives an observed degree distribution P(k) 1 k for k < c, even though the underlying degree distribution is Poisson. For graphs whose degree distributions have power-law tails P(k) k^-alpha, the accuracy of traceroute sampling is highly sensitive to the population of low-degree vertices. In particular, when the graph has a large excess (i.e., many more edges than vertices), traceroute sampling can significantly misestimate alpha." ] }
0706.3265
1929816335
This paper studies the gap between quantum one-way communication complexity Q(f) and its classical counterpart C(f), under the unbounded-error setting, i.e., it is enough that the success probability is strictly greater than 1/2. It is proved that for any (total or partial) Boolean function f, Q(f) = ⌈C(f)/2⌉, i.e., the former is always exactly one half as large as the latter. The result has an application to obtaining an exact bound for the existence of (m, n, p)-QRAC which is the n-qubit random access coding that can recover any one of m original bits with success probability ≥ p. We can prove that (m, n, > 1/2)-QRAC exists if and only if m ≤ 2^(2n) - 1. Previously, only the non-existence of (2^(2n), n, > 1/2)-QRAC was known.
Partial/Total Boolean Functions. For total functions, the one-way quantum communication complexity is nicely characterized or bounded below in several ways. Klauck @cite_22 characterized the one-way communication complexity of total Boolean functions in the exact setting (i.e., the success probability is one) by the number of different rows of the communication matrix, and showed that it equals the one-way deterministic communication complexity. He also gave a lower bound on the bounded-error one-way quantum communication complexity of total Boolean functions in terms of the VC dimension. Aaronson @cite_20 @cite_10 presented lower bounds on the one-way quantum communication complexity that are also applicable to partial Boolean functions. His lower bounds are given in terms of the deterministic or bounded-error classical communication complexity and the length of Bob's input, and they are shown to be tight by using the partial Boolean function of @cite_11 .
{ "cite_N": [ "@cite_10", "@cite_22", "@cite_20", "@cite_11" ], "mid": [ "2949863531", "1515707347", "1914859158", "2027214648" ], "abstract": [ "We prove new lower bounds for bounded error quantum communication complexity. Our methods are based on the Fourier transform of the considered functions. First we generalize a method for proving classical communication complexity lower bounds developed by Raz to the quantum case. Applying this method we give an exponential separation between bounded error quantum communication complexity and nondeterministic quantum communication complexity. We develop several other lower bound methods based on the Fourier transform, notably showing that s (f) n , for the average sensitivity s (f) of a function f, yields a lower bound on the bounded error quantum communication complexity of f(x AND y XOR z), where x is a Boolean word held by Alice and y,z are Boolean words held by Bob. We then prove the first large lower bounds on the bounded error quantum communication complexity of functions, for which a polynomial quantum speedup is possible. For all the functions we investigate, the only previously applied general lower bound method based on discrepancy yields bounds that are O( n).", "Recently, Forster [7] proved a new lower bound on probabilistic communication complexity in terms of the operator norm of the communication matrix. In this paper, we want to exploit the various relations between communication complexity of distributed Boolean functions, geometric questions related to half space representations of these functions, and the computational complexity of these functions in various restricted models of computation. In order to widen the range of applicability of Forster's bound, we start with the derivation of a generalized lower bound. We present a concrete family of distributed Boolean functions where the generalized bound leads to a linear lower bound on the probabilistic communication complexity (and thus to an exponential lower bound on the number of Euclidean dimensions needed for a successful half space representation), whereas the old bound fails. We move on to a geometric characterization of the well known communication complexity class C-PP in terms of half space representations achieving a large margin. Our characterization hints to a close connection between the bounded error model of probabilistic communication complexity and the area of large margin classification. In the final section of the paper, we describe how our techniques can be used to prove exponential lower bounds on the size of depth-2 threshold circuits (with still some technical restrictions). Similar results can be obtained for read-k-times randomized ordered binary decision diagram and related models.", "We study a model of communication complexity that encompasses many well-studied problems, including classical and quantum communication complexity, the complexity of simulating distributions arising from bipartite measurements of shared quantum states, and XOR games. In this model, Alice gets an input x, Bob gets an input y, and their goal is to each produce an output a, b distributed according to some pre-specified joint distribution p(a, b|x, y). Our results apply to any non-signaling distribution, that is, those where Alice's marginal distribution does not depend on Bob's input, and vice versa. 
By taking a geometric view of the non-signaling distributions, we introduce a simple new technique based on affine combinations of lower-complexity distributions, and we give the first general technique to apply to all these settings, with elementary proofs and very intuitive interpretations. Specifically, we introduce two complexity measures, one which gives lower bounds on classical communication, and one for quantum communication. These measures can be expressed as convex optimization problems. We show that the dual formulations have a striking interpretation, since they coincide with maximum violations of Bell and Tsirelson inequalities. The dual expressions are closely related to the winning probability of XOR games. Despite their apparent simplicity, these lower bounds subsume many known communication complexity lower bound methods, most notably the recent lower bounds of Linial and Shraibman for the special case of Boolean functions. We show that as in the case of Boolean functions, the gap between the quantum and classical lower bounds is at most linear in the size of the support of the distribution, and does not depend on the size of the inputs. This translates into a bound on the gap between maximal Bell and Tsirelson inequality violations, which was previously known only for the case of distributions with Boolean outcomes and uniform marginals. It also allows us to show that for some distributions, information theoretic methods are necessary to prove strong lower bounds. Finally, we give an exponential upper bound on quantum and classical communication complexity in the simultaneous messages model, for any non-signaling distribution. One consequence of this is a simple proof that any quantum distribution can be approximated with a constant number of bits of communication.", "We prove three different types of complexity lower bounds for the one-way unbounded-error and bounded-error error probabilistic communication protocols for boolean functions. The lower bounds are proved in terms of the deterministic communication complexity of functions and in terms of the notion “probabilistic communication characteristic” that we define. We present boolean functions with the different probabilistic communication characteristics which demonstrates that each of these lower bounds can be more precise than the others depending on the probabilistic communication characteristics of a function. Our lower bounds are good enough for proving that proper hierarchy for one-way probabilistic communication complexity classes depends on a measure of bounded error. As the application of lower bounds for probabilistic communication complexity, we prove two different types of complexity lower bounds for the one-way bounded-error error probabilistic space complexity. Our lower bounds are good enough for proving proper hierarchies for different one-way probabilistic space communication complexity classes inside SPACE(n) (namely for bounded error probabilistic computation, and for errors of probabilistic computation)." ] }
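Restating the quantitative claims of the abstract above in display form, and instantiating the QRAC bound at n = 1 as a worked example (the instantiation is ours, but it follows directly from the stated theorem):

```latex
% Claims from the abstract, written out explicitly.
\[
  Q(f) \;=\; \left\lceil \frac{C(f)}{2} \right\rceil
  \qquad \text{for every (total or partial) Boolean function } f,
\]
\[
  \text{an } (m, n, p)\text{-QRAC with } p > \tfrac{1}{2}
  \text{ exists} \iff m \le 2^{2n} - 1 .
\]
% Worked instance: for a single qubit (n = 1) the bound gives
% m <= 2^2 - 1 = 3, i.e. a (3, 1, >1/2)-QRAC exists while a
% (4, 1, >1/2)-QRAC does not.
```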
0706.3265
1929816335
This paper studies the gap between quantum one-way communication complexity Q(f) and its classical counterpart C(f), under the unbounded-error setting, i.e., it is enough that the success probability is strictly greater than 1/2. It is proved that for any (total or partial) Boolean function f, Q(f) = ⌈C(f)/2⌉, i.e., the former is always exactly one half as large as the latter. The result has an application to obtaining an exact bound for the existence of (m, n, p)-QRAC which is the n-qubit random access coding that can recover any one of m original bits with success probability ≥ p. We can prove that (m, n, > 1/2)-QRAC exists if and only if m ≤ 2^(2n) - 1. Previously, only the non-existence of (2^(2n), n, > 1/2)-QRAC was known.
Private-coin/Public-coin Models. The exponential quantum/classical separations in @cite_13 and @cite_11 still hold under the public-coin model, where Alice and Bob share random coins, since the one-way classical public-coin model can be simulated by the one-way classical private-coin model with additional @math -bit communication @cite_19 . However, an exponential quantum/classical separation for total functions remains open for all of the bounded-error two-way, one-way and SMP models. Note that the public-coin model is too powerful in the unbounded-error setting: we can easily see that the unbounded-error one-way (classical or quantum) communication complexity of any function (or relation) is @math with prior shared randomness.
{ "cite_N": [ "@cite_19", "@cite_13", "@cite_11" ], "mid": [ "2148488235", "1914859158", "2611161794" ], "abstract": [ "We give an exponential separation between one-way quantum and classical communication protocols for twopartial Boolean functions, both of which are variants of the Boolean Hidden Matching Problem of Bar- Earlier such an exponential separation was known only for a relational version of the Hidden Matching Problem. Our proofs use the Fourier coefficients inequality of Kahn, Kalai, and Linial. We give a number of applications of this separation. In particular, in the bounded-storage model of cryptography we exhibita scheme that is secure against adversaries with a certain amount of classical storage, but insecure against adversaries with a similar (or even much smaller) amount of quantum storage; in the setting of privacy amplification, we show that there are strong extractors that yield a classically secure key, but are insecure against a quantum adversary.", "We study a model of communication complexity that encompasses many well-studied problems, including classical and quantum communication complexity, the complexity of simulating distributions arising from bipartite measurements of shared quantum states, and XOR games. In this model, Alice gets an input x, Bob gets an input y, and their goal is to each produce an output a, b distributed according to some pre-specified joint distribution p(a, b|x, y). Our results apply to any non-signaling distribution, that is, those where Alice's marginal distribution does not depend on Bob's input, and vice versa. By taking a geometric view of the non-signaling distributions, we introduce a simple new technique based on affine combinations of lower-complexity distributions, and we give the first general technique to apply to all these settings, with elementary proofs and very intuitive interpretations. Specifically, we introduce two complexity measures, one which gives lower bounds on classical communication, and one for quantum communication. These measures can be expressed as convex optimization problems. We show that the dual formulations have a striking interpretation, since they coincide with maximum violations of Bell and Tsirelson inequalities. The dual expressions are closely related to the winning probability of XOR games. Despite their apparent simplicity, these lower bounds subsume many known communication complexity lower bound methods, most notably the recent lower bounds of Linial and Shraibman for the special case of Boolean functions. We show that as in the case of Boolean functions, the gap between the quantum and classical lower bounds is at most linear in the size of the support of the distribution, and does not depend on the size of the inputs. This translates into a bound on the gap between maximal Bell and Tsirelson inequality violations, which was previously known only for the case of distributions with Boolean outcomes and uniform marginals. It also allows us to show that for some distributions, information theoretic methods are necessary to prove strong lower bounds. Finally, we give an exponential upper bound on quantum and classical communication complexity in the simultaneous messages model, for any non-signaling distribution. 
One consequence of this is a simple proof that any quantum distribution can be approximated with a constant number of bits of communication.", "Although a quantum state requires exponentially many classical bits to describe, the laws of quantum mechanics impose severe restrictions on how that state can be accessed. This paper shows in three settings that quantum messages have only limited advantages over classical ones. First, we show that BQP qpoly is contained in PP poly, where BQP qpoly is the class of problems solvable in quantum polynomial time, given a polynomial-size \"quantum advice state\" that depends only on the input length. This resolves a question of Buhrman, and means that we should not hope for an unrelativized separation between quantum and classical advice. Underlying our complexity result is a general new relation between deterministic and quantum one-way communication complexities, which applies to partial as well as total functions. Second, we construct an oracle relative to which NP is not contained in BQP qpoly. To do so, we use the polynomial method to give the first correct proof of a direct product theorem for quantum search. This theorem has other applications; for example, it can be used to fix a flawed result of Klauck about quantum time-space tradeoffs for sorting. Third, we introduce a new trace distance method for proving lower bounds on quantum one-way communication complexity. Using this method, we obtain optimal quantum lower bounds for two problems of Ambainis, for which no nontrivial lower bounds were previously known even for classical randomized protocols." ] }
0706.3265
1929816335
This paper studies the gap between quantum one-way communication complexity Q(f) and its classical counterpart C(f), under the unbounded-error setting, i.e., it is enough that the success probability is strictly greater than 1/2. It is proved that for any (total or partial) Boolean function f, Q(f) = ⌈C(f)/2⌉, i.e., the former is always exactly one half as large as the latter. The result has an application to obtaining an exact bound for the existence of (m, n, p)-QRAC which is the n-qubit random access coding that can recover any one of m original bits with success probability ≥ p. We can prove that (m, n, > 1/2)-QRAC exists if and only if m ≤ 2^(2n) - 1. Previously, only the non-existence of (2^(2n), n, > 1/2)-QRAC was known.
Unbounded-error Models. Since the seminal paper @cite_17 , the unbounded-error (classical) one-way communication complexity has been studied further in the literature @cite_8 @cite_12 @cite_2 . (Note that in the classical setting, the difference in communication cost between the one-way and two-way models is at most @math bit.) Klauck @cite_5 also studied a variant of the unbounded-error quantum and classical communication complexity, called the weakly unbounded-error communication complexity: the cost is the number of communication (qu)bits plus @math , where @math is the success probability. He characterized the discrepancy, a useful measure for bounded-error communication complexity @cite_18 , in terms of the weakly unbounded-error communication complexity.
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_2", "@cite_5", "@cite_12", "@cite_17" ], "mid": [ "1914859158", "2104886711", "2008671796", "1515707347", "2949863531", "2027214648" ], "abstract": [ "We study a model of communication complexity that encompasses many well-studied problems, including classical and quantum communication complexity, the complexity of simulating distributions arising from bipartite measurements of shared quantum states, and XOR games. In this model, Alice gets an input x, Bob gets an input y, and their goal is to each produce an output a, b distributed according to some pre-specified joint distribution p(a, b|x, y). Our results apply to any non-signaling distribution, that is, those where Alice's marginal distribution does not depend on Bob's input, and vice versa. By taking a geometric view of the non-signaling distributions, we introduce a simple new technique based on affine combinations of lower-complexity distributions, and we give the first general technique to apply to all these settings, with elementary proofs and very intuitive interpretations. Specifically, we introduce two complexity measures, one which gives lower bounds on classical communication, and one for quantum communication. These measures can be expressed as convex optimization problems. We show that the dual formulations have a striking interpretation, since they coincide with maximum violations of Bell and Tsirelson inequalities. The dual expressions are closely related to the winning probability of XOR games. Despite their apparent simplicity, these lower bounds subsume many known communication complexity lower bound methods, most notably the recent lower bounds of Linial and Shraibman for the special case of Boolean functions. We show that as in the case of Boolean functions, the gap between the quantum and classical lower bounds is at most linear in the size of the support of the distribution, and does not depend on the size of the inputs. This translates into a bound on the gap between maximal Bell and Tsirelson inequality violations, which was previously known only for the case of distributions with Boolean outcomes and uniform marginals. It also allows us to show that for some distributions, information theoretic methods are necessary to prove strong lower bounds. Finally, we give an exponential upper bound on quantum and classical communication complexity in the simultaneous messages model, for any non-signaling distribution. One consequence of this is a simple proof that any quantum distribution can be approximated with a constant number of bits of communication.", "We show that almost all known lower bound methods for communication complexity are also lower bounds for the information complexity. In particular, we define a relaxed version of the partition bound of Jain and Klauck and prove that it lower bounds the information complexity of any function. Our relaxed partition bound subsumes all norm based methods (e.g. the ?2 method) and rectangle-based methods (e.g. the rectangle corruption bound, the smooth rectangle bound, and the discrepancy bound), except the partition bound. Our result uses a new connection between rectangles and zero-communication protocols where the players can either output a value or abort. 
We prove the following compression lemma: given a protocol for a function f with information complexity I, one can construct a zero-communication protocol that has non-abort probability at least 2^ -O(I) and that computes f correctly with high probability conditioned on not aborting. Then, we show how such a zero-communication protocol relates to the relaxed partition bound. We use our main theorem to resolve three of the open questions raised by Braver man. First, we show that the information complexity of the Vector in Subspace Problem is O(n^ 1 3 ), which, in turn, implies that there exists an exponential separation between quantum communication complexity and classical information complexity. Moreover, we provide an O(n) lower bound on the information complexity of the Gap Hamming Distance Problem.", "Communication is a bottleneck in many distributed computations. In VLSI, communication constraints dictate lower bounds on the performance of chips. The two-processor information transfer model measures the communication requirements to compute functions. We study the unbounded error probabilistic version of this model. Because of its weak notion of correct output, we believe that this model measures the “intrinsic” communication complexity of functions. We present exact characterizations of the unbounded error communication complexity in terms of arrangements of hyperplanes and approximations of matrices. These characterizations establish the connection with certain classical problems in combinatorial geometry which are concerned with the configurations of points in d-dimensional real space. With the help of these characterizations, we obtain some upper and lower bounds on communication complexity. The upper bounds which we obtained for the functions—equality and verification of Hamming distance—are considerably better than their counterparts in the deterministic, the nondeterministic, and the bounded error probabilistic models. We also exhibit a function which has log n complexity. We present a counting argument to show that most functions have linear complexity. Further, we apply the logarithmic lower bound on communication complexity to obtain an Ω(n log n) bound on the time of 1-tape unbounded error probabilistic Turing machines. We believe that this is the first nontrivial lower bound obtained for such machines.", "Recently, Forster [7] proved a new lower bound on probabilistic communication complexity in terms of the operator norm of the communication matrix. In this paper, we want to exploit the various relations between communication complexity of distributed Boolean functions, geometric questions related to half space representations of these functions, and the computational complexity of these functions in various restricted models of computation. In order to widen the range of applicability of Forster's bound, we start with the derivation of a generalized lower bound. We present a concrete family of distributed Boolean functions where the generalized bound leads to a linear lower bound on the probabilistic communication complexity (and thus to an exponential lower bound on the number of Euclidean dimensions needed for a successful half space representation), whereas the old bound fails. We move on to a geometric characterization of the well known communication complexity class C-PP in terms of half space representations achieving a large margin. 
Our characterization hints to a close connection between the bounded error model of probabilistic communication complexity and the area of large margin classification. In the final section of the paper, we describe how our techniques can be used to prove exponential lower bounds on the size of depth-2 threshold circuits (with still some technical restrictions). Similar results can be obtained for read-k-times randomized ordered binary decision diagram and related models.", "We prove new lower bounds for bounded error quantum communication complexity. Our methods are based on the Fourier transform of the considered functions. First we generalize a method for proving classical communication complexity lower bounds developed by Raz to the quantum case. Applying this method we give an exponential separation between bounded error quantum communication complexity and nondeterministic quantum communication complexity. We develop several other lower bound methods based on the Fourier transform, notably showing that s (f) n , for the average sensitivity s (f) of a function f, yields a lower bound on the bounded error quantum communication complexity of f(x AND y XOR z), where x is a Boolean word held by Alice and y,z are Boolean words held by Bob. We then prove the first large lower bounds on the bounded error quantum communication complexity of functions, for which a polynomial quantum speedup is possible. For all the functions we investigate, the only previously applied general lower bound method based on discrepancy yields bounds that are O( n).", "We prove three different types of complexity lower bounds for the one-way unbounded-error and bounded-error error probabilistic communication protocols for boolean functions. The lower bounds are proved in terms of the deterministic communication complexity of functions and in terms of the notion “probabilistic communication characteristic” that we define. We present boolean functions with the different probabilistic communication characteristics which demonstrates that each of these lower bounds can be more precise than the others depending on the probabilistic communication characteristics of a function. Our lower bounds are good enough for proving that proper hierarchy for one-way probabilistic communication complexity classes depends on a measure of bounded error. As the application of lower bounds for probabilistic communication complexity, we prove two different types of complexity lower bounds for the one-way bounded-error error probabilistic space complexity. Our lower bounds are good enough for proving proper hierarchies for different one-way probabilistic space communication complexity classes inside SPACE(n) (namely for bounded error probabilistic computation, and for errors of probabilistic computation)." ] }
0706.4298
1624353584
How to pass from local to global scales in anonymous networks? How to organize a self-stabilizing propagation of information with feedback? From the Angluin impossibility results, we cannot elect a leader in a general anonymous network. Thus, it is impossible to build a rooted spanning tree, and many problems can only be solved by probabilistic methods. In this paper we show how to use Unison to design a self-stabilizing barrier synchronization in an anonymous network. We show that the communication structure of this barrier synchronization yields a self-stabilizing wave stream, or pipelining wave, in anonymous networks. We introduce two variants of waves: strong waves and wavelets. A strong wave can be used to solve the idempotent r-operator parametrized computation problem, while a wavelet deals with k-distance computation. We show how to use Unison to design a self-stabilizing wave stream, a self-stabilizing strong wave stream and a self-stabilizing wavelet stream.
@cite_3 designs a self-stabilizing barrier synchronization algorithm for asynchronous anonymous complete networks. For other topologies the authors use a rooted network, so the program is not uniform but only semi-uniform. An interesting question is to give a solution to this problem in a general connected asynchronous anonymous network. As far as we know, the algorithm of @cite_2 is the only decentralised uniform wave algorithm for a general anonymous network. This algorithm requires that the processors know the diameter, or more simply a common upper bound @math on the diameter. This algorithm is not self-stabilizing.
{ "cite_N": [ "@cite_3", "@cite_2" ], "mid": [ "1520594164", "2623176784" ], "abstract": [ "Consider a distributed network of n nodes that is connected to a global source of \"beats\". All nodes receive the \"beats\" simultaneously, and operate in lock-step. A scheme that produces a \"pulse\" every Cycle beats is shown. That is, the nodes agree on \"special beats\", which are spaced Cycle beats apart. Given such a scheme, a clock synchronization algorithm is built. The \"pulsing\" scheme is self-stabilized despite any transient faults and the continuous presence of up to f < n 3 Byzantine nodes. Therefore, the clock synchronization built on top of the \"pulse\" is highly fault tolerant. In addition, a highly fault tolerant general stabilizer algorithm is constructed on top of the \"pulse\" mechanism. Previous clock synchronization solutions, operating in the exact same model as this one, either support f < n 4 and converge in linear time, or support f < n 3 and have exponential convergence time that also depends on the value of max-clock (the clock wrap around value). The proposed scheme combines the best of both worlds: it converges in linear time that is independent of max-clock and is tolerant to up to f < n 3 Byzantine nodes. Moreover, considering problems in a self-stabilizing, Byzantine tolerant environment that require nodes to know the global state (clock synchronization, token circulation, agreement, etc.), the work presented here is the first protocol to operate in a network that is not fully connected.", "Consider a complete communication network of @math nodes, where the nodes receive a common clock pulse. We study the synchronous @math -counting problem: given any starting state and up to @math faulty nodes with arbitrary behavior, the task is to eventually have all correct nodes labeling the pulses with increasing values modulo @math in agreement. Thus, we are considering algorithms that are self-stabilizing despite Byzantine failures. In this work, we give new algorithms for the synchronous counting problem that (1) are deterministic, (2) have optimal resilience, (3) have a linear stabilization time in @math (asymptotically optimal), (4) use a small number of states, and, consequently, (5) communicate a small number of bits per round. Prior algorithms either resort to randomization, use a large number of states and need high communication bandwidth, or have suboptimal resilience. In particular, we achieve an exponential improvement in both state complexity and message size for deterministic algorithms. Moreover, ..." ] }
1406.6937
2133769104
The most common method to validate a DEVS model against the requirements is to simulate it several times under different conditions with some simulation tool. The behavior of the model is compared with what the system is supposed to do. The number of different scenarios to simulate is usually infinite; therefore, selecting them becomes a crucial task. In practice, this selection is made following the experience or intuition of an engineer. Here we present a family of criteria to conduct DEVS model simulations in a disciplined way, covering the most significant simulations, to increase the confidence in the model. This is achieved by analyzing the mathematical representation of the DEVS model; thus, part of the validation process can be automated.
There are several works that use verification techniques, like model checking, to verify the correctness of a model. For instance, Napoli and Parente @cite_30 present a model-checking algorithm for Hierarchical Finite State Machines as an abstract DEVS model. They also address the generation of simulation configurations for DEVS, but as counter-examples obtained by applying their model-checking algorithm. Another relevant and recent work involving verification techniques is @cite_12 , where Saadawi and Wainer introduce a new extension to the DEVS formalism, called Rational Time-Advance DEVS (RTA-DEVS). RTA-DEVS models can be formally checked with standard model-checking algorithms and tools. Further, they introduce a methodology to transform classic DEVS models into RTA-DEVS models, allowing formal verification of classic DEVS. Although model-checking techniques are formally defined and useful to prove properties and theorems over a model, their main problem is the so-called state explosion problem @cite_13 , i.e. the exponential blowup of the state space and variables in any real or practical system. This makes it almost impossible to use such techniques in large projects, although model checking has been used in real projects.
{ "cite_N": [ "@cite_30", "@cite_13", "@cite_12" ], "mid": [ "2110806620", "2075969214", "1498432697" ], "abstract": [ "Real-time systems modeling and verification is a complex task. In many cases, formal methods have been employed to deal with the complexity of these systems, but checking those models is usually unfeasible. Modeling and simulation methods introduce a means of validating these model's specifications. In particular, Discrete Event System Specification (DEVS) models can be used for this purpose. Here, we introduce a new extension to the DEVS formalism, called the Rational Time-Advance DEVS (RTA-DEVS), which permits modeling the behavior of real-time systems that can be modeled by the classical DEVS; however, RTA-DEVS models can be formally checked with standard model-checking algorithms and tools. In order to do so, we introduce a procedure to create timed automata (TA) models that are behaviorally equivalent to the original RTA-DEVS models. This enables the use of the available TA tools and theories for formal model checking. Further, we introduce a methodology to transform classic DEVS models to RTA-DEVS models, thus enabling formal verification of classic DEVS with an acceptable accuracy.", "While model checking of pushdown systems is by now an established technique in software verification, temporal logics and automata traditionally used in this area are unattractive on two counts. First, logics and automata traditionally used in model checking cannot express requirements such as pre post-conditions that are basic to analysis of software. Second, unlike in the finite-state world, where the μ-calculus has a symbolic model-checking algorithm and serves as an “assembly language” to which temporal logics can be compiled, there is no common formalism—either fixpoint-based or automata-theoretic—to model-check requirements on pushdown models. In this article, we introduce a new theory of temporal logics and automata that addresses the above issues, and provides a unified foundation for the verification of pushdown systems. The key idea here is to view a program as a generator of structures known as nested trees as opposed to trees. A fixpoint logic (called N T -μ) and a class of automata (called nested tree automata) interpreted on languages of these structures are now defined, and branching-time model-checking is phrased as language inclusion and membership problems for these languages. We show that N T -μ and nested tree automata allow the specification of a new frontier of requirements usable in software verification. At the same time, their model checking problem has the same worst-case complexity as their traditional analogs, and can be solved symbolically using a fixpoint computation that generalizes, and includes as a special case, “summary”-based computations traditionally used in interprocedural program analysis. We also show that our logics and automata define a robust class of languages—in particular, just as the μ-calculus is equivalent to alternating parity automata on trees, NT-μ is equivalent to alternating parity automata on nested trees.", "Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. 
One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature." ] }
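Since the rows in this group reason about the mathematical structure of DEVS models (states, time advance, internal and external transitions, output), a minimal executable rendering of an atomic DEVS model may help fix ideas. The sketch below implements a very small abstract-simulator loop for a single atomic model and instantiates it with a hypothetical two-state "processor"; it follows the classic (S, X, Y, delta_int, delta_ext, lambda, ta) structure but is not the simulation engine of any of the cited tools, and the example model (including its restart-on-arrival behavior) is invented for illustration.

```python
# Sketch: a minimal atomic DEVS model and a single-model abstract simulator.
# Structure: states S, inputs X, outputs Y, delta_int, delta_ext, lambda, ta.
import math
from dataclasses import dataclass

@dataclass
class Processor:
    """Hypothetical atomic model: idle until a job arrives, then busy for 2.0
    time units; a job arriving while busy restarts the service in this toy."""
    state: str = "idle"

    def ta(self):                          # time-advance function
        return 2.0 if self.state == "busy" else math.inf

    def delta_int(self):                   # internal transition
        self.state = "idle"

    def delta_ext(self, elapsed, x):       # external transition
        self.state = "busy"

    def out(self):                         # output function (lambda)
        return "done"

def simulate(model, external_events, t_end):
    """external_events: time-ordered list of (time, value) pairs."""
    t, last = 0.0, 0.0
    events = list(external_events)
    while t < t_end:
        t_int = last + model.ta()          # next scheduled internal event
        t_ext = events[0][0] if events else math.inf
        t = min(t_int, t_ext)
        if t > t_end:
            break
        if t_int <= t_ext:                 # internal event fires first
            print(f"t={t:.1f} output={model.out()}")
            model.delta_int()
        else:                              # external event fires first
            _, x = events.pop(0)
            model.delta_ext(t - last, x)
        last = t

if __name__ == "__main__":
    simulate(Processor(), [(1.0, "job1"), (1.5, "job2"), (6.0, "job3")], t_end=10.0)
```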
1406.6937
2133769104
The most common method to validate a DEVS model against the requirements is to simulate it several times under different conditions with some simulation tool. The behavior of the model is compared with what the system is supposed to do. The number of different scenarios to simulate is usually infinite; therefore, selecting them becomes a crucial task. In practice, this selection is made following the experience or intuition of an engineer. Here we present a family of criteria to conduct DEVS model simulations in a disciplined way, covering the most significant simulations, to increase the confidence in the model. This is achieved by analyzing the mathematical representation of the DEVS model; thus, part of the validation process can be automated.
K. J. Hong and T. G. Kim @cite_31 introduce a method for the verification of discrete event models. They propose a formalism, Time State Reachability Graph (TSRG), to specify modules of a discrete event model, and a methodology for the generation of test sequences to test such modules at an I/O level. A graph-theoretical analysis of the TSRG then generates all possible timed I/O sequences, from which a test set of timed I/O sequences with 100% test coverage of an implementation under test is obtained. Another recent work that applies verification techniques over discrete event simulation is @cite_24 , where da Silva and de Melo present a method to perform simulations in an orderly way and verify properties about them using transition systems. Both the possible simulation paths and the property to be verified are described as transition systems. The verification is achieved by building a special kind of synchronous product between these two transition systems. They focus their work on the verification of properties by simulation, but not on the generation of simulations in order to validate the model.
{ "cite_N": [ "@cite_24", "@cite_31" ], "mid": [ "2118645685", "2167672803" ], "abstract": [ "Model verification examines the correctness of a model implementation with respect to a model specification. While being described from model specification, implementation prepares to execute or evaluate a simulation model by a computer program. Viewing model verification as a program test this paper proposes a method for generation of test sequences that completely covers all possible behavior in specification at an I O level. Timed State Reachability Graph (TSRG) is proposed as a means of model specification. Graph theoretical analysis of TSRG has generated a test set of timed I O event sequences, which guarantees 100 test coverage of an implementation under test.", "We study the problem of model checking software product line (SPL) behaviours against temporal properties. This is more difficult than for single systems because an SPL with n features yields up to 2n individual systems to verify. As each individual verification suffers from state explosion, it is crucial to propose efficient formalisms and heuristics. We recently proposed featured transition systems (FTS), a compact representation for SPL behaviour, and defined algorithms for model checking FTS against linear temporal properties. Although they showed to outperform individual system verifications, they still face a state explosion problem as they enumerate and visit system states one by one. In this paper, we tackle this latter problem by using symbolic representations of the state space. This lead us to consider computation tree logic (CTL) which is supported by the industry-strength symbolic model checker NuSMV. We first lay the foundations for symbolic SPL model checking by defining a feature-oriented version of CTL and its dedicated algorithms. We then describe an implementation that adapts the NuSMV language and tool infrastructure. Finally, we propose theoretical and empirical evaluations of our results. The benchmarks show that for certain properties, our algorithm is over a hundred times faster than model checking each system with the standard algorithm." ] }
1406.6937
2133769104
The most common method to validate a DEVS model against the requirements is to simulate it several times under different conditions with some simulation tool. The behavior of the model is compared with what the system is supposed to do. The number of different scenarios to simulate is usually infinite; therefore, selecting them becomes a crucial task. In practice, this selection is made following the experience or intuition of an engineer. Here we present a family of criteria to conduct DEVS model simulations in a disciplined way, covering the most significant simulations, to increase the confidence in the model. This is achieved by analyzing the mathematical representation of the DEVS model; thus, part of the validation process can be automated.
@cite_23 present a framework to test DEVS tools. In their framework they combine black-box and white-box testing approaches. This work is only loosely related to ours because they do not validate or verify a DEVS model; rather, they test DEVS implementations. However, it is useful to see how they introduce software testing techniques into the DEVS world.
{ "cite_N": [ "@cite_23" ], "mid": [ "2128006558" ], "abstract": [ "We present an extension of traditional \"black box\" fuzz testing using a genetic algorithm based upon a dynamic Markov model fitness heuristic. This heuristic allows us to \"intelligently\" guide input selection based upon feedback concerning the \"success\" of past inputs that have been tried. Unlike many software testing tools, our implementation is strictly based upon binary code and does not require that source code be available. Our evaluation on a Windows server program shows that this approach is superior to random black box fuzzing for increasing code coverage and depth of penetration into program control flow logic. As a result, the technique may be beneficial to the development of future automated vulnerability analysis tools." ] }
1406.6529
2282629128
We analyze general trends and patterns in time series that characterize the dynamics of collective attention to social media services and Web-based businesses. Our study is based on search frequency data available from Google Trends and considers 175 different services. For each service, we collect data from 45 different countries as well as global averages. This way, we obtain more than 8,000 time series which we analyze using diffusion models from the economic sciences. We find that these models accurately characterize the empirical data, and our analysis reveals that collective attention to social media grows and subsides in a highly regular and predictable manner. Regularities persist across regions, cultures, and topics and thus hint at general mechanisms that govern the adoption of Web-based services. We discuss several cases in detail to highlight interesting findings. Our methods are of economic interest as they may inform investment decisions and can help assess at what stage of the general life-cycle a Web service is.
Statistical distributions similar to the ones considered in this paper have previously been applied to characterize the dynamics of the behavior of crowds of Web users. In an early contribution, @cite_29 analyzed browsing behaviors and found that the number of links a user is likely to follow on a Web site is distributed according to an inverse Gaussian. In @cite_34 , Wu and Huberman studied life-cycles of news items on a social bookmarking site and found that the amount of attention novel content receives is distributed log-normally. The log-normal distribution was also found to model the sizes of cascades of messages passed through a peer-to-peer recommendation network @cite_25 and the number of messages exchanged in instant messaging services @cite_5 .
{ "cite_N": [ "@cite_5", "@cite_29", "@cite_34", "@cite_25" ], "mid": [ "2116338588", "1996359281", "1980722028", "2289705509" ], "abstract": [ "We study the distribution of the activity period of users in five of the largest localized versions of the free, on- line encyclopedia Wikipedia. We find it to be consis- tent with a mixture of two truncated log-normal distri- butions. Using this model, the temporal evolution of these systems can be analyzed, showing that the statis- tical description is consistent over time. contributions: 1. We find all datasets to be consistent with the hypothesis that the lifetime of an user account is described by the su- perposition of two truncated log-normal distributions. An interpretation for this phenomenon is that two different regimes govern the participation of individuals to these versions of the Wikipedia project: the occasional users, who fail to find interest in the project after the first few at- tempts to contribute, and the long-term users, whose with- drawal is probably more related to external factors like the loss of personal incentives in contributing and similar causes. 2. Using our model, we characterize how the participation of users over time evolves, as the system ages. We find that the statistical description of the one-timers is stable over time, while the properties of the group of long-term users change as a consequence of the aging of the system. 3. We find evidence that the inter-edit time distribution de- cays with an heavy tail. In view of this finding, we check that our analysis is not affected by the choice of the pa- rameter used for determining when an user is to be con- sidered \"inactive\"; we find that for the one-timers it has no quantitative effect. For the statistics of the long-lived users we find instead a very good qualitative agreement.", "Human behaviour is highly individual by nature, yet statistical structures are emerging which seem to govern the actions of human beings collectively. Here we search for universal statistical laws dictating the timing of human actions in communication decisions. We focus on the distribution of the time interval between messages in human broadcast communication, as documented in Twitter, and study a collection of over 160,000 tweets for three user categories: personal (controlled by one person), managed (typically PR agency controlled) and bot-controlled (automated system). To test our hypothesis, we investigate whether it is possible to differentiate between user types based on tweet timing behaviour, independently of the content in messages. For this purpose, we developed a system to process a large amount of tweets for reality mining and implemented two simple probabilistic inference algorithms: 1. a naive Bayes classifier, which distinguishes between two and three account categories with classification performance of 84.6 and 75.8 , respectively and 2. a prediction algorithm to estimate the time of a user's next tweet with an . Our results show that we can reliably distinguish between the three user categories as well as predict the distribution of a user's inter-message time with reasonable accuracy. More importantly, we identify a characteristic power-law decrease in the tail of inter-message time distribution by human users which is different from that obtained for managed and automated accounts. 
This result is evidence of a universal law that permeates the timing of human decisions in broadcast communication and extends the findings of several previous studies of peer-to-peer communication.", "We study the detailed growth of a social networking site with full temporal information by examining the creation process of each friendship relation that can collectively lead to the macroscopic properties of the network. We first study the reciprocal behavior of users, and find that link requests are quickly responded to and that the distribution of reciprocation intervals decays in an exponential form. The degrees of inviters accepters are slightly negatively correlative with reciprocation time. In addition, the temporal feature of the online community shows that the distributions of intervals of user behaviors, such as sending or accepting link requests, follow a power law with a universal exponent, and peaks emerge for intervals of an integral day. We finally study the preferential selection and linking phenomena of the social networking site and find that, for the former, a linear preference holds for preferential sending and reception, and for the latter, a linear preference also holds for preferential acceptance, creation, and attachment. Based on the linearly preferential linking, we put forward an analyzable network model which can reproduce the degree distribution of the network. The research framework presented in the paper could provide a potential insight into how the micro-motives of users lead to the global structure of online social networks.", "Information cascades on social networks, such as retweet cascades on Twitter, have been often viewed as an epidemiological process, with the associated notion of virality to capture popular cascades that spread across the network. The notion of structural virality or average path length has been posited as a measure of global spread. In this paper, we argue that this simple epidemiological view, though analytically compelling, is not the entire story. We first show empirically that the classical SIR diffusion process on the Twitter graph, even with the best possible distribution of infectiousness parameter, cannot explain the nature of observed retweet cascades on Twitter. More specifically, rather than spreading further from the source as the SIR model would predict, many cascades that have several retweets from direct followers, die out quickly beyond that. We show that our empirical observations can be reconciled if we take interests of users and tweets into account. In particular, we consider a model where users have multi-dimensional interests, and connect to other users based on similarity in interests. Tweets are correspondingly labeled with interests, and propagate only in the subgraph of interested users via the SIR process. In this model, interests can be either narrow or broad, with the narrowest interest corresponding to a star graph on the interested users, with the root being the source of the tweet, and the broadest interest spanning the whole graph. We show that if tweets are generated using such a mix of interests, coupled with a varying infectiousness parameter, then we can qualitatively explain our observation that cascades die out much more quickly than is predicted by the SIR model. In the same breath, this model also explains how cascades can have large size, but low \"structural virality\" or average path length." ] }
1406.6529
2282629128
We analyze general trends and patterns in time series that characterize the dynamics of collective attention to social media services and Web-based businesses. Our study is based on search frequency data available from Google Trends and considers 175 different services. For each service, we collect data from 45 different countries as well as global averages. This way, we obtain more than 8,000 time series, which we analyze using diffusion models from the economic sciences. We find that these models accurately characterize the empirical data, and our analysis reveals that collective attention to social media grows and subsides in a highly regular and predictable manner. Regularities persist across regions, cultures, and topics and thus hint at general mechanisms that govern the adoption of Web-based services. We discuss several cases in detail to highlight interesting findings. Our methods are of economic interest as they may inform investment decisions and can help in assessing at what stage of its general life-cycle a Web service is.
While attention dynamics on shorter time scales have been modeled using random fields @cite_15 , structured models @cite_12 , or differential equations @cite_22 , long-term temporal dynamics of collective attention have previously been modeled using mixtures of power-law and Poisson distributions @cite_4 or systems of differential equations @cite_30 @cite_25 that were inspired by techniques from the area of epidemic modeling @cite_26 @cite_9 . In this context, we note that the diffusion models considered in this paper also allow for interpretations in terms of the dynamics of elementary differential equations. For instance, the Weibull model can be expressed as @math , which hints at a similarity in spirit between economic diffusion and established epidemic models that seems to merit further research.
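The @math placeholder hides the concrete equation. Assuming the standard two-parameter Weibull adoption curve, one plausible reconstruction of the intended differential form is the following; note how the adoption rate is proportional to the not-yet-adopted fraction, which is indeed reminiscent of an SI-type epidemic equation with a time-varying infection rate.

```latex
% Sketch (assumption): Weibull adoption curve and its differential form.
% F(t) denotes the fraction of eventual adopters that have adopted by time t.
\begin{align}
  F(t) &= 1 - e^{-(t/\lambda)^{k}}, \\
  \frac{dF(t)}{dt} &= \frac{k}{\lambda}\left(\frac{t}{\lambda}\right)^{k-1}\bigl(1 - F(t)\bigr).
\end{align}
% The adoption rate is a time-varying hazard (k/lambda)(t/lambda)^(k-1) applied to
% the remaining susceptible fraction 1 - F(t).
```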
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_4", "@cite_22", "@cite_9", "@cite_15", "@cite_25", "@cite_12" ], "mid": [ "2042034885", "2086627840", "2144588366", "2016674662", "2398538542", "2028055861", "2056284729", "2043890595" ], "abstract": [ "We study the relaxation response of a social system after endogenous and exogenous bursts of activity using the time series of daily views for nearly 5 million videos on YouTube. We find that most activity can be described accurately as a Poisson process. However, we also find hundreds of thousands of examples in which a burst of activity is followed by an ubiquitous power-law relaxation governing the timing of views. We find that these relaxation exponents cluster into three distinct classes and allow for the classification of collective human dynamics. This is consistent with an epidemic model on a social network containing two ingredients: a power-law distribution of waiting times between cause and action and an epidemic cascade of actions becoming the cause of future actions. This model is a conceptual extension of the fluctuation-dissipation theorem to social systems [Ruelle, D (2004) Phys Today 57:48–53] and [Roehner BM, et al, (2004) Int J Mod Phys C 15:809–834], and provides a unique framework for the investigation of timing in complex systems.", "The population dynamics underlying the diffusion of ideas hold many qualitative similarities to those involved in the spread of infections. In spite of much suggestive evidence this analogy is hardly ever quantified in useful ways. The standard benefit of modeling epidemics is the ability to estimate quantitatively population average parameters, such as interpersonal contact rates, incubation times, duration of infectious periods, etc. In most cases such quantities generalize naturally to the spread of ideas and provide a simple means of quantifying sociological and behavioral patterns. Here we apply several paradigmatic models of epidemics to empirical data on the advent and spread of Feynman diagrams through the theoretical physics communities of the USA, Japan, and the USSR in the period immediately after World War II. This test case has the advantage of having been studied historically in great detail, which allows validation of our results. We estimate the effectiveness of adoption of the idea in the three communities and find values for parameters reflecting both intentional social organization and long lifetimes for the idea. These features are probably general characteristics of the spread of ideas, but not of common epidemics.", "Epidemic algorithms have recently been proposed as an effective solution for disseminating information in large-scale peer-to-peer (P2P) systems and in mobile ad hoc networks (MANET). In this paper, we present a modeling approach for steady-state analysis of epidemic dissemination of information in MANET. As major contribution, the introduced approach explicitly represents the spread of multiple data items, finite buffer capacity at mobile devices and a least recently used buffer replacement scheme. Using the introduced modeling approach, we analyze seven degrees of separation (7DS) as one well-known approach for implementing P2P data sharing in a MANET using epidemic dissemination of information. A validation of results derived from the analytical model against simulation shows excellent agreement. 
Quantitative performance curves derived from the analytical model yield several insights for optimizing the system design of 7DS.", "Among the realistic ingredients to be considered in the computational modeling of infectious diseases, human mobility represents a crucial challenge both on the theoretical side and in view of the limited availability of empirical data. To study the interplay between short-scale commuting flows and long-range airline traffic in shaping the spatiotemporal pattern of a global epidemic we (i) analyze mobility data from 29 countries around the world and find a gravity model able to provide a global description of commuting patterns up to 300 kms and (ii) integrate in a worldwide-structured metapopulation epidemic model a timescale-separation technique for evaluating the force of infection due to multiscale mobility processes in the disease dynamics. Commuting flows are found, on average, to be one order of magnitude larger than airline flows. However, their introduction into the worldwide model shows that the large-scale pattern of the simulated epidemic exhibits only small variations with respect to the baseline case where only airline traffic is considered. The presence of short-range mobility increases, however, the synchronization of subpopulations in close proximity and affects the epidemic behavior at the periphery of the airline transportation infrastructure. The present approach outlines the possibility for the definition of layered computational approaches where different modeling assumptions and granularities can be used consistently in a unifying multiscale framework.", "The spread of epidemics and malware is commonly modeled by diffusion processes on networks. Protective interventions such as vaccinations or installing anti-virus software are used to contain their spread. Typically, each node in the network has to decide its own strategy of securing itself, and its benefit depends on which other nodes are secure, making this a natural game-theoretic setting. There has been a lot of work on network security game models, but most of the focus has been either on simplified epidemic models or homogeneous network structure. We develop a new formulation for an epidemic containment game, which relies on the characterization of the SIS model in terms of the spectral radius of the network. We show in this model that pure Nash equilibria (NE) always exist, and can be found by a best response strategy. We analyze the complexity of finding NE, and derive rigorous bounds on their costs and the Price of Anarchy or PoA (the ratio of the cost of the worst NE to the optimum social cost) in general graphs as well as in random graph models. In particular, for arbitrary power-law graphs with exponent β > 2, we show that the PoA is bounded by O(T2(β -1))), where T = γ α is the ratio of the recovery rate to the transmission rate in the SIS model. We prove that this bound is tight up to a constant factor for the Chung-Lu random power-law graph model. We study the characteristics of Nash equilibria empirically in different real communication and infrastructure networks, and find that our analytical results can help explain some of the empirical observations.", "Models of networked diffusion that are motivated by analogy with the spread of infectious disease have been applied to a wide range of social and economic adoption processes, including those related to new products, ideas, norms and behaviors. 
However, it is unknown how accurately these models account for the empirical structure of diffusion over networks. Here we describe the diffusion patterns arising from seven online domains, ranging from communications platforms to networked games to microblogging services, each involving distinct types of content and modes of sharing. We find strikingly similar patterns across all domains. In particular, the vast majority of cascades are small, and are described by a handful of simple tree structures that terminate within one degree of an initial adopting \"seed.\" In addition we find that structures other than these account for only a tiny fraction of total adoptions; that is, adoptions resulting from chains of referrals are extremely rare. Finally, even for the largest cascades that we observe, we find that the bulk of adoptions often takes place within one degree of a few dominant individuals. Together, these observations suggest new directions for modeling of online adoption processes.", "The website wheresgeorge.com invites its users to enter the serial numbers of their US dollar bills and track them across America and beyond. Why? “For fun and because it had not been done yet”, they say. But the dataset accumulated since December 1998 has provided the ideal raw material to test the mathematical laws underlying human travel, and that has important implications for the epidemiology of infectious diseases. Analysis of the trajectories of over half a million dollar bills shows that human dispersal is described by a ‘two-parameter continuous-time random walk’ model: our travel habits conform to a type of random proliferation known as ‘superdiffusion’. And with that much established, it should soon be possible to develop a new class of models to account for the spread of human disease. The dynamic spatial redistribution of individuals is a key driving force of various spatiotemporal phenomena on geographical scales. It can synchronize populations of interacting species, stabilize them, and diversify gene pools1,2,3. Human travel, for example, is responsible for the geographical spread of human infectious disease4,5,6,7,8,9. In the light of increasing international trade, intensified human mobility and the imminent threat of an influenza A epidemic10, the knowledge of dynamical and statistical properties of human travel is of fundamental importance. Despite its crucial role, a quantitative assessment of these properties on geographical scales remains elusive, and the assumption that humans disperse diffusively still prevails in models. Here we report on a solid and quantitative assessment of human travelling statistics by analysing the circulation of bank notes in the United States. Using a comprehensive data set of over a million individual displacements, we find that dispersal is anomalous in two ways. First, the distribution of travelling distances decays as a power law, indicating that trajectories of bank notes are reminiscent of scale-free random walks known as Levy flights. Second, the probability of remaining in a small, spatially confined region for a time T is dominated by algebraically long tails that attenuate the superdiffusive spread. 
We show that human travelling behaviour can be described mathematically on many spatiotemporal scales by a two-parameter continuous-time random walk model to a surprising accuracy, and conclude that human travel on geographical scales is an ambivalent and effectively superdiffusive process.", "The systematic study of large-scale networks has unveiled the ubiquitous presence of connectivity patterns characterized by large-scale heterogeneities and unbounded statistical fluctuations. These features affect dramatically the behavior of the diffusion processes occurring on networks, determining the ensuing statistical properties of their evolution pattern and dynamics. In this article, we present a stochastic computational framework for the forecast of global epidemics that considers the complete worldwide air travel infrastructure complemented with census population data. We address two basic issues in global epidemic modeling: (i) we study the role of the large scale properties of the airline transportation network in determining the global diffusion pattern of emerging diseases; and (ii) we evaluate the reliability of forecasts and outbreak scenarios with respect to the intrinsic stochasticity of disease transmission and traffic flows. To address these issues we define a set of quantitative measures able to characterize the level of heterogeneity and predictability of the epidemic pattern. These measures may be used for the analysis of containment policies and epidemic risk assessment." ] }
1406.6529
2282629128
We analyze general trends and patterns in time series that characterize the dynamics of collective attention to social media services and Web-based businesses. Our study is based on search frequency data available from Google Trends and considers 175 different services. For each service, we collect data from 45 different countries as well as global averages. This way, we obtain more than 8,000 time series, which we analyze using diffusion models from the economic sciences. We find that these models accurately characterize the empirical data, and our analysis reveals that collective attention to social media grows and subsides in a highly regular and predictable manner. Regularities persist across regions, cultures, and topics and thus hint at general mechanisms that govern the adoption of Web-based services. We discuss several cases in detail to highlight interesting findings. Our methods are of economic interest as they may inform investment decisions and can help in assessing at what stage of its general life-cycle a Web service is.
With respect to time series retrieved from Google Trends, epidemic models based on differential equations involving exogenous and endogenous influences have been discussed in @cite_4 . There, they were used as a means of classifying, i.e. distinguishing, different types of attention dynamics. Trend analysis based on data from Google Trends was also performed in @cite_18 , yet there the focus was on developing clustering algorithms to characterize different phases in search frequency data. The approaches in @cite_18 @cite_4 are thus related to what is reported here; however, in contrast to these contributions, we do not explicitly devise new models but consider simpler representations that implicitly account for different kinds of dynamics. Due to the simplicity of the diffusion models considered here, and because of their apparent empirical validity and theoretical plausibility, the results reported in this paper provide a new baseline for research on the mechanisms and long-term dynamics of collective attention on the Web.
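To illustrate the kind of simpler representation meant here, the following sketch fits a Weibull-shaped diffusion curve to a synthetic search-frequency series using nonlinear least squares. The data, parameter values, and the scaled-density form of the curve are assumptions made for illustration; they are not taken from Google Trends or from the paper's exact fitting procedure.

```python
# Sketch: fitting a Weibull-type diffusion curve to a synthetic "search frequency" series.
import numpy as np
from scipy.optimize import curve_fit

def weibull_curve(t, a, k, lam):
    """Scaled Weibull density: a * (k/lam) * (t/lam)^(k-1) * exp(-(t/lam)^k)."""
    return a * (k / lam) * (t / lam) ** (k - 1) * np.exp(-((t / lam) ** k))

rng = np.random.default_rng(1)
t = np.arange(1, 121, dtype=float)                       # e.g. 120 months of data
truth = weibull_curve(t, a=100.0, k=2.5, lam=60.0)
series = truth + rng.normal(scale=2.0, size=t.size)      # noisy observations

params, _ = curve_fit(weibull_curve, t, series, p0=(50.0, 2.0, 50.0), maxfev=10000)
a_hat, k_hat, lam_hat = params
print(f"a={a_hat:.1f}, k={k_hat:.2f}, lambda={lam_hat:.1f}")

# The fitted curve rises, peaks, and declines -- the "grow and subside" pattern
# described above; for k > 1 its peak lam_hat * ((k_hat - 1) / k_hat) ** (1 / k_hat)
# can be read as the estimated moment of maximal collective attention.
```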
{ "cite_N": [ "@cite_18", "@cite_4" ], "mid": [ "2043890595", "2952042565" ], "abstract": [ "The systematic study of large-scale networks has unveiled the ubiquitous presence of connectivity patterns characterized by large-scale heterogeneities and unbounded statistical fluctuations. These features affect dramatically the behavior of the diffusion processes occurring on networks, determining the ensuing statistical properties of their evolution pattern and dynamics. In this article, we present a stochastic computational framework for the forecast of global epidemics that considers the complete worldwide air travel infrastructure complemented with census population data. We address two basic issues in global epidemic modeling: (i) we study the role of the large scale properties of the airline transportation network in determining the global diffusion pattern of emerging diseases; and (ii) we evaluate the reliability of forecasts and outbreak scenarios with respect to the intrinsic stochasticity of disease transmission and traffic flows. To address these issues we define a set of quantitative measures able to characterize the level of heterogeneity and predictability of the epidemic pattern. These measures may be used for the analysis of containment policies and epidemic risk assessment.", "The Nonlinear autoregressive exogenous (NARX) model, which predicts the current value of a time series based upon its previous values as well as the current and past values of multiple driving (exogenous) series, has been studied for decades. Despite the fact that various NARX models have been developed, few of them can capture the long-term temporal dependencies appropriately and select the relevant driving series to make predictions. In this paper, we propose a dual-stage attention-based recurrent neural network (DA-RNN) to address these two issues. In the first stage, we introduce an input attention mechanism to adaptively extract relevant driving series (a.k.a., input features) at each time step by referring to the previous encoder hidden state. In the second stage, we use a temporal attention mechanism to select relevant encoder hidden states across all time steps. With this dual-stage attention scheme, our model can not only make predictions effectively, but can also be easily interpreted. Thorough empirical studies based upon the SML 2010 dataset and the NASDAQ 100 Stock dataset demonstrate that the DA-RNN can outperform state-of-the-art methods for time series prediction." ] }
1406.6529
2282629128
We analyze general trends and patterns in time series that characterize the dynamics of collective attention to social media services and Web-based businesses. Our study is based on search frequency data available from Google Trends and considers 175 different services. For each service, we collect data from 45 different countries as well as global averages. This way, we obtain more than 8,000 time series, which we analyze using diffusion models from the economic sciences. We find that these models accurately characterize the empirical data, and our analysis reveals that collective attention to social media grows and subsides in a highly regular and predictable manner. Regularities persist across regions, cultures, and topics and thus hint at general mechanisms that govern the adoption of Web-based services. We discuss several cases in detail to highlight interesting findings. Our methods are of economic interest as they may inform investment decisions and can help in assessing at what stage of its general life-cycle a Web service is.
In a delightful synchronicity, Cannarella and Spechler @cite_17 , Ribeiro @cite_19 , and we ourselves @cite_8 all published analyses of how attention to social media evolves over time in early 2014. While @cite_17 was uploaded to arXiv, @cite_19 and @cite_8 were both presented at the International World Wide Web Conference in Seoul.
{ "cite_N": [ "@cite_19", "@cite_8", "@cite_17" ], "mid": [ "2159336657", "2078701302", "2112056172" ], "abstract": [ "We analyze the online response to the preprint publication of a cohort of 4,606 scientific articles submitted to the preprint database arXiv.org between October 2010 and May 2011. We study three forms of responses to these preprints: downloads on the arXiv.org site, mentions on the social media site Twitter, and early citations in the scholarly record. We perform two analyses. First, we analyze the delay and time span of article downloads and Twitter mentions following submission, to understand the temporal configuration of these reactions and whether one precedes or follows the other. Second, we run regression and correlation tests to investigate the relationship between Twitter mentions, arXiv downloads, and article citations. We find that Twitter mentions and arXiv downloads of scholarly articles follow two distinct temporal patterns of activity, with Twitter mentions having shorter delays and narrower time spans than arXiv downloads. We also find that the volume of Twitter mentions is statistically correlated with arXiv downloads and early citations just months after the publication of a preprint, with a possible bias that favors highly mentioned articles.", "Can we identify patterns of temporal activities caused by human communications in social media? Is it possible to model these patterns and tell if a user is a human or a bot based only on the timing of their postings? Social media services allow users to make postings, generating large datasets of human activity time-stamps. In this paper we analyze time-stamp data from social media services and find that the distribution of postings inter-arrival times (IAT) is characterized by four patterns: (i) positive correlation between consecutive IATs, (ii) heavy tails, (iii) periodic spikes and (iv) bimodal distribution. Based on our findings, we propose Rest-Sleep-and-Comment (RSC), a generative model that is able to match all four discovered patterns. We demonstrate the utility of RSC by showing that it can accurately fit real time-stamp data from Reddit and Twitter. We also show that RSC can be used to spot outliers and detect users with non-human behavior, such as bots. We validate RSC using real data consisting of over 35 million postings from Twitter and Reddit. RSC consistently provides a better fit to real data and clearly outperform existing models for human dynamics. RSC was also able to detect bots with a precision higher than 94 .", "Online content exhibits rich temporal dynamics, and diverse realtime user generated content further intensifies this process. However, temporal patterns by which online content grows and fades over time, and by which different pieces of content compete for attention remain largely unexplored. We study temporal patterns associated with online content and how the content's popularity grows and fades over time. The attention that content receives on the Web varies depending on many factors and occurs on very different time scales and at different resolutions. In order to uncover the temporal dynamics of online content we formulate a time series clustering problem using a similarity metric that is invariant to scaling and shifting. We develop the K-Spectral Centroid (K-SC) clustering algorithm that effectively finds cluster centroids with our similarity measure. By applying an adaptive wavelet-based incremental approach to clustering, we scale K-SC to large data sets. 
We demonstrate our approach on two massive datasets: a set of 580 million Tweets, and a set of 170 million blog posts and news media articles. We find that K-SC outperforms the K-means clustering algorithm in finding distinct shapes of time series. Our analysis shows that there are six main temporal shapes of attention of online content. We also present a simple model that reliably predicts the shape of attention by using information about only a small number of participants. Our analyses offer insight into common temporal patterns of the content on theWeb and broaden the understanding of the dynamics of human attention." ] }
1406.6529
2282629128
We analyze general trends and patterns in time series that characterize the dynamics of collective attention to social media services and Web-based businesses. Our study is based on search frequency data available from Google Trends and considers 175 different services. For each service, we collect data from 45 different countries as well as global averages. This way, we obtain more than 8,000 time series, which we analyze using diffusion models from the economic sciences. We find that these models accurately characterize the empirical data, and our analysis reveals that collective attention to social media grows and subsides in a highly regular and predictable manner. Regularities persist across regions, cultures, and topics and thus hint at general mechanisms that govern the adoption of Web-based services. We discuss several cases in detail to highlight interesting findings. Our methods are of economic interest as they may inform investment decisions and can help in assessing at what stage of its general life-cycle a Web service is.
The work by Cannarella and Spechler from Princeton is noteworthy for triggering a brief but fierce media frenzy. Just as in the work presented here, the results in @cite_17 were obtained from analyzing Google Trends time series. Differing from our approach, Cannarella and Spechler considered epidemic models to analyze search frequency time series that indicate interest in particular social networking services. While this methodology had earlier been applied to analyze the temporal evolution of interest in Internet memes @cite_37 , Cannarella and Spechler caused a controversy because they used their models to predict that one of today's dominant services would lose about 80% of its user base within only a few years, a forecast that prompted a pointed reply from Develin. Interestingly, our "qualitative" results in Fig. seem to corroborate Cannarella's and Spechler's predictions, and we note that they were obtained from the same data but different models. In any case, we certainly agree with Develin's objection that predictions based on search frequency data have to be taken with a grain of salt. Yet, we disagree with his argument that the social media related search interests of millions of Web users are not indicative of user engagement (see again our earlier discussion) and note the curious absence of any direct engagement data in his reply.
{ "cite_N": [ "@cite_37", "@cite_17" ], "mid": [ "2963759277", "2142076420" ], "abstract": [ "The evolution of social media popularity exhibits rich temporality, i.e., popularities change over time at various levels of temporal granularity. This is influenced by temporal variations of public attentions or user activities. For example, popularity patterns of street snap on Flickr are observed to depict distinctive fashion styles at specific time scales, such as season-based periodic fluctuations for Trench Coat or one-off peak in days for Evening Dress. However, this fact is often overlooked by existing research of popularity modeling. We present the first study to incorporate multiple time-scale dynamics into predicting online popularity. We propose a novel computational framework in the paper, named Multi-scale Temporalization, for estimating popularity based on multi-scale decomposition and structural reconstruction in a tensor space of user, post, and time by joint low-rank constraints. By considering the noise caused by context inconsistency, we design a data rearrangement step based on context aggregation as preprocessing to enhance contextual relevance of neighboring data in the tensor space. As a result, our approach can leverage multiple levels of temporal characteristics and reduce the noise of data decomposition to improve modeling effectiveness. We evaluate our approach on two large-scale Flickr image datasets with over 1.8 million photos in total, for the task of popularity prediction. The results show that our approach significantly outperforms state-of-the-art popularity prediction techniques, with a relative improvement of 10.9 -47.5 in terms of prediction accuracy.", "The queries people issue to a search engine and the results clicked following a query change over time. For example, after the earthquake in Japan in March 2011, the query japan spiked in popularity and people issuing the query were more likely to click government-related results than they would prior to the earthquake. We explore the modeling and prediction of such temporal patterns in Web search behavior. We develop a temporal modeling framework adapted from physics and signal processing and harness it to predict temporal patterns in search behavior using smoothing, trends, periodicities, and surprises. Using current and past behavioral data, we develop a learning procedure that can be used to construct models of users' Web search activities. We also develop a novel methodology that learns to select the best prediction model from a family of predictive models for a given query or a class of queries. Experimental results indicate that the predictive models significantly outperform baseline models that weight historical evidence the same for all queries. We present two applications where new methods introduced for the temporal modeling of user behavior significantly improve upon the state of the art. Finally, we discuss opportunities for using models of temporal dynamics to enhance other areas of Web search and information retrieval." ] }
1406.6529
2282629128
We analyze general trends and patterns in time series that characterize the dynamics of collective attention to social media services and Web-based businesses. Our study is based on search frequency data available from Google Trends and considers 175 different services. For each service, we collect data from 45 different countries as well as global averages. This way, we obtain more than 8,000 time series, which we analyze using diffusion models from the economic sciences. We find that these models accurately characterize the empirical data, and our analysis reveals that collective attention to social media grows and subsides in a highly regular and predictable manner. Regularities persist across regions, cultures, and topics and thus hint at general mechanisms that govern the adoption of Web-based services. We discuss several cases in detail to highlight interesting findings. Our methods are of economic interest as they may inform investment decisions and can help in assessing at what stage of its general life-cycle a Web service is.
However, data that directly reflect engagement played an important role in the analysis Ribeiro performed at CMU @cite_19 . He considered statistics available from a commercial Web analytics provider whose traffic data are gathered using a browser toolbar, a plugin that volunteers install in their browsers so that the provider can track which Web pages they access.
{ "cite_N": [ "@cite_19" ], "mid": [ "1998451808" ], "abstract": [ "The collection of huge amount of tracking data made possible by the widespread use of GPS devices, enabled the analysis of such data for several applications domains, ranging from traffic management to advertisement and social studies. However, the raw positioning data, as it is detected by GPS devices, lacks of semantic information since this data does not natively provide any additional contextual information like the places that people visited or the activities performed. Traditionally, this information is collected by hand filled questionnaire where a limited number of users are asked to annotate their tracks with the activities they have done. With the purpose of getting large amount of semantically rich trajectories, we propose an algorithm for automatically annotating raw trajectories with the activities performed by the users. To do this, we analyse the stops points trying to infer the Point Of Interest (POI) the user has visited. Based on the category of the POI and a probability measure based on the gravity law, we infer the activity performed. We experimented and evaluated the method in a real case study of car trajectories, manually annotated by users with their activities. Experimental results are encouraging and will drive our future works." ] }
1406.6529
2282629128
We analyze general trends and patterns in time series that characterize the dynamics of collective attention to social media services and Web-based businesses. Our study is based on search frequency data available from Google Trends and considers 175 different services. For each service, we collect data from 45 different countries as well as global averages. This way, we obtain more than 8,000 time series, which we analyze using diffusion models from the economic sciences. We find that these models accurately characterize the empirical data, and our analysis reveals that collective attention to social media grows and subsides in a highly regular and predictable manner. Regularities persist across regions, cultures, and topics and thus hint at general mechanisms that govern the adoption of Web-based services. We discuss several cases in detail to highlight interesting findings. Our methods are of economic interest as they may inform investment decisions and can help in assessing at what stage of its general life-cycle a Web service is.
Given this discussion, the approach and results presented here mark a middle ground. On the one hand, we consider simple diffusion models rather than (intricate) models for the epidemic spread of novelties. On the other hand, the statistical basis for our analysis far exceeds that of @cite_17 @cite_19 : neither Cannarella and Spechler nor Ribeiro consider country-specific data, and neither of them considers as large a number of different services as we do in this paper. Moreover, we see the main contribution of this paper not in the predictions in Fig. but rather in the empirical observation that collective attention to social media shows highly regular patterns of growth and decline regardless of the region of origin or cultural background of crowds of Web users.
{ "cite_N": [ "@cite_19", "@cite_17" ], "mid": [ "2056284729", "2086627840" ], "abstract": [ "The website wheresgeorge.com invites its users to enter the serial numbers of their US dollar bills and track them across America and beyond. Why? “For fun and because it had not been done yet”, they say. But the dataset accumulated since December 1998 has provided the ideal raw material to test the mathematical laws underlying human travel, and that has important implications for the epidemiology of infectious diseases. Analysis of the trajectories of over half a million dollar bills shows that human dispersal is described by a ‘two-parameter continuous-time random walk’ model: our travel habits conform to a type of random proliferation known as ‘superdiffusion’. And with that much established, it should soon be possible to develop a new class of models to account for the spread of human disease. The dynamic spatial redistribution of individuals is a key driving force of various spatiotemporal phenomena on geographical scales. It can synchronize populations of interacting species, stabilize them, and diversify gene pools1,2,3. Human travel, for example, is responsible for the geographical spread of human infectious disease4,5,6,7,8,9. In the light of increasing international trade, intensified human mobility and the imminent threat of an influenza A epidemic10, the knowledge of dynamical and statistical properties of human travel is of fundamental importance. Despite its crucial role, a quantitative assessment of these properties on geographical scales remains elusive, and the assumption that humans disperse diffusively still prevails in models. Here we report on a solid and quantitative assessment of human travelling statistics by analysing the circulation of bank notes in the United States. Using a comprehensive data set of over a million individual displacements, we find that dispersal is anomalous in two ways. First, the distribution of travelling distances decays as a power law, indicating that trajectories of bank notes are reminiscent of scale-free random walks known as Levy flights. Second, the probability of remaining in a small, spatially confined region for a time T is dominated by algebraically long tails that attenuate the superdiffusive spread. We show that human travelling behaviour can be described mathematically on many spatiotemporal scales by a two-parameter continuous-time random walk model to a surprising accuracy, and conclude that human travel on geographical scales is an ambivalent and effectively superdiffusive process.", "The population dynamics underlying the diffusion of ideas hold many qualitative similarities to those involved in the spread of infections. In spite of much suggestive evidence this analogy is hardly ever quantified in useful ways. The standard benefit of modeling epidemics is the ability to estimate quantitatively population average parameters, such as interpersonal contact rates, incubation times, duration of infectious periods, etc. In most cases such quantities generalize naturally to the spread of ideas and provide a simple means of quantifying sociological and behavioral patterns. Here we apply several paradigmatic models of epidemics to empirical data on the advent and spread of Feynman diagrams through the theoretical physics communities of the USA, Japan, and the USSR in the period immediately after World War II. This test case has the advantage of having been studied historically in great detail, which allows validation of our results. 
We estimate the effectiveness of adoption of the idea in the three communities and find values for parameters reflecting both intentional social organization and long lifetimes for the idea. These features are probably general characteristics of the spread of ideas, but not of common epidemics." ] }
1406.6973
1562703808
Statements about entities occur everywhere, from newspapers and web pages to structured databases. Correlating references to entities across systems that use different identifiers or names for them is a widespread problem. In this paper, we show how shared knowledge between systems can be used to solve this problem. We present "reference by description", a formal model for resolving references. We provide some results on the conditions under which a randomly chosen entity in one system can, with high probability, be mapped to the same entity in a different system.
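As a toy illustration of the idea of resolving a reference by description (our own hypothetical example, not the paper's formal model), the sketch below looks for the unique entity in a second system whose known attributes are consistent with a shared description; all entities and attributes are invented.

```python
# Toy sketch of "reference by description": find the entity in the target system
# whose attributes are consistent with the description shared by both systems.
# All entities and attributes are invented for illustration.

target_system = {
    "e1": {"type": "person", "birth_year": 1952, "country": "FR"},
    "e2": {"type": "person", "birth_year": 1952, "country": "US"},
    "e3": {"type": "city",   "country": "US", "population": 850_000},
}

def resolve(description, entities):
    """Return the id of the unique entity matching all described attributes, else None."""
    matches = [
        eid for eid, attrs in entities.items()
        if all(attrs.get(k) == v for k, v in description.items())
    ]
    return matches[0] if len(matches) == 1 else None   # ambiguous or empty -> unresolved

print(resolve({"type": "person", "country": "US"}, target_system))     # unique -> 'e2'
print(resolve({"type": "person", "birth_year": 1952}, target_system))  # ambiguous -> None
```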
The research by @cite_1 , @cite_6 , and @cite_0 is archetypal of the approaches that have been followed for solving this class of problems. Because of the simplicity of the model, much of the attention has focussed on developing algorithms capable of correctly performing the matching between attributes. Further, most of the work has concentrated on overcoming the lexical heterogeneity of the representation of string values and on differences introduced by data acquisition and entry errors.
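A minimal sketch of the kind of string-level matching such approaches rely on is given below, using only the Python standard library; the example records and the similarity threshold are illustrative assumptions rather than a technique from the cited works.

```python
# Sketch: matching attribute values despite lexical heterogeneity and entry errors,
# using a normalized similarity score from the standard library.
from difflib import SequenceMatcher

def normalize(value):
    """Lowercase, drop periods, and collapse whitespace."""
    return " ".join(value.lower().replace(".", " ").split())

def similar(a, b, threshold=0.7):
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

records_a = ["Jon A. Smith", "ACME Corp.", "123 Main Street"]
records_b = ["John A Smith", "Acme Corporation", "123 Main St"]

# Prints the pairs that plausibly refer to the same entity despite spelling differences.
for a in records_a:
    for b in records_b:
        if similar(a, b):
            print(f"match: {a!r} <-> {b!r}")
```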
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_6" ], "mid": [ "2964150246", "2338325072", "2951359136" ], "abstract": [ "Many tasks in natural language processing, ranging from machine translation to question answering, can be reduced to the problem of matching two sentences or more generally two short texts. We propose a new approach to the problem, called Deep Match Tree (DEEPMATCHtree), under a general setting. The approach consists of two components, 1) a mining algorithm to discover patterns for matching two short-texts, defined in the product space of dependency trees, and 2) a deep neural network for matching short texts using the mined patterns, as well as a learning algorithm to build the network having a sparse structure. We test our algorithm on the problem of matching a tweet and a response in social media, a hard matching problem proposed in [, 2013], and show, that DEEP MATCHtree can outperform a number of competitor models including one without using dependency trees and one based on word-embedding, all with large margins.", "Semantic matching, which aims to determine the matching degree between two texts, is a fundamental problem for many NLP applications. Recently, deep learning approach has been applied to this problem and significant improvements have been achieved. In this paper, we propose to view the generation of the global interaction between two texts as a recursive process: i.e. the interaction of two texts at each position is a composition of the interactions between their prefixes as well as the word level interaction at the current position. Based on this idea, we propose a novel deep architecture, namely Match-SRNN, to model the recursive matching structure. Firstly, a tensor is constructed to capture the word level interactions. Then a spatial RNN is applied to integrate the local interactions recursively, with importance determined by four types of gates. Finally, the matching score is calculated based on the global interaction. We show that, after degenerated to the exact matching scenario, Match-SRNN can approximate the dynamic programming process of longest common subsequence. Thus, there exists a clear interpretation for Match-SRNN. Our experiments on two semantic matching tasks showed the effectiveness of Match-SRNN, and its ability of visualizing the learned matching structure.", "Semantic matching is of central importance to many natural language tasks bordes2014semantic,RetrievalQA . A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge on language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study on a variety of matching tasks demonstrates the efficacy of the proposed model on a variety of matching tasks and its superiority to competitor models." ] }
1708.04801
2746114819
Stochastic gradient descent (SGD) is a popular stochastic optimization method in machine learning. Traditional parallel SGD algorithms, e.g., SimuParallel SGD, often require all nodes to have the same performance or to consume equal quantities of data. However, these requirements are difficult to satisfy when the parallel SGD algorithms run in a heterogeneous computing environment; low-performance nodes will exert a negative influence on the final result. In this paper, we propose an algorithm called weighted parallel SGD (WP-SGD). WP-SGD combines weighted model parameters from different nodes in the system to produce the final output. WP-SGD makes use of the reduction in standard deviation to compensate for the loss from the inconsistency in performance of nodes in the cluster, which means that WP-SGD does not require that all nodes consume equal quantities of data. We also analyze the theoretical feasibility of running two other parallel SGD algorithms combined with WP-SGD in a heterogeneous environment. The experimental results show that WP-SGD significantly outperforms the traditional parallel SGD algorithms on distributed training systems with an unbalanced workload.
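The sketch below illustrates the general idea of running SGD locally on nodes that consume unequal amounts of data and then combining the node models with weights. Weighting by the amount of data consumed is an illustrative simplification chosen here for the example; it is not the exact weighting scheme derived in the paper.

```python
# Sketch: per-node SGD on unequal data shares, followed by a weighted combination of
# the node models. The weighting-by-data-consumed rule is an illustrative assumption.
import numpy as np

def local_sgd(X, y, lr=0.01, epochs=5):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in np.random.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]        # squared-loss gradient for one sample
            w -= lr * grad
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(3000, 3))
y = X @ w_true + rng.normal(scale=0.1, size=3000)

# Heterogeneous cluster: nodes consume unequal shares of the data.
shares = [1500, 1000, 500]
splits = np.split(np.arange(3000), np.cumsum(shares)[:-1])
models = [local_sgd(X[idx], y[idx]) for idx in splits]

weights = np.array(shares, dtype=float) / sum(shares)
w_combined = sum(wgt * w for wgt, w in zip(weights, models))
print("combined model:", np.round(w_combined, 3))
```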
Delay SGD algorithms first appeared in the work of @cite_6 . In a delay SGD algorithm, the current model parameters are updated with the gradient of model parameters that are @math iterations old ( @math is a random number with @math , in which @math is a constant), so the iteration step applies a stale gradient to the current iterate (a hedged sketch of this update is given below). In the Hogwild! algorithm @cite_0 , under some restrictions, parallel SGD can be implemented in a lock-free style, which is robust to noise @cite_15 . However, these methods lead to the consequence that the convergence speed is decreased by o( @math ). To ensure that the delay stays bounded, communication overhead is unavoidable, which hurts performance. The trade-off in delay SGD is therefore between delay, degree of parallelism, and system efficiency.
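Since the displayed update rule is missing from the extracted text, the following sketch spells out the standard delayed-gradient update, in which the gradient applied at step t is computed at parameters that are tau steps old; the toy least-squares objective, the delay distribution, and the step size are illustrative assumptions, not the cited algorithms themselves.

```python
# Sketch: delay SGD on a toy least-squares problem. The gradient used at step t is
# evaluated at parameters that are tau steps old, with tau drawn at random up to a
# bound tau_max -- i.e. roughly w_{t+1} = w_t - lr * grad(w_{t - tau}).
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(5000, 3))
y = X @ w_true + rng.normal(scale=0.1, size=5000)

tau_max, lr = 8, 0.02
w = np.zeros(3)
history = [w.copy()]                                     # buffer of past iterates

for t in range(5000):
    tau = rng.integers(0, min(tau_max, len(history)))    # random staleness
    w_stale = history[-1 - tau]                          # parameters tau steps old
    i = rng.integers(len(y))
    grad = (X[i] @ w_stale - y[i]) * X[i]                # gradient at the stale iterate
    w = w - lr * grad                                    # apply it to the current iterate
    history.append(w.copy())
    history = history[-(tau_max + 1):]                   # keep only what can still be used

print("estimate:", np.round(w, 3), "target:", w_true)
```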
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_6" ], "mid": [ "2138243089", "2951781666", "2792630208" ], "abstract": [ "Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without any locking. We present an update scheme called HOGWILD! which allows processors access to shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then HOGWILD! achieves a nearly optimal rate of convergence. We demonstrate experimentally that HOGWILD! outperforms alternative schemes that use locking by an order of magnitude.", "Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without any locking. We present an update scheme called HOGWILD! which allows processors access to shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then HOGWILD! achieves a nearly optimal rate of convergence. We demonstrate experimentally that HOGWILD! outperforms alternative schemes that use locking by an order of magnitude.", "Distributed Stochastic Gradient Descent (SGD) when run in a synchronous manner, suffers from delays in waiting for the slowest learners (stragglers). Asynchronous methods can alleviate stragglers, but cause gradient staleness that can adversely affect convergence. In this work we present a novel theoretical characterization of the speed-up offered by asynchronous methods by analyzing the trade-off between the error in the trained model and the actual training runtime (wallclock time). The novelty in our work is that our runtime analysis considers random straggler delays, which helps us design and compare distributed SGD algorithms that strike a balance between stragglers and staleness. We also present a new convergence analysis of asynchronous SGD variants without bounded or exponential delay assumptions, and a novel learning rate schedule to compensate for gradient staleness." ] }
1708.04801
2746114819
Stochastic gradient descent (SGD) is a popular stochastic optimization method in machine learning. Traditional parallel SGD algorithms, e.g., SimuParallel SGD, often require all nodes to have the same performance or to consume equal quantities of data. However, these requirements are difficult to satisfy when the parallel SGD algorithms run in a heterogeneous computing environment; low-performance nodes will exert a negative influence on the final result. In this paper, we propose an algorithm called weighted parallel SGD (WP-SGD). WP-SGD combines weighted model parameters from different nodes in the system to produce the final output. WP-SGD makes use of the reduction in standard deviation to compensate for the loss from the inconsistency in performance of nodes in the cluster, which means that WP-SGD does not require that all nodes consume equal quantities of data. We also analyze the theoretical feasibility of running two other parallel SGD algorithms combined with WP-SGD in a heterogeneous environment. The experimental results show that WP-SGD significantly outperforms the traditional parallel SGD algorithms on distributed training systems with an unbalanced workload.
Along with parallel SGD algorithms, many other kinds of numerical optimization algorithms have been proposed, such as PASSCoDe @cite_19 and CoCoA @cite_7 . They share many new features, such as fast convergence at the end of the training phase. Most of them are formulated from the dual coordinate descent (ascent) perspective and hence can only be used for problems whose dual function can be computed. Moreover, traditional SGD still plays an important role in those algorithms.
{ "cite_N": [ "@cite_19", "@cite_7" ], "mid": [ "1939652453", "2952594493" ], "abstract": [ "Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine learning optimization problems such as SVM, due to their strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoy strong theoretical guarantees that are comparable or better than SGD. This analysis justifies the effectiveness of SDCA for practical applications.", "Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine learning optimization problems such as SVM, due to their strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoy strong theoretical guarantees that are comparable or better than SGD. This analysis justifies the effectiveness of SDCA for practical applications." ] }
1708.04866
2766615649
Cybercrime markets support the development and diffusion of new attack technologies, vulnerability exploits, and malware. Whereas the revenue streams of cyber attackers have been studied multiple times in the literature, no quantitative account currently exists of the economics of attack acquisition and deployment. Yet, this understanding is critical to characterize the production of (traded) exploits, the economy that drives it, and its effects on the overall attack scenario. In this paper we provide an empirical investigation of the economics of vulnerability exploitation and the effects of market factors on the likelihood of exploitation. Our data are collected first-hand from a prominent Russian cybercrime market where the most active attack tools reported by the security industry are traded. Our findings reveal that exploits in the underground are priced similarly to or above vulnerabilities in legitimate bug-hunting programs, and that the refresh cycle of exploits is slower than is often assumed. On the other hand, cybercriminals are becoming faster at introducing selected vulnerabilities, and the market is in clear expansion in terms of players, traded exploits, and exploit pricing. We then evaluate the effects of these market variables on the likelihood of attack realization, and find strong evidence of the correlation between market activity and exploit deployment. We discuss implications for vulnerability metrics, economics, and exploit measurement.
The economics and development of underground markets were perhaps first tackled by @cite_75 . On the other hand, @cite_65 showed that cybercrime economics are distinctively problematic in that the lack of effective rule-enforcement mechanisms may hinder fair trading and, as a consequence, the existence of the market itself. A few studies analyzed the evolution of cybercrime markets @cite_35 @cite_51 @cite_16 @cite_43 @cite_0 @cite_42 , and provided estimates of malware development @cite_22 and attack likelihood @cite_32 , but no quantitative account of economic factors such as exploit pricing and adoption is currently reported in the literature @cite_36 @cite_22 . In this paper we provide the first empirical quantification of these economic aspects by analyzing data collected first-hand from a prominent cybercrime market.
{ "cite_N": [ "@cite_35", "@cite_22", "@cite_36", "@cite_42", "@cite_65", "@cite_32", "@cite_0", "@cite_43", "@cite_51", "@cite_16", "@cite_75" ], "mid": [ "2153532176", "1972791208", "2110271878", "2132280055", "1988015967", "1537415400", "72733388", "1971105953", "2042830765", "2004621612", "1412796528" ], "abstract": [ "Over the last decade, the nature of cybercrime has transformed from naive vandalism to profit-driven, leading to the emergence of a global underground economy. A noticeable trend which has surfaced in this economy is the repeated use of forums to operate online stolen data markets. Using interaction data from three prominent carding forums: Shadowcrew, Cardersmarket and Darkmarket, this study sets out to understand why forums are repeatedly chosen to operate online stolen data markets despite numerous successful infiltrations by law enforcement in the past. Drawing on theories from criminology, social psychology, economics and network science, this study has identified four fundamental socio-economic mechanisms offered by carding forums: (1) formal control and coordination; (2) social networking; (3) identity uncertainty mitigation; (4) quality uncertainty mitigation. Together, they give rise to a sophisticated underground market regulatory system that facilitates underground trading over the Internet and thus drives the expansion of the underground economy.", "Cybercrime activities are supported by infrastructures and services originating from an underground economy. The current understanding of this phenomenon is that the cybercrime economy ought to be fraught with information asymmetry and adverse selection problems. They should make the effects that we observe every day impossible to sustain. In this paper, we show that the market structure and design used by cyber criminals have evolved toward a market design that is similar to legitimate, thriving, online forum markets such as eBay. We illustrate this evolution by comparing the market regulatory mechanisms of two underground forum markets: 1) a failed market for credit cards and other illegal goods and 2) another, extremely active marketplace for vulnerabilities, exploits, and cyber attacks in general. The comparison shows that cybercrime markets evolved from unruly, scam for scammers market mechanisms to mature, regulated mechanisms that greatly favors trade efficiency.", "The rise of cybercrime in the last decade is an economic case of individuals responding to monetary and psychological incentives. Two main drivers for cybercrime can be identi fied: the potential gains from cyberattacks are increasing with the growth of importance of the Internet, and malefactors' expected costs (e.g., the penalties and the likelihood of being apprehended and prosecuted) are frequently lower compared with traditional crimes. In short, computer-mediated crimes are more convenient, and pro table, and less expensive and risky than crimes not mediated by the Internet. The increase in cybercriminal activities, coupled with ineff ective legislation and ineffective law enforcement pose critical challenges for maintaining the trust and security of ourcomputer infrastructures.Modern computer attacks encompass a broad spectrum of economic activity, where various malfeasants specialize in developing speci c goods (exploits, botnets, mailers) and services (distributing malware, monetizing stolen credentials, providing web hosting, etc.). 
A typical Internet fraud involves the actions of many of these individuals, such as malware writers, botnet herders, spammers, data brokers, and money launderers.Assessing the relationships among various malfeasants is an essential piece of information for discussing economic, technical, and legal proposals to address cybercrime. This paper presents a framework for understanding the interactions between these individuals and how they operate. We follow three steps.First, we present the general architecture of common computer attacks, and discuss the flow of goods and services that supports the underground economy. We discuss the general flow of resources between criminal groups and victims, and the interactions between diff erent specialized cybercriminals.Second, we describe the need to estimate the social costs of cybercrime and the profi ts of cybercriminals in order to identify optimal levels of protection. One of the main problems in quantifying the precise impact of cybercrime is that computer attacks are not always detected, or reported. Therefore we propose the need to develop a more systematic and transparent way of reporting computer breaches and their eff ects.Finally, we propose some possible countermeasures against criminal activities. In particular, we analyze the role private and public protection, and the incentives of multiple stake holders.", "This paper studies an active underground economy which specializes in the commoditization of activities such as credit car d fraud, identity theft, spamming, phishing, online credential the ft, and the sale of compromised hosts. Using a seven month trace of logs collected from an active underground market operating on public Internet chat networks, we measure how the shift from “hacking for fun” to “hacking for profit” has given birth to a societ al subs trate mature enough to steal wealth into the millions of dollars in less than one year.", "Cybercrime's tentacles reach deeply into the Internet. A complete, underground criminal economy has developed that lets malicious actors steal money through the Web. The authors detail this enterprise, showing how information, expertise, and money flow through it. Understanding the underground economy's structure is critical for fighting it.", "This paper reports statistical analyses performed on simulated data from a stochastic multi-agent model of speculative behaviour in a financial market. The price dynamics resulting from this artificial market process exhibits the same type of scaling laws as do empirical data from stock markets and foreign exchange markets: (i) one observes scaling in the probability distribution of relative price changes with a Pareto exponent around 2.6, (ii) volatility shows significant long-range correlation with a self-similarity parameter H around 0.85. This happens although we assume that news about the intrinsic or fundamental value of the asset follows a white noise process and, hence, incorporation of news about fundamental factors is insufficient to explain either of the characteristics (i) or (ii). As a consequence, in our model, the main stylised facts of financial data originate from the working of the market itself and one need not resort to scaling in unobservable extraneous signals as an explanation of the source of scaling in financial prices. 
The emergence of power laws can be explained by the existence of a critical state which is repeatedly approached in the course of the system's development.", "Today, several actors within the Internet's burgeoning underground economy specialize in providing services to like-minded criminals. At the same time, gray and white markets exist for services on the Internet providing reasonably similar products. In this paper we explore a hypothetical arbitrage between these two markets by purchasing \"Human Intelligence\" on Amazon's Mechanical Turk service, determining the vulnerability of and cost to compromise the computers being used by the humans to provide this service, and estimating the underground value of the computers which are vulnerable to exploitation. We show that it is economically feasible for an attacker to purchase access to high value hosts via Mechanical Turk, compromise the subset with unpatched, vulnerable browser plugins, and sell access to these hosts via Pay-Per-Install programs for a tidy profit. We also present supplementary statistics gathered regarding Mechanical Turk workers' browser security, antivirus usage, and willingness to run arbitrary programs in exchange for a small monetary reward.", "We live in a computerized and networked society where many of our actions leave a digital trace and affect other people’s actions. This has lead to the emergence of a new data-driven research field: mathematical methods of computer science, statistical physics and sociometry provide insights on a wide range of disciplines ranging from social science to human mobility. A recent important discovery is that search engine traffic (i.e., the number of requests submitted by users to search engines on the www) can be used to track and, in some cases, to anticipate the dynamics of social phenomena. Successful examples include unemployment levels, car and home sales, and epidemics spreading. Few recent works applied this approach to stock prices and market sentiment. However, it remains unclear if trends in financial markets can be anticipated by the collective wisdom of on-line users on the web. Here we show that daily trading volumes of stocks traded in NASDAQ-100 are correlated with daily volumes of queries related to the same stocks. In particular, query volumes anticipate in many cases peaks of trading by one day or more. Our analysis is carried out on a unique dataset of queries, submitted to an important web search engine, which enable us to investigate also the user behavior. We show that the query volume dynamics emerges from the collective but seemingly uncoordinated activity of many users. These findings contribute to the debate on the identification of early warnings of financial systemic risk, based on the activity of users of the www.", "With the growing sophistication and use of information technology, the past decade has witnessed a major growth in financial cybercrime. This paper focuses specifically on credit card fraud and identity theft, examining the globalisation of these activities within a ‘digital ecosystem’ conceptual framework. The relevance of concepts and analytical tools typically used to study legitimate businesses, such as value chains, dynamic capabilities and business models, is explored and tested for their relevance in understanding the scale and nature of illegal activities which are dependant upon innovation and the collective activities of global participants. 
It is argued that developing a better understanding of how such illegal activities are organised and operate will assist policy makers, law enforcement agencies and security firms to identify trends and concentrate limited resources in a most effective manner.", "As consumers are increasingly using the Internet to manage their finances, there has been a concomitant increase in the risk of theft and fraud by cybercriminals. Hackers who acquire sensitive consumer data utilise information on their own, or sell the information in online forums for a significant profit. Few have considered the organisational composition of the participants engaged in the sale of stolen data, including the presence of managerial oversight, division of labour, coordination of roles and purposive associations between buyers, sellers and forum operators. Thus, this qualitative study will apply Best and Luckenbill's framework of social organisation to a sample of threads from publicly accessible web forums where individuals buy and sell stolen financial information. The implications of this study for criminologists, law enforcement, the intelligence community and information security researchers will be discussed in depth.", "This chapter documents what we believe to be the first systematic study of the costs of cybercrime. The initial workshop paper was prepared in response to a request from the UK Ministry of Defence following scepticism that previous studies had hyped the problem. For each of the main categories of cybercrime we set out what is and is not known of the direct costs, indirect costs and defence costs – both to the UK and to the world as a whole. We distinguish carefully between traditional crimes that are now “cyber” because they are conducted online (such as tax and welfare fraud); transitional crimes whose modus operandi has changed substantially as a result of the move online (such as credit card fraud); new crimes that owe their existence to the Internet; and what we might call platform crimes such as the provision of botnets which facilitate other crimes rather than being used to extract money from victims directly. As far as direct costs are concerned, we find that traditional offences such as tax and welfare fraud cost the typical citizen in the low hundreds of pounds euros dollars a year; transitional frauds cost a few pounds euros dollars; while the new computer crimes cost in the tens of pence cents. However, the indirect costs and defence costs are much higher for transitional and new crimes. For the former they may be roughly comparable to what the criminals earn, while for the latter they may be an order of magnitude more. As a striking example, the botnet behind a third of the spam sent in 2010 earned its owners around $2.7 million, while worldwide expenditures on spam prevention probably exceeded a billion dollars. We are extremely inefficient at fighting cybercrime; or to put it another way, cyber-crooks are like terrorists or met al thieves in that their activities impose disproportionate costs on society. Some of the reasons for this are well-known: cybercrimes are global and have strong externalities, while traditional crimes such as burglary and car theft are local, and the associated equilibria have emerged after many years of optimisation. As for the more direct question of what should be done, our figures suggest that we should spend less in anticipation of cybercrime (on antivirus, firewalls, etc.) 
and more in response – that is, on the prosaic business of hunting down cyber-criminals and throwing them in jail." ] }
1708.04866
2766615649
Cybercrime markets support the development and diffusion of new attack technologies, vulnerability exploits, and malware. Whereas the revenue streams of cyber attackers have been studied multiple times in the literature, no quantitative account currently exists of the economics of attack acquisition and deployment. Yet this understanding is critical to characterize the production of (traded) exploits, the economy that drives it, and its effects on the overall attack scenario. In this paper we provide an empirical investigation of the economics of vulnerability exploitation and the effects of market factors on the likelihood of exploit. Our data is collected first-hand from a prominent Russian cybercrime market where the most active attack tools reported by the security industry are traded. Our findings reveal that exploits in the underground are priced similarly to or above vulnerabilities in legitimate bug-hunting programs, and that the refresh cycle of exploits is slower than is currently assumed. On the other hand, cybercriminals are becoming faster at introducing selected vulnerabilities, and the market is in clear expansion in terms of players, traded exploits, and exploit pricing. We then evaluate the effects of these market variables on the likelihood of attack realization, and find strong evidence of the correlation between market activity and exploit deployment. We discuss implications for vulnerability metrics, economics, and exploit measurement.
Recent work studied the services and monetization schemes of cyber criminals, e.g., laundering money through the acquisition of expensive goods @cite_50 or renting out infected systems @cite_16 @cite_74 . The provision of the technological means by which these attacks are perpetrated remains, however, relatively unexplored @cite_36 , with the exception of a few technical insights from industrial reports @cite_38 @cite_57 . Similarly, a few studies estimated the economic effects of cybercrime activities on the real-world economy, for example by analyzing the monetization of stolen credit cards and banking information @cite_72 , the realization of profits from spam campaigns @cite_6 , the registration of fake online accounts @cite_5 , and the provision of booter services for distributed denial-of-service attacks @cite_46 . However, a characterization of the costs of the technology (as opposed to the earnings it generates), and of the relation of trade factors to the realization of an attack, is still missing. This work provides a first insight into the value of vulnerability exploits in the underground markets and into the effects of market factors on the presence of attacks in the wild.
{ "cite_N": [ "@cite_38", "@cite_36", "@cite_6", "@cite_57", "@cite_72", "@cite_74", "@cite_50", "@cite_5", "@cite_46", "@cite_16" ], "mid": [ "2110271878", "1412796528", "1972791208", "2153532176", "2004621612", "2132280055", "2042830765", "2890718808", "2110522107", "2783095957" ], "abstract": [ "The rise of cybercrime in the last decade is an economic case of individuals responding to monetary and psychological incentives. Two main drivers for cybercrime can be identi fied: the potential gains from cyberattacks are increasing with the growth of importance of the Internet, and malefactors' expected costs (e.g., the penalties and the likelihood of being apprehended and prosecuted) are frequently lower compared with traditional crimes. In short, computer-mediated crimes are more convenient, and pro table, and less expensive and risky than crimes not mediated by the Internet. The increase in cybercriminal activities, coupled with ineff ective legislation and ineffective law enforcement pose critical challenges for maintaining the trust and security of ourcomputer infrastructures.Modern computer attacks encompass a broad spectrum of economic activity, where various malfeasants specialize in developing speci c goods (exploits, botnets, mailers) and services (distributing malware, monetizing stolen credentials, providing web hosting, etc.). A typical Internet fraud involves the actions of many of these individuals, such as malware writers, botnet herders, spammers, data brokers, and money launderers.Assessing the relationships among various malfeasants is an essential piece of information for discussing economic, technical, and legal proposals to address cybercrime. This paper presents a framework for understanding the interactions between these individuals and how they operate. We follow three steps.First, we present the general architecture of common computer attacks, and discuss the flow of goods and services that supports the underground economy. We discuss the general flow of resources between criminal groups and victims, and the interactions between diff erent specialized cybercriminals.Second, we describe the need to estimate the social costs of cybercrime and the profi ts of cybercriminals in order to identify optimal levels of protection. One of the main problems in quantifying the precise impact of cybercrime is that computer attacks are not always detected, or reported. Therefore we propose the need to develop a more systematic and transparent way of reporting computer breaches and their eff ects.Finally, we propose some possible countermeasures against criminal activities. In particular, we analyze the role private and public protection, and the incentives of multiple stake holders.", "This chapter documents what we believe to be the first systematic study of the costs of cybercrime. The initial workshop paper was prepared in response to a request from the UK Ministry of Defence following scepticism that previous studies had hyped the problem. For each of the main categories of cybercrime we set out what is and is not known of the direct costs, indirect costs and defence costs – both to the UK and to the world as a whole. 
We distinguish carefully between traditional crimes that are now “cyber” because they are conducted online (such as tax and welfare fraud); transitional crimes whose modus operandi has changed substantially as a result of the move online (such as credit card fraud); new crimes that owe their existence to the Internet; and what we might call platform crimes such as the provision of botnets which facilitate other crimes rather than being used to extract money from victims directly. As far as direct costs are concerned, we find that traditional offences such as tax and welfare fraud cost the typical citizen in the low hundreds of pounds euros dollars a year; transitional frauds cost a few pounds euros dollars; while the new computer crimes cost in the tens of pence cents. However, the indirect costs and defence costs are much higher for transitional and new crimes. For the former they may be roughly comparable to what the criminals earn, while for the latter they may be an order of magnitude more. As a striking example, the botnet behind a third of the spam sent in 2010 earned its owners around $2.7 million, while worldwide expenditures on spam prevention probably exceeded a billion dollars. We are extremely inefficient at fighting cybercrime; or to put it another way, cyber-crooks are like terrorists or met al thieves in that their activities impose disproportionate costs on society. Some of the reasons for this are well-known: cybercrimes are global and have strong externalities, while traditional crimes such as burglary and car theft are local, and the associated equilibria have emerged after many years of optimisation. As for the more direct question of what should be done, our figures suggest that we should spend less in anticipation of cybercrime (on antivirus, firewalls, etc.) and more in response – that is, on the prosaic business of hunting down cyber-criminals and throwing them in jail.", "Cybercrime activities are supported by infrastructures and services originating from an underground economy. The current understanding of this phenomenon is that the cybercrime economy ought to be fraught with information asymmetry and adverse selection problems. They should make the effects that we observe every day impossible to sustain. In this paper, we show that the market structure and design used by cyber criminals have evolved toward a market design that is similar to legitimate, thriving, online forum markets such as eBay. We illustrate this evolution by comparing the market regulatory mechanisms of two underground forum markets: 1) a failed market for credit cards and other illegal goods and 2) another, extremely active marketplace for vulnerabilities, exploits, and cyber attacks in general. The comparison shows that cybercrime markets evolved from unruly, scam for scammers market mechanisms to mature, regulated mechanisms that greatly favors trade efficiency.", "Over the last decade, the nature of cybercrime has transformed from naive vandalism to profit-driven, leading to the emergence of a global underground economy. A noticeable trend which has surfaced in this economy is the repeated use of forums to operate online stolen data markets. Using interaction data from three prominent carding forums: Shadowcrew, Cardersmarket and Darkmarket, this study sets out to understand why forums are repeatedly chosen to operate online stolen data markets despite numerous successful infiltrations by law enforcement in the past. 
Drawing on theories from criminology, social psychology, economics and network science, this study has identified four fundamental socio-economic mechanisms offered by carding forums: (1) formal control and coordination; (2) social networking; (3) identity uncertainty mitigation; (4) quality uncertainty mitigation. Together, they give rise to a sophisticated underground market regulatory system that facilitates underground trading over the Internet and thus drives the expansion of the underground economy.", "As consumers are increasingly using the Internet to manage their finances, there has been a concomitant increase in the risk of theft and fraud by cybercriminals. Hackers who acquire sensitive consumer data utilise information on their own, or sell the information in online forums for a significant profit. Few have considered the organisational composition of the participants engaged in the sale of stolen data, including the presence of managerial oversight, division of labour, coordination of roles and purposive associations between buyers, sellers and forum operators. Thus, this qualitative study will apply Best and Luckenbill's framework of social organisation to a sample of threads from publicly accessible web forums where individuals buy and sell stolen financial information. The implications of this study for criminologists, law enforcement, the intelligence community and information security researchers will be discussed in depth.", "This paper studies an active underground economy which specializes in the commoditization of activities such as credit car d fraud, identity theft, spamming, phishing, online credential the ft, and the sale of compromised hosts. Using a seven month trace of logs collected from an active underground market operating on public Internet chat networks, we measure how the shift from “hacking for fun” to “hacking for profit” has given birth to a societ al subs trate mature enough to steal wealth into the millions of dollars in less than one year.", "With the growing sophistication and use of information technology, the past decade has witnessed a major growth in financial cybercrime. This paper focuses specifically on credit card fraud and identity theft, examining the globalisation of these activities within a ‘digital ecosystem’ conceptual framework. The relevance of concepts and analytical tools typically used to study legitimate businesses, such as value chains, dynamic capabilities and business models, is explored and tested for their relevance in understanding the scale and nature of illegal activities which are dependant upon innovation and the collective activities of global participants. It is argued that developing a better understanding of how such illegal activities are organised and operate will assist policy makers, law enforcement agencies and security firms to identify trends and concentrate limited resources in a most effective manner.", "Abstract Due to the rapid growth of the Internet, users change their preference from traditional shopping to the electronic commerce. Instead of bank shop robbery, nowadays, criminals try to find their victims in the cyberspace with some specific tricks. By using the anonymous structure of the Internet, attackers set out new techniques, such as phishing, to deceive victims with the use of false websites to collect their sensitive information such as account IDs, usernames, passwords, etc. 
Understanding whether a web page is legitimate or phishing is a very challenging problem, due to its semantics-based attack structure, which mainly exploits the computer users’ vulnerabilities. Although software companies launch new anti-phishing products, which use blacklists, heuristics, visual and machine learning-based approaches, these products cannot prevent all of the phishing attacks. In this paper, a real-time anti-phishing system, which uses seven different classification algorithms and natural language processing (NLP) based features, is proposed. The system has the following distinguishing properties from other studies in the literature: language independence, use of a huge size of phishing and legitimate data, real-time execution, detection of new websites, independence from third-party services and use of feature-rich classifiers. For measuring the performance of the system, a new dataset is constructed, and the experimental results are tested on it. According to the experimental and comparative results from the implemented classification algorithms, Random Forest algorithm with only NLP based features gives the best performance with the 97.98 accuracy rate for detection of phishing URLs.", "Despite general awareness of the importance of keeping one's system secure, and widespread availability of consumer security technologies, actual investment in security remains highly variable across the Internet population, allowing attacks such as distributed denial-of-service (DDoS) and spam distribution to continue unabated. By modeling security investment decision-making in established (e.g., weakest-link, best-shot) and novel games (e.g., weakest-target), and allowing expenditures in self-protection versus self-insurance technologies, we can examine how incentives may shift between investment in a public good (protection) and a private good (insurance), subject to factors such as network size, type of attack, loss probability, loss magnitude, and cost of technology. We can also characterize Nash equilibria and social optima for different classes of attacks and defenses. In the weakest-target game, an interesting result is that, for almost all parameter settings, more effort is exerted at Nash equilibrium than at the social optimum. We may attribute this to the \"strategic uncertainty\" of players seeking to self-protect at just slightly above the lowest protection level.", "Bitcoin, a peer-to-peer payment system and digital currency, is often involved in illicit activities such as scamming, ransomware attacks, illegal goods trading, and thievery. At the time of writing, the Bitcoin ecosystem has not yet been mapped and as such there is no estimate of the share of illicit activities. This paper provides the first estimation of the portion of cyber-criminal entities in the Bitcoin ecosystem. Our dataset consists of 854 observations categorised into 12 classes (out of which 5 are cybercrime-related) and a total of 100,000 uncategorised observations. The dataset was obtained from the data provider who applied three types of clustering of Bitcoin transactions to categorise entities: co-spend, intelligence-based, and behaviour-based. Thirteen supervised learning classifiers were then tested, of which four prevailed with a cross-validation accuracy of 77.38 , 76.47 , 78.46 , 80.76 respectively. From the top four classifiers, Bagging and Gradient Boosting classifiers were selected based on their weighted average and per class precision on the cybercrime-related categories. 
Both models were used to classify 100,000 uncategorised entities, showing that the share of cybercrime-related is 29.81 according to Bagging, and 10.95 according to Gradient Boosting with number of entities as the metric. With regard to the number of addresses and current coins held by this type of entities, the results are: 5.79 and 10.02 according to Bagging; and 3.16 and 1.45 according to Gradient Boosting." ] }
1708.04866
2766615649
Cybercrime markets support the development and diffusion of new attack technologies, vulnerability exploits, and malware. Whereas the revenue streams of cyber attackers have been studied multiple times in the literature, no quantitative account currently exists of the economics of attack acquisition and deployment. Yet this understanding is critical to characterize the production of (traded) exploits, the economy that drives it, and its effects on the overall attack scenario. In this paper we provide an empirical investigation of the economics of vulnerability exploitation and the effects of market factors on the likelihood of exploit. Our data is collected first-hand from a prominent Russian cybercrime market where the most active attack tools reported by the security industry are traded. Our findings reveal that exploits in the underground are priced similarly to or above vulnerabilities in legitimate bug-hunting programs, and that the refresh cycle of exploits is slower than is currently assumed. On the other hand, cybercriminals are becoming faster at introducing selected vulnerabilities, and the market is in clear expansion in terms of players, traded exploits, and exploit pricing. We then evaluate the effects of these market variables on the likelihood of attack realization, and find strong evidence of the correlation between market activity and exploit deployment. We discuss implications for vulnerability metrics, economics, and exploit measurement.
The presence of a cybercrime economy that absorbs vulnerabilities and generates attacks has motivated the security community to study the design of 'legitimate' vulnerability markets that attract security researchers away from the illegal marketplaces @cite_44 . Whereas several market mechanisms have been proposed @cite_31 @cite_41 , their effectiveness in deterring attacks is not clear @cite_2 @cite_63 @cite_41 . So-called responsible vulnerability disclosure is incentivized by the presence of multiple bug-hunting programs run by providers such as Google, Facebook, and Microsoft, or by 'umbrella' organizations that coordinate vulnerability reporting and disclosure @cite_3 @cite_36 @cite_44 . It is, however, unclear how these compare against the cybercrime economy, as several key parameters, such as exploit pricing in the underground, are currently unknown. Further, it remains uncertain whether the adoption of vulnerability disclosure mechanisms has a clear effect on the risk of attack in the wild @cite_2 . This study fills this gap by providing an empirical analysis of exploit pricing in the underground and by evaluating the effect of cybercrime market factors on the actual realization of attacks in the wild.
{ "cite_N": [ "@cite_41", "@cite_36", "@cite_3", "@cite_44", "@cite_63", "@cite_2", "@cite_31" ], "mid": [ "1972791208", "2153532176", "2117405938", "2110271878", "2021348304", "2513861733", "2514333820" ], "abstract": [ "Cybercrime activities are supported by infrastructures and services originating from an underground economy. The current understanding of this phenomenon is that the cybercrime economy ought to be fraught with information asymmetry and adverse selection problems. They should make the effects that we observe every day impossible to sustain. In this paper, we show that the market structure and design used by cyber criminals have evolved toward a market design that is similar to legitimate, thriving, online forum markets such as eBay. We illustrate this evolution by comparing the market regulatory mechanisms of two underground forum markets: 1) a failed market for credit cards and other illegal goods and 2) another, extremely active marketplace for vulnerabilities, exploits, and cyber attacks in general. The comparison shows that cybercrime markets evolved from unruly, scam for scammers market mechanisms to mature, regulated mechanisms that greatly favors trade efficiency.", "Over the last decade, the nature of cybercrime has transformed from naive vandalism to profit-driven, leading to the emergence of a global underground economy. A noticeable trend which has surfaced in this economy is the repeated use of forums to operate online stolen data markets. Using interaction data from three prominent carding forums: Shadowcrew, Cardersmarket and Darkmarket, this study sets out to understand why forums are repeatedly chosen to operate online stolen data markets despite numerous successful infiltrations by law enforcement in the past. Drawing on theories from criminology, social psychology, economics and network science, this study has identified four fundamental socio-economic mechanisms offered by carding forums: (1) formal control and coordination; (2) social networking; (3) identity uncertainty mitigation; (4) quality uncertainty mitigation. Together, they give rise to a sophisticated underground market regulatory system that facilitates underground trading over the Internet and thus drives the expansion of the underground economy.", "Software vulnerability disclosure has become a critical area of concern for policymakers. Traditionally, a Computer Emergency Response Team (CERT) acts as an infomediary between benign identifiers (who voluntarily report vulnerability information) and software users. After verifying a reported vulnerability, CERT sends out a public advisory so that users can safeguard their systems against potential exploits. Lately, firms such as iDefense have been implementing a new market-based approach for vulnerability information. The market-based infomediary provides monetary rewards to identifiers for each vulnerability reported. The infomediary then shares this information with its client base. Using this information, clients protect themselves against potential attacks that exploit those specific vulnerabilities.The key question addressed in our paper is whether movement toward such a market-based mechanism for vulnerability disclosure leads to a better social outcome. Our analysis demonstrates that an active unregulated market-based mechanism for vulnerabilities almost always underperforms a passive CERT-type mechanism. 
This counterintuitive result is attributed to the market-based infomediary's incentive to leak the vulnerability information inappropriately. If a profit-maximizing firm is not allowed to (or chooses not to) leak vulnerability information, we find that social welfare improves. Even a regulated market-based mechanism performs better than a CERT-type one, but only under certain conditions. Finally, we extend our analysis and show that a proposed mechanism--federally funded social planner--always performs better than a market-based mechanism.", "The rise of cybercrime in the last decade is an economic case of individuals responding to monetary and psychological incentives. Two main drivers for cybercrime can be identi fied: the potential gains from cyberattacks are increasing with the growth of importance of the Internet, and malefactors' expected costs (e.g., the penalties and the likelihood of being apprehended and prosecuted) are frequently lower compared with traditional crimes. In short, computer-mediated crimes are more convenient, and pro table, and less expensive and risky than crimes not mediated by the Internet. The increase in cybercriminal activities, coupled with ineff ective legislation and ineffective law enforcement pose critical challenges for maintaining the trust and security of ourcomputer infrastructures.Modern computer attacks encompass a broad spectrum of economic activity, where various malfeasants specialize in developing speci c goods (exploits, botnets, mailers) and services (distributing malware, monetizing stolen credentials, providing web hosting, etc.). A typical Internet fraud involves the actions of many of these individuals, such as malware writers, botnet herders, spammers, data brokers, and money launderers.Assessing the relationships among various malfeasants is an essential piece of information for discussing economic, technical, and legal proposals to address cybercrime. This paper presents a framework for understanding the interactions between these individuals and how they operate. We follow three steps.First, we present the general architecture of common computer attacks, and discuss the flow of goods and services that supports the underground economy. We discuss the general flow of resources between criminal groups and victims, and the interactions between diff erent specialized cybercriminals.Second, we describe the need to estimate the social costs of cybercrime and the profi ts of cybercriminals in order to identify optimal levels of protection. One of the main problems in quantifying the precise impact of cybercrime is that computer attacks are not always detected, or reported. Therefore we propose the need to develop a more systematic and transparent way of reporting computer breaches and their eff ects.Finally, we propose some possible countermeasures against criminal activities. In particular, we analyze the role private and public protection, and the incentives of multiple stake holders.", "In recent years, many organizations have established bounty programs that attract white hat hackers who contribute vulnerability reports of web systems. In this paper, we collect publicly available data of two representative web vulnerability discovery ecosystems (Wooyun and HackerOne) and study their characteristics, trajectory, and impact. We find that both ecosystems include large and continuously growing white hat communities which have provided significant contributions to organizations from a wide range of business sectors. 
We also analyze vulnerability trends, response and resolve behaviors, and reward structures of participating organizations. Our analysis based on the HackerOne dataset reveals that a considerable number of organizations exhibit decreasing trends for reported web vulnerabilities. We further conduct a regression study which shows that monetary incentives have a significantly positive correlation with the number of vulnerabilities reported. Finally, we make recommendations aimed at increasing participation by white hats and organizations in such ecosystems.", "A software defect that exposes a software system to a cyber security attack is known as a software vulnerability. A software security exploit is an engineered software solution that successfully exploits the vulnerability. Exploits are used to break into computer systems, but exploits are currently used also for security testing, security analytics, intrusion detection, consultation, and other legitimate and legal purposes. A well-established market emerged in the 2000s for software vulnerabilities. The current market segments populated by small and medium-sized companies exhibit signals that may eventually lead to a similar industrialization of software exploits. To these ends and against these industry trends, this paper observes the first online market place for trading exploits between buyers and sellers. The paper adopts three different perspectives to study the case. The paper (a) portrays the studied exploit market place against the historical background in the software security industry. A qualitative assessment is made to (b) evaluate the case against the common characteristics of traditional online market places. The qualitative observations are used in the quantitative part (c) for predicting the price of exploits with partial least squares regression. The results show that (i) the case is unique from a historical perspective, although (ii) the online market place characteristics are familiar. The regression estimates also indicate that (iii) the pricing of exploits is only partially dependent on such factors as the targeted platform, the date of disclosure of the exploited vulnerability, and the quality assurance service provided by the market place provider. The results allow to contemplate (iv) practical means for enhancing the market place.", "Abstract Several competing human behavior models have been proposed to model boundedly rational adversaries in repeated Stackelberg Security Games (SSG). However, these existing models fail to address three main issues which are detrimental to defender performance. First, while they attempt to learn adversary behavior models from adversaries' past actions (“attacks on targets”), they fail to take into account adversaries' future adaptation based on successes or failures of these past actions. Second, existing algorithms fail to learn a reliable model of the adversary unless there exists sufficient data collected by exposing enough of the attack surface – a situation that often arises in initial rounds of the repeated SSG. Third, current leading models have failed to include probability weighting functions, even though it is well known that human beings' weighting of probability is typically nonlinear. To address these limitations of existing models, this article provides three main contributions. 
Our first contribution is a new human behavior model, SHARP, which mitigates these three limitations as follows: (i) SHARP reasons based on success or failure of the adversary's past actions on exposed portions of the attack surface to model adversary adaptivity; (ii) SHARP reasons about similarity between exposed and unexposed areas of the attack surface, and also incorporates a discounting parameter to mitigate adversary's lack of exposure to enough of the attack surface; and (iii) SHARP integrates a non-linear probability weighting function to capture the adversary's true weighting of probability. Our second contribution is a first “repeated measures study” – at least in the context of SSGs – of competing human behavior models. This study, where each experiment lasted a period of multiple weeks with individual sets of human subjects on the Amazon Mechanical Turk platform, illustrates the strengths and weaknesses of different models and shows the advantages of SHARP. Our third major contribution is to demonstrate SHARP's superiority by conducting real-world human subjects experiments at the Bukit Barisan Seletan National Park in Indonesia against wildlife security experts." ] }
1708.04903
2745522234
Non-linear, especially convex, objective functions have been extensively studied in recent years; the corresponding approaches rely crucially on the convexity of the cost functions. In this paper, we present primal-dual approaches based on configuration linear programs to design competitive online algorithms for problems with arbitrarily growing objectives. This approach is particularly appropriate for non-linear (non-convex) objectives in the online setting. We first present a simple greedy algorithm for a general cost-minimization problem. The competitive ratio of the algorithm is characterized by means of a notion called smoothness, which is inspired by a similar concept in the context of algorithmic game theory. The algorithm gives optimal (up to a constant factor) competitive ratios while applying to different contexts such as network routing, vector scheduling, energy-efficient scheduling, and non-convex facility location. Next, we consider online @math covering problems with non-convex objectives. Building upon ideas from the primal-dual framework with configuration LPs, we derive a competitive algorithm for these problems. Our result generalizes the online primal-dual algorithm developed recently for convex objectives with monotone gradients. The competitive ratio is now characterized by a new concept, called local smoothness --- a notion inspired by smoothness. Our algorithm yields tight competitive ratios for objectives such as the sum of @math -norms and gives competitive solutions for online problems of submodular minimization and some natural non-convex minimization under covering constraints.
In this paper, we systematically strengthen natural LPs through the construction of the configuration LPs presented in @cite_15 . The authors of @cite_15 propose a scheme that consists of solving the new LPs (which have an exponential number of variables) and rounding the fractional solutions to integer ones using decoupling inequalities. With this method, they derive approximation algorithms for several (offline) optimization problems that can be formulated with linear constraints and an objective function that is a power of some constant @math . Specifically, the approximation ratio is proved to be the Bell number @math for several problems. In our approach, a crucial element in characterizing the performance of an algorithm is the smoothness property of functions. The smoothness argument was introduced in the context of algorithmic game theory, where it has successfully characterized the performance of equilibria (the price of anarchy) in many classes of games, such as congestion games @cite_12 . This notion inspires the definition of smoothness in our paper.
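For reference, the game-theoretic smoothness notion alluded to here ( @cite_12 ) is usually stated as follows; this is the standard presentation, with generic constants rather than the parameters used in this paper.

A cost-minimization game is $(\lambda,\mu)$-smooth if, for every pair of outcomes $s$ and $s^{*}$,
\[
  \sum_{i} C_i(s^{*}_i, s_{-i}) \;\le\; \lambda \, C(s^{*}) + \mu \, C(s),
\]
where $C(s)=\sum_i C_i(s)$ denotes the total cost. When such an inequality holds with $\mu<1$, the price of anarchy of pure Nash equilibria (and, more generally, of coarse correlated equilibria) is at most $\lambda/(1-\mu)$, which is the type of guarantee that the smoothness-based competitive analysis in this work parallels.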
{ "cite_N": [ "@cite_15", "@cite_12" ], "mid": [ "2079669094", "2133243407" ], "abstract": [ "In a seminal paper, Karp, Vazirani, and Vazirani show that a simple ranking algorithm achieves a competitive ratio of 1-1 e for the online bipartite matching problem in the standard adversarial model, where the ratio of 1-1 e is also shown to be optimal. Their result also implies that in the random arrivals model defined by Goel and Mehta, where the online nodes arrive in a random order, a simple greedy algorithm achieves a competitive ratio of 1-1 e. In this paper, we study the ranking algorithm in the random arrivals model, and show that it has a competitive ratio of at least 0.696, beating the 1-1 e ≈ 0.632 barrier in the adversarial model. Our result also extends to the i.i.d. distribution model of , removing the assumption that the distribution is known. Our analysis has two main steps. First, we exploit certain dominance and monotonicity properties of the ranking algorithm to derive a family of factor-revealing linear programs (LPs). In particular, by symmetry of the ranking algorithm in the random arrivals model, we have the monotonicity property on both sides of the bipartite graph, giving good \"strength\" to the LPs. Second, to obtain a good lower bound on the optimal values of all these LPs and hence on the competitive ratio of the algorithm, we introduce the technique of strongly factor-revealing LPs. In particular, we derive a family of modified LPs with similar strength such that the optimal value of any single one of these new LPs is a lower bound on the competitive ratio of the algorithm. This enables us to leverage the power of computer LP solvers to solve for large instances of the new LPs to establish bounds that would otherwise be difficult to attain by human analysis.", "The price of anarchy (POA) is a worst-case measure of the inefficiency of selfish behavior, defined as the ratio of the objective function value of a worst Nash equilibrium of a game and that of an optimal outcome. This measure implicitly assumes that players successfully reach some Nash equilibrium. This drawback motivates the search for inefficiency bounds that apply more generally to weaker notions of equilibria, such as mixed Nash and correlated equilibria; or to sequences of outcomes generated by natural experimentation strategies, such as successive best responses or simultaneous regret-minimization. We prove a general and fundamental connection between the price of anarchy and its seemingly stronger relatives in classes of games with a sum objective. First, we identify a \"canonical sufficient condition\" for an upper bound of the POA for pure Nash equilibria, which we call a smoothness argument. Second, we show that every bound derived via a smoothness argument extends automatically, with no quantitative degradation in the bound, to mixed Nash equilibria, correlated equilibria, and the average objective function value of regret-minimizing players (or \"price of total anarchy\"). Smoothness arguments also have automatic implications for the inefficiency of approximate and Bayesian-Nash equilibria and, under mild additional assumptions, for bicriteria bounds and for polynomial-length best-response sequences. 
We also identify classes of games --- most notably, congestion games with cost functions restricted to an arbitrary fixed set --- that are tight, in the sense that smoothness arguments are guaranteed to produce an optimal worst-case upper bound on the POA, even for the smallest set of interest (pure Nash equilibria). Byproducts of our proof of this result include the first tight bounds on the POA in congestion games with non-polynomial cost functions, and the first structural characterization of atomic congestion games that are universal worst-case examples for the POA." ] }
1708.04675
2746492579
Convolutional Neural Networks on graphs are an important generalization and extension of classical CNNs. While previous works generally assumed that the graph structures of samples are regular with unified dimensions, in many applications they are highly diverse or even not well defined. Under some circumstances, e.g., for chemical molecular data, clustering or coarsening to simplify the graphs is hard to justify chemically. In this paper, we propose a more general and flexible graph convolution network (EGCN) that is fed with batches of arbitrarily shaped data together with their evolving graph Laplacians and trained in a supervised fashion. Extensive experiments have been conducted to demonstrate superior performance in terms of both accelerated parameter fitting and significantly improved prediction accuracy on multiple graph-structured datasets.
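As a concrete reference for what a Laplacian-based graph convolution computes, here is a minimal sketch of the commonly used first-order spectral propagation rule with a symmetrically renormalized adjacency matrix. It illustrates the general operation only; it is not the EGCN layer proposed in the paper, and the toy graph and dimensions are made up.

import numpy as np

def normalized_adjacency(A):
    # Symmetric renormalization D^{-1/2} (A + I) D^{-1/2} of the adjacency matrix,
    # the propagation operator used by first-order spectral graph convolutions.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def graph_conv_layer(A, X, W):
    # One graph convolution: aggregate neighbor features, then apply a shared
    # linear map followed by a ReLU nonlinearity.
    return np.maximum(normalized_adjacency(A) @ X @ W, 0.0)

# Toy graph with 4 nodes, 3 input features, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.random.randn(4, 3)
W = np.random.randn(3, 2)
print(graph_conv_layer(A, X, W).shape)   # (4, 2)

According to the abstract, EGCN differs from this fixed-operator sketch in that the graph Laplacians themselves evolve during supervised training.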
Besides the above papers on constructing convolution layers on graphs, many others have studied the problem from a different angle. @cite_31 first investigated learning a network from a set of heterogeneous graphs to predict node-level features as well as to perform graph completion, although it is based on node sequence selection. @cite_25 introduced a graph diffusion process, which delivers an effect equivalent to convolution, but @cite_25 's DCNN has no dependency on the indexing of nodes. Its constraints are the highly restricted locality imposed by the diffusion process and the expensive dense matrix multiplication.
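A rough sketch of the diffusion operation described here, under the usual reading of @cite_25 's DCNN: node features are propagated through successive powers of the row-normalized transition matrix and re-weighted per hop. The hop count, toy graph, and elementwise weighting shape are assumptions made for illustration.

import numpy as np

def diffusion_conv(A, X, W, num_hops=3):
    # P is the row-normalized transition matrix; P^1 ... P^K applied to X stacks
    # the features seen after 1..K diffusion steps, and W holds one weight per
    # (hop, feature) pair.
    P = A / A.sum(axis=1, keepdims=True)
    hops = []
    Pk = np.eye(A.shape[0])
    for _ in range(num_hops):
        Pk = Pk @ P
        hops.append(Pk @ X)                       # (n_nodes, n_features) per hop
    Z = np.stack(hops, axis=1)                    # (n_nodes, num_hops, n_features)
    return np.tanh(Z * W)                         # elementwise weights, nonlinearity

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 3)
W = np.random.randn(3, 3)                         # (num_hops, n_features)
print(diffusion_conv(A, X, W).shape)              # (4, 3, 3)

The dense matrix powers in this sketch also illustrate why the paragraph flags expensive dense matrix multiplication as a limitation of the diffusion approach.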
{ "cite_N": [ "@cite_31", "@cite_25" ], "mid": [ "2963984147", "2798598284" ], "abstract": [ "We present diffusion-convolutional neural networks (DCNNs), a new model for graph-structured data. Through the introduction of a diffusion-convolution operation, we show how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. DCNNs have several attractive qualities, including a latent representation for graphical data that is invariant under isomorphism, as well as polynomial-time prediction and learning that can be represented as tensor operations and efficiently implemented on a GPU. Through several experiments with real structured datasets, we demonstrate that DCNNs are able to outperform probabilistic relational models and kernel-on-graph methods at relational node classification tasks.", "Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture." ] }
1708.04675
2746492579
Convolutional Neural Networks on graphs are an important generalization and extension of classical CNNs. While previous works generally assumed that the graph structures of samples are regular with unified dimensions, in many applications they are highly diverse or even not well defined. Under some circumstances, e.g., for chemical molecular data, clustering or coarsening to simplify the graphs is hard to justify chemically. In this paper, we propose a more general and flexible graph convolution network (EGCN) that is fed with batches of arbitrarily shaped data together with their evolving graph Laplacians and trained in a supervised fashion. Extensive experiments have been conducted to demonstrate superior performance in terms of both accelerated parameter fitting and significantly improved prediction accuracy on multiple graph-structured datasets.
Recently, @cite_19 investigated a problem similar to ours by learning an edge-conditioned feature weight matrix from edge features using a separate filter-generating network @cite_22 , although @cite_19 's application is point cloud classification. There are other studies on learning from graph data, such as @cite_5 , which proposed a kernel embedding method on the feature space for graph-structured data. Another similar work is @cite_17 , but their models do not fall into the family of feed-forward CNN analogs on graphs.
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_22", "@cite_17" ], "mid": [ "2773003563", "2963830382", "2902302021", "2909765569" ], "abstract": [ "Recognizing fine-grained categories (e.g., bird species) highly relies on discriminative part localization and part-based fine-grained feature learning. Existing approaches predominantly solve these challenges independently, while neglecting the fact that part localization (e.g., head of a bird) and fine-grained feature learning (e.g., head shape) are mutually correlated. In this paper, we propose a novel part learning approach by a multi-attention convolutional neural network (MA-CNN), where part generation and feature learning can reinforce each other. MA-CNN consists of convolution, channel grouping and part classification sub-networks. The channel grouping network takes as input feature channels from convolutional layers, and generates multiple parts by clustering, weighting and pooling from spatially-correlated channels. The part classification network further classifies an image by each individual part, through which more discriminative fine-grained features can be learned. Two losses are proposed to guide the multi-task learning of channel grouping and part classification, which encourages MA-CNN to generate more discriminative parts from feature channels and learn better fine-grained features from parts in a mutual reinforced way. MA-CNN does not need bounding box part annotation and can be trained end-to-end. We incorporate the learned parts from MA-CNN with part-CNN for recognition, and show the best performances on three challenging published fine-grained datasets, e.g., CUB-Birds, FGVC-Aircraft and Stanford-Cars.", "Feature learning on point clouds has shown great promise, with the introduction of effective and generalizable deep learning frameworks such as pointnet++. Thus far, however, point features have been abstracted in an independent and isolated manner, ignoring the relative layout of neighboring points as well as their features. In the present article, we propose to overcome this limitation by using spectral graph convolution on a local graph, combined with a novel graph pooling strategy. In our approach, graph convolution is carried out on a nearest neighbor graph constructed from a point’s neighborhood, such that features are jointly learned. We replace the standard max pooling step with a recursive clustering and pooling strategy, devised to aggregate information from within clusters of nodes that are close to one another in their spectral coordinates, leading to richer overall feature descriptors. Through extensive experiments on diverse datasets, we show a consistent demonstrable advantage for the tasks of both point set classification and segmentation. Our implementations are available at https: github.com fate3439 LocalSpecGCN.", "We present a simple and general framework for feature learning from point cloud. The key to the success of CNNs is the convolution operator that is capable of leveraging spatially-local correlation in data represented densely in grids (e.g. images). However, point cloud are irregular and unordered, thus a direct convolving of kernels against the features associated with the points will result in deserting the shape information while being variant to the orders. 
To address these problems, we propose to learn a X-transformation from the input points, which is used for simultaneously weighting the input features associated with the points and permuting them into latent potentially canonical order. Then element-wise product and sum operations of typical convolution operator are applied on the X-transformed features. The proposed method is a generalization of typical CNNs into learning features from point cloud, thus we call it PointCNN. Experiments show that PointCNN achieves on par or better performance than state-of-the-art methods on multiple challenging benchmark datasets and tasks.", "We present a novel approach to learning a point-wise, meaningful embedding for point-clouds in an unsupervised manner, through the use of neural-networks. The domain of point-cloud processing via neural-networks is rapidly evolving, with novel architectures and applications frequently emerging. Within this field of research, the availability and plethora of unlabeled point-clouds as well as their possible applications make finding ways of characterizing this type of data appealing. Though significant advancement was achieved in the realm of unsupervised learning, its adaptation to the point-cloud representation is not trivial. Previous research focuses on the embedding of entire point-clouds representing an object in a meaningful manner. We present a deep learning framework to learn point-wise description from a set of shapes without supervision. Our approach leverages self-supervision to define a relevant loss function to learn rich per-point features. We train a neural-network with objectives based on context derived directly from the raw data, with no added annotation. We use local structures of point-clouds to incorporate geometric information into each point's latent representation. In addition to using local geometric information, we encourage adjacent points to have similar representations and vice-versa, creating a smoother, more descriptive representation. We demonstrate the ability of our method to capture meaningful point-wise features through three applications. By clustering the learned embedding space, we perform unsupervised part-segmentation on point clouds. By calculating euclidean distance in the latent space we derive semantic point-analogies. Finally, by retrieving nearest-neighbors in our learned latent space we present meaningful point-correspondence within and among point-clouds." ] }
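The edge-conditioned filtering mentioned in the paragraph above — a separate filter-generating network maps each edge's features to a weight matrix that is then applied to the neighboring node's features — can be sketched as follows. This is a hedged NumPy illustration under assumed shapes and names (`edge_conditioned_conv`, `filter_net`, `edge_feats` are hypothetical), not the cited papers' implementation.

```python
import numpy as np

def edge_conditioned_conv(node_feats, edges, edge_feats, filter_net, bias=None):
    """Sketch of an edge-conditioned graph convolution (illustrative only).

    node_feats: (n, f_in) node features
    edges:      list of (src, dst) pairs
    edge_feats: (len(edges), f_e) per-edge features
    filter_net: callable mapping an (f_e,) edge feature vector to an
                (f_out, f_in) weight matrix (the "filter-generating network")
    """
    n, _ = node_feats.shape
    f_out = filter_net(edge_feats[0]).shape[0]
    out = np.zeros((n, f_out))
    count = np.zeros(n)

    for (src, dst), e in zip(edges, edge_feats):
        theta = filter_net(e)                  # edge-specific weight matrix
        out[dst] += theta @ node_feats[src]    # message from src to dst
        count[dst] += 1

    out /= np.maximum(count, 1.0)[:, None]     # average over incoming edges
    if bias is not None:
        out += bias
    return out

# Usage: a toy filter-generating network implemented as a fixed linear map.
rng = np.random.default_rng(0)
f_in, f_out, f_e = 3, 2, 4
W_gen = 0.1 * rng.normal(size=(f_out * f_in, f_e))
filter_net = lambda e: (W_gen @ e).reshape(f_out, f_in)

X = rng.normal(size=(5, f_in))
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
E = rng.normal(size=(len(edges), f_e))
print(edge_conditioned_conv(X, edges, E, filter_net).shape)   # -> (5, 2)
```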
1708.04675
2746492579
Convolutional Neural Networks on graphs are an important generalization and extension of classical CNNs. While previous works generally assumed that the graph structures of samples are regular with unified dimensions, in many applications they are highly diverse or even not well defined. Under some circumstances, e.g. chemical molecular data, clustering or coarsening to simplify the graphs is hard to justify chemically. In this paper, we propose a more general and flexible graph convolution network (EGCN) fed by batches of arbitrarily shaped data together with their evolving graph Laplacians, trained in a supervised fashion. Extensive experiments have been conducted to demonstrate the superior performance in terms of both the acceleration of parameter fitting and the significantly improved prediction accuracy on multiple graph-structured datasets.
For chemical compounds, which are naturally modeled as graphs, @cite_6 @cite_10 @cite_7 made several successful attempts to apply neural networks to learn representations for predictive tasks, which were usually tackled with handcrafted features @cite_28 or hashing @cite_26 . However, due to the constraints of spatial convolution, their models failed to make full use of the atom connectivities, which carry more information than the bond features used by @cite_16 . More recent explorations of progressive networks, multi-task learning, and low-shot or one-shot learning have also been carried out @cite_1 @cite_18 . Lastly, DeepChem [1] https://github.com/deepchem/deepchem is an outstanding open-source cheminformatics machine learning benchmark; our code and demos were built and tested upon it.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_7", "@cite_28", "@cite_1", "@cite_6", "@cite_16", "@cite_10" ], "mid": [ "2772465140", "2620906374", "2785325870", "2777416523", "2214665483", "2962742544", "2963711743", "2189911347" ], "abstract": [ "With access to large datasets, deep neural networks (DNN) have achieved human-level accuracy in image and speech recognition tasks. However, in chemistry, availability of large standardized and labelled datasets is scarce, and many chemical properties of research interest, chemical data is inherently small and fragmented. In this work, we explore transfer learning techniques in conjunction with the existing Chemception CNN model, to create a transferable and generalizable deep neural network for small-molecule property prediction. Our latest model, ChemNet learns in a semi-supervised manner from inexpensive labels computed from the ChEMBL database. When fine-tuned to the Tox21, HIV and FreeSolv dataset, which are 3 separate chemical properties that ChemNet was not originally trained on, we demonstrate that ChemNet exceeds the performance of existing Chemception models and other contemporary DNN models. Furthermore, as ChemNet has been pre-trained on a large diverse chemical database, it can be used as a general-purpose plug-and-play deep neural network for the prediction of novel small-molecule chemical properties.", "Neural networks are being used to make new types of empirical chemical models as inexpensive as force fields, but with accuracy similar to the ab initio methods used to build them. In this work, we present a neural network that predicts the energies of molecules as a sum of intrinsic bond energies. The network learns the total energies of the popular GDB9 database to a competitive MAE of 0.94 kcal mol on molecules outside of its training set, is naturally linearly scaling, and applicable to molecules consisting of thousands of bonds. More importantly, it gives chemical insight into the relative strengths of bonds as a function of their molecular environment, despite only being trained on total energy information. We show that the network makes predictions of relative bond strengths in good agreement with measured trends and human predictions. A Bonds-in-Molecules Neural Network (BIM-NN) learns heuristic relative bond strengths like expert synthetic chemists, and compares well with ab initio bond order mea...", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. 
prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL .", "Inspired by natural language processing techniques, we here introduce Mol2vec, which is an unsupervised machine learning approach to learn vector representations of molecular substructures. Like the Word2vec models, where vectors of closely related words are in close proximity in the vector space, Mol2vec learns vector representations of molecular substructures that point in similar directions for chemically related substructures. Compounds can finally be encoded as vectors by summing the vectors of the individual substructures and, for instance, be fed into supervised machine learning approaches to predict compound properties. The underlying substructure vector embeddings are obtained by training an unsupervised machine learning approach on a so-called corpus of compounds that consists of all available chemical matter. The resulting Mol2vec model is pretrained once, yields dense vector representations, and overcomes drawbacks of common compound feature representations such as sparseness and bit collision...", "Deep convolutional neural networks comprise a subclass of deep neural networks (DNN) with a constrained architecture that leverages the spatial and temporal structure of the domain they model. Convolutional networks achieve the best predictive performance in areas such as speech and image recognition by hierarchically composing simple local features into complex models. Although DNNs have been used in drug discovery for QSAR and ligand-based bioactivity predictions, none of these models have benefited from this powerful convolutional architecture. This paper introduces AtomNet, the first structure-based, deep convolutional neural network designed to predict the bioactivity of small molecules for drug discovery applications. We demonstrate how to apply the convolutional concepts of feature locality and hierarchical composition to the modeling of bioactivity and chemical interactions. In further contrast to existing DNN techniques, we show that AtomNet’s application of local convolutional filters to structural target information successfully predicts new active molecules for targets with no previously known modulators. Finally, we show that AtomNet outperforms previous docking approaches on a diverse set of benchmarks by a large margin, achieving an AUC greater than 0.9 on 57.8 of the targets in the DUDE benchmark.", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. 
Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similar striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification.", "Deep learning has the potential to revolutionize quantum chemistry as it is ideally suited to learn representations for structured data and speed up the exploration of chemical space. While convolutional neural networks have proven to be the first choice for images, audio and video data, the atoms in molecules are not restricted to a grid. Instead, their precise locations contain essential physical information, that would get lost if discretized. Thus, we propose to use continuous-filter convolutional layers to be able to model local correlations without requiring the data to lie on a grid. We apply those layers in SchNet: a novel deep learning architecture modeling quantum interactions in molecules. We obtain a joint model for the total energy and interatomic forces that follows fundamental quantum-chemical principles. Our architecture achieves state-of-the-art performance for benchmarks of equilibrium molecules and molecular dynamics trajectories. Finally, we introduce a more challenging benchmark with chemical and structural variations that suggests the path for further work.", "The Tox21 Data Challenge has been the largest effort of the scientific community to compare computational methods for toxicity prediction. This challenge comprised 12,000 environmental chemicals and drugs which were measured for 12 different toxic effects by specifically designed assays. We participated in this challenge to assess the performance of Deep Learning in computational toxicity prediction. Deep Learning has already revolutionized image processing, speech recognition, and language understanding but has not yet been applied to computational toxicity. Deep Learning is founded on novel algorithms and architectures for artificial neural networks together with the recent availability of very fast computers and massive datasets. It discovers multiple levels of distributed representations of the input, with higher levels representing more abstract concepts. We hypothesized that the construction of a hierarchy of chemical features gives Deep Learning the edge over other toxicity prediction methods. 
Furthermore, Deep Learning naturally enables multi-task learning, that is, learning of all toxic effects in one neural network and thereby learning of highly informative chemical features. In order to utilize Deep Learning for toxicity prediction, we have developed the DeepTox pipeline. First, DeepTox normalizes the chemical representations of the compounds. Then it computes a large number of chemical descriptors that are used as input to machine learning methods. In its next step, DeepTox trains models, evaluates them, and combines the best of them to ensembles. Finally, DeepTox predicts the toxicity of new compounds. In the Tox21 Data Challenge, DeepTox had the highest performance of all computational methods winning the grand challenge, the nuclear receptor panel, the stress response panel, and six single assays (teams Bioinf@JKU''). We found that Deep Learning excelled in toxicity prediction and outperformed many other computational approaches like naive Bayes, support vector machines, and random forests." ] }
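The "handcrafted features or hashing" baselines mentioned in the paragraph above are typically circular, ECFP/Morgan-style fingerprints: each atom's identifier is iteratively combined with its neighbors' identifiers and hashed into a fixed-length bit vector. The sketch below illustrates the idea on a generic molecular graph; it is a simplified toy under my own naming (`circular_fingerprint`, `atom_labels`, `neighbors`), not the exact ECFP specification or any particular library's code.

```python
import hashlib

def circular_fingerprint(atom_labels, neighbors, radius=2, n_bits=2048):
    """Sketch of an ECFP-like hashed circular fingerprint (illustrative only).

    atom_labels: list of atom descriptors (e.g. element symbols)
    neighbors:   adjacency list, neighbors[i] = indices of atoms bonded to atom i
    Returns the set of active bit positions in a length-n_bits fingerprint.
    """
    def h(s):
        # Stable hash of a string into a bit index.
        return int(hashlib.md5(s.encode()).hexdigest(), 16) % n_bits

    # Radius-0 identifiers: just the atom labels.
    ids = [str(a) for a in atom_labels]
    bits = {h(i) for i in ids}

    for _ in range(radius):
        new_ids = []
        for i, nbrs in enumerate(neighbors):
            # Combine an atom's identifier with its (sorted) neighbor identifiers.
            env = ids[i] + "|" + ",".join(sorted(ids[j] for j in nbrs))
            new_ids.append(env)
        ids = new_ids
        bits |= {h(i) for i in ids}
    return bits

# Usage: ethanol-like toy graph C-C-O (hydrogens omitted).
atoms = ["C", "C", "O"]
adj = [[1], [0, 2], [1]]
print(sorted(circular_fingerprint(atoms, adj))[:5])
```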
1708.04862
2745485852
We study the problem of coalitional manipulation---where @math manipulators try to manipulate an election on @math candidates---under general scoring rules, with a focus on the Borda protocol. We do so both in the weighted and unweighted settings. We focus on minimizing the maximum score obtainable by a non-preferred candidate. In the strongest, most general setting, we provide an algorithm for any scoring rule as described by a vector @math : for some @math , it obtains an additive approximation equal to @math , where @math is the sum of voter weights. For Borda, both the weighted and unweighted variants are known to be @math -hard. For the unweighted case, our simpler algorithm provides a randomized, additive @math approximation; in other words, if there exists a strategy enabling the preferred candidate to win by an @math margin, our method, with high probability, will find a strategy enabling her to win (albeit with a possibly smaller margin). It thus provides a somewhat stronger guarantee compared to the previous methods, which implicitly implied a strategy that provides an @math -additive approximation to the maximum score of a non-preferred candidate. For the weighted case, our generalized algorithm provides an @math -additive approximation, where @math is the sum of voter weights. This is a clear advantage over previous methods: some of them do not generalize to the weighted case, while others---which approximate the number of manipulators---pose restrictions on the weights of extra manipulators added. Our methods are based on carefully rounding an exponentially-large configuration linear program that is solved by using the ellipsoid method with an efficient separation oracle.
The computational complexity of coalitional manipulation problems has been studied extensively. For general scoring rules @math , most earlier work considered the case where the number of candidates is bounded: Conitzer et al. @cite_1 show that when @math is bounded, @math -UCM is solvable in polynomial time.
{ "cite_N": [ "@cite_1" ], "mid": [ "1605139543" ], "abstract": [ "Simple coalitional games are a fundamental class of cooperative games and voting games which are used to model coalition formation, resource allocation and decision making in computer science, artificial intelligence and multiagent systems. Although simple coalitional games are well studied in the domain of game theory and social choice, their algorithmic and computational complexity aspects have received less attention till recently. The computational aspects of simple coalitional games are of increased importance as these games are used by computer scientists to model distributed settings. This thesis fits in the wider setting of the interplay between economics and computer science which has led to the development of algorithmic game theory and computational social choice. A unified view of the computational aspects of simple coalitional games is presented here for the first time. Certain complexity results also apply to other coalitional games such as skill games and matching games. The following issues are given special consideration: influence of players, limit and complexity of manipulations in the coalitional games and complexity of resource allocation on networks. The complexity of comparison of influence between players in simple games is characterized. The simple games considered are represented by winning coalitions, minimal winning coalitions, weighted voting games or multiple weighted voting games. A comprehensive classification of weighted voting games which can be solved in polynomial time is presented. An efficient algorithm which uses generating functions and interpolation to compute an integer weight vector for target power indices is proposed. Voting theory, especially the Penrose Square Root Law, is used to investigate the fairness of a real life voting model. Computational complexity of manipulation in social choice protocols can determine whether manipulation is computationally feasible or not. The computational complexity and bounds of manipulation are considered from various angles including control, false-name manipulation and bribery. Moreover, the computational complexity of computing various cooperative game solutions of simple games in dierent representations is studied. Certain structural results regarding least core payos extend to the general monotone cooperative game. The thesis also studies a coalitional game called the spanning connectivity game. It is proved that whereas computing the Banzhaf values and Shapley-Shubik indices of such games is #P-complete, there is a polynomial time combinatorial algorithm to compute the nucleolus. The results have interesting significance for optimal strategies for the wiretapping game which is a noncooperative game defined on a network." ] }
1708.04862
2745485852
We study the problem of coalitional manipulation---where @math manipulators try to manipulate an election on @math candidates---under general scoring rules, with a focus on the Borda protocol. We do so both in the weighted and unweighted settings. We focus on minimizing the maximum score obtainable by a non-preferred candidate. In the strongest, most general setting, we provide an algorithm for any scoring rule as described by a vector @math : for some @math , it obtains an additive approximation equal to @math , where @math is the sum of voter weights. For Borda, both the weighted and unweighted variants are known to be @math -hard. For the unweighted case, our simpler algorithm provides a randomized, additive @math approximation; in other words, if there exists a strategy enabling the preferred candidate to win by an @math margin, our method, with high probability, will find a strategy enabling her to win (albeit with a possibly smaller margin). It thus provides a somewhat stronger guarantee compared to the previous methods, which implicitly implied a strategy that provides an @math -additive approximation to the maximum score of a non-preferred candidate. For the weighted case, our generalized algorithm provides an @math -additive approximation, where @math is the sum of voter weights. This is a clear advantage over previous methods: some of them do not generalize to the weighted case, while others---which approximate the number of manipulators---pose restrictions on the weights of extra manipulators added. Our methods are based on carefully rounding an exponentially-large configuration linear program that is solved by using the ellipsoid method with an efficient separation oracle.
In the weighted case, the situation is different: for all positional scoring rules @math , except plurality-like rules, @math -WCM is @math -hard when @math @cite_1 @cite_19 @cite_3 . In particular, this holds for Borda-WCM. However, the computational hardness of Borda-UCM remained open for quite some time, until it was finally shown to be @math -hard as well @cite_0 @cite_23 in 2011, even for the case of @math and adding @math manipulators.
{ "cite_N": [ "@cite_1", "@cite_3", "@cite_0", "@cite_19", "@cite_23" ], "mid": [ "2052020474", "104706817", "2174198470", "2949121808", "2005524240" ], "abstract": [ "We investigate manipulation of the Borda voting rule, as well as two elimination style voting rules, Nanson's and Baldwin's voting rules, which are based on Borda voting. We argue that these rules have a number of desirable computational properties. For unweighted Borda voting, we prove that it is NP-hard for a coalition of two manipulators to compute a manipulation. This resolves a long-standing open problem in the computational complexity of manipulating common voting rules. We prove that manipulation of Baldwin's and Nanson's rules is computationally more difficult than manipulation of Borda, as it is NP-hard for a single manipulator to compute a manipulation. In addition, for Baldwin's and Nanson's rules with weighted votes, we prove that it is NP-hard for a coalition of manipulators to compute a manipulation with a small number of candidates.Because of these NP-hardness results, we compute manipulations using heuristic algorithms that attempt to minimise the number of manipulators. We propose several new heuristic methods. Experiments show that these methods significantly outperform the previously best known heuristic method for the Borda rule. Our results suggest that, whilst computing a manipulation of the Borda rule is NP-hard, computational complexity may provide only a weak barrier against manipulation in practice. In contrast to the Borda rule, our experiments with Baldwin's and Nanson's rules demonstrate that both of them are often more difficult to manipulate in practice. These results suggest that elimination style voting rules deserve further study.", "The Borda voting rule is a positional scoring rule where, for m candidates, for every vote the first candidate receives m- 1 points, the second m- 2 points and so on. A Borda winner is a candidate with highest total score. It has been a prominent open problem to determine the computational complexity of UNWEIGHTED COALITIONAL MANIPULATION UNDER BORDA: Can one add a certain number of additional votes (called manipulators) to an election such that a distinguished candidate becomes a winner? We settle this open problem by showing NP-hardness even for two manipulators and three input votes. Moreover, we discuss extensions and limitations of this hardness result.", "Positional scoring rules in voting compute the score of an alternative by summing the scores for the alternative induced by every vote. This summation principle ensures that all votes contribute equally to the score of an alternative. We relax this assumption and, instead, aggregate scores by taking into account the rank of a score in the ordered list of scores obtained from the votes. This defines a new family of voting rules, rank-dependent scoring rules (RDSRs), based on ordered weighted average (OWA) operators, which, include all scoring rules, and many others, most of which of new. We study some properties of these rules, and show, empirically, that certain RDSRs are less manipulable than Borda voting, across a variety of statistical cultures.", "Let @math be a @math -ary predicate over a finite alphabet. Consider a random CSP @math instance @math over @math variables with @math constraints. When @math the instance @math will be unsatisfiable with high probability, and we want to find a refutation - i.e., a certificate of unsatisfiability. 
When @math is the @math -ary OR predicate, this is the well studied problem of refuting random @math -SAT formulas, and an efficient algorithm is known only when @math . Understanding the density required for refutation of other predicates is important in cryptography, proof complexity, and learning theory. Previously, it was known that for a @math -ary predicate, having @math constraints suffices for refutation. We give a criterion for predicates that often yields efficient refutation algorithms at much lower densities. Specifically, if @math fails to support a @math -wise uniform distribution, then there is an efficient algorithm that refutes random CSP @math instances @math whp when @math . Indeed, our algorithm will \"somewhat strongly\" refute @math , certifying @math , if @math then we get the strongest possible refutation, certifying @math . This last result is new even in the context of random @math -SAT. Regarding the optimality of our @math requirement, prior work on SDP hierarchies has given some evidence that efficient refutation of random CSP @math may be impossible when @math . Thus there is an indication our algorithm's dependence on @math is optimal for every @math , at least in the context of SDP hierarchies. Along these lines, we show that our refutation algorithm can be carried out by the @math -round SOS SDP hierarchy. Finally, as an application of our result, we falsify assumptions used to show hardness-of-learning results in recent work of Daniely, Linial, and Shalev-Shwartz.", "In this paper, we introduce a simple coalition formation game in the environment of bidding, which is a special case of the weighted majority game (WMG), and is named the weighted simple-majority game (WSMG). In WSMG, payoff is allocated to the winners proportional to the players’ powers, which can be measured in various ways. We define a new kind of stability: the counteraction-stability (C-stability), where any potential deviating players will confront counteractions of the other players. We show that C-stable coalition structures in WSMG always contains a minimal winning coalition of minimum total power. For the variant where powers are measured directly by their weights, we show that it is NP-hard to find a C-stable coalition structure and design a pseudo-polynomial time algorithm. Sensitivity analysis for this variant, which shows many interesting properties, is also done. We also prove that it is NP-hard to compute the Holler-Packel indices in WSMGs, and hence in WMGs as well." ] }
1708.04862
2745485852
We study the problem of coalitional manipulation---where @math manipulators try to manipulate an election on @math candidates---under general scoring rules, with a focus on the Borda protocol. We do so both in the weighted and unweighted settings. We focus on minimizing the maximum score obtainable by a non-preferred candidate. In the strongest, most general setting, we provide an algorithm for any scoring rule as described by a vector @math : for some @math , it obtains an additive approximation equal to @math , where @math is the sum of voter weights. For Borda, both the weighted and unweighted variants are known to be @math -hard. For the unweighted case, our simpler algorithm provides a randomized, additive @math approximation; in other words, if there exists a strategy enabling the preferred candidate to win by an @math margin, our method, with high probability, will find a strategy enabling her to win (albeit with a possibly smaller margin). It thus provides a somewhat stronger guarantee compared to the previous methods, which implicitly implied a strategy that provides an @math -additive approximation to the maximum score of a non-preferred candidate. For the weighted case, our generalized algorithm provides an @math -additive approximation, where @math is the sum of voter weights. This is a clear advantage over previous methods: some of them do not generalize to the weighted case, while others---which approximate the number of manipulators---pose restrictions on the weights of extra manipulators added. Our methods are based on carefully rounding an exponentially-large configuration linear program that is solved by using the ellipsoid method with an efficient separation oracle.
@cite_10 present two additional heuristics: iteratively assign the largest unallocated score either to the candidate with the largest gap, or to the candidate with the largest ratio of gap to the number of scores yet to be allocated to that candidate. To the best of our knowledge, these algorithms do not have a counterpart for the weighted case.
{ "cite_N": [ "@cite_10" ], "mid": [ "2531043207" ], "abstract": [ "We prove the first non-trivial performance ratios strictly above 0.5 for weighted versions of the oblivious matching problem. Even for the unweighted version, since Aronson, Dyer, Frieze, and Suen first proved a non-trivial ratio above 0.5 in the mid-1990s, during the next twenty years several attempts have been made to improve this ratio, until Chan, Chen, Wu and Zhao successfully achieved a significant ratio of 0.523 very recently (SODA 2014). To the best of our knowledge, our work is the first in the literature that considers the node-weighted and edge-weighted versions of the problem in arbitrary graphs (as opposed to bipartite graphs). (1) For arbitrary node weights, we prove that a weighted version of the Ranking algorithm has ratio strictly above 0.5. We have discovered a new structural property of the ranking algorithm: if a node has two unmatched neighbors at the end of algorithm, then it will still be matched even when its rank is demoted to the bottom. This property allows us to form LP constraints for both the node-weighted and the unweighted oblivious matching problems. As a result, we prove that the ratio for the node-weighted case is at least 0.501512. Interestingly via the structural property, we can also improve slightly the ratio for the unweighted case to 0.526823 (from the previous best 0.523166 in SODA 2014). (2) For a bounded number of distinct edge weights, we show that ratio strictly above 0.5 can be achieved by partitioning edges carefully according to the weights, and running the (unweighted) Ranking algorithm on each part. Our analysis is based on a new primal-dual framework known as , in which dual feasibility is bypassed. Instead, only dual constraints corresponding to edges in an optimal matching are satisfied. Using this framework we also design and analyze an algorithm for the edge-weighted online bipartite matching problem with free disposal. We prove that for the case of bounded online degrees, the ratio is strictly above 0.5." ] }
1708.04862
2745485852
We study the problem of coalitional manipulation---where @math manipulators try to manipulate an election on @math candidates---under general scoring rules, with a focus on the Borda protocol. We do so both in the weighted and unweighted settings. We focus on minimizing the maximum score obtainable by a non-preferred candidate. In the strongest, most general setting, we provide an algorithm for any scoring rule as described by a vector @math : for some @math , it obtains an additive approximation equal to @math , where @math is the sum of voter weights. For Borda, both the weighted and unweighted variants are known to be @math -hard. For the unweighted case, our simpler algorithm provides a randomized, additive @math approximation; in other words, if there exists a strategy enabling the preferred candidate to win by an @math margin, our method, with high probability, will find a strategy enabling her to win (albeit with a possibly smaller margin). It thus provides a somewhat stronger guarantee compared to the previous methods, which implicitly implied a strategy that provides an @math -additive approximation to the maximum score of a non-preferred candidate. For the weighted case, our generalized algorithm provides an @math -additive approximation, where @math is the sum of voter weights. This is a clear advantage over previous methods: some of them do not generalize to the weighted case, while others---which approximate the number of manipulators---pose restrictions on the weights of extra manipulators added. Our methods are based on carefully rounding an exponentially-large configuration linear program that is solved by using the ellipsoid method with an efficient separation oracle.
As discussed, configuration linear programs have also been used in the scheduling literature, for example for the following two extensively studied problems. In the so-called Santa Claus problem @cite_11 , Santa Claus has @math presents that he wishes to distribute among @math kids, and @math is the value that kid @math assigns to present @math . The goal is to maximize the happiness of the least happy kid: @math , where @math is the set of presents allocated to kid @math . In the makespan minimization problem of @cite_15 , we need to assign @math jobs to @math machines, and @math is the time required for machine @math to execute job @math . The goal is to minimize the makespan @math , where @math is the set of jobs assigned to machine @math . Both papers studied a natural and well-researched `restricted assignment' variant of the two problems, where @math . In @cite_11 , they obtained an @math -multiplicative approximation for the first problem, and in @cite_15 , they obtained a @math -multiplicative approximation for the second.
{ "cite_N": [ "@cite_15", "@cite_11" ], "mid": [ "1978593916", "2135932424" ], "abstract": [ "We consider the following problem: The Santa Claus has n presents that he wants to distribute among m kids. Each kid has an arbitrary value for each present. Let pij be the value that kid i has for present j. The Santa's goal is to distribute presents in such a way that the least lucky kid is as happy as possible, i.e he tries to maximize mini=1,...,m sumj ∈ Si pij where Si is a set of presents received by the i-th kid.Our main result is an O(log log m log log log m) approximation algorithm for the restricted assignment case of the problem when pij ∈ pj,0 (i.e. when present j has either value pj or 0 for each kid). Our algorithm is based on rounding a certain natural exponentially large linear programming relaxation usually referred to as the configuration LP. We also show that the configuration LP has an integrality gap of Ω(m1 2) in the general case, when pij can be arbitrary.", "One of the classic results in scheduling theory is the @math -approximation algorithm by Lenstra, Shmoys, and Tardos for the problem of scheduling jobs to minimize makespan on unrelated machines; i.e., job @math requires time @math if processed on machine @math . More than two decades after its introduction it is still the algorithm of choice even in the restricted model where processing times are of the form @math . This problem, also known as the restricted assignment problem, is NP-hard to approximate within a factor less than @math , which is also the best known lower bound for the general version. Our main result is a polynomial time algorithm that estimates the optimal makespan of the restricted assignment problem within a factor @math , where @math is an arbitrarily small constant. The result is obtained by upper bounding the integrality gap of a certain strong linear program, known as the configuration LP, that was previously successfu..." ] }