aid | mid | abstract | related_work | ref_abstract
---|---|---|---|---|
1812.11252 | 2908253384 | As science advances, the academic community has published millions of research papers. Researchers devote time and effort to search relevant manuscripts when writing a paper or simply to keep up with current research. In this paper, we consider the problem of citation recommendation by extending a set of known-to-be-relevant references. Our analysis shows the degrees of cited papers in the subgraph induced by the citations of a paper, called projection graph, follow a power law distribution. Existing popular methods are only good at finding the long tail papers, the ones that are highly connected to others. In other words, the majority of cited papers are loosely connected in the projection graph but they are not going to be found by existing methods. To address this problem, we propose to combine author, venue and keyword information to interpret the citation behavior behind those loosely connected papers. Results show that different methods are finding cited papers with widely different properties. We suggest multiple recommended lists by different algorithms could satisfy various users for a real citation recommendation system. Moreover, we also explore the fast local approximation for combined methods in order to improve the efficiency. | @cite_23 addressed the problem of citation recommendation using singular value decomposition on the adjacency matrix associated with the citation graph to construct a latent semantic space: a lower-dimensional space where correlated papers can be easily identified. Their experiments on Citeseer show this approach achieves significant success compared with Collaborative Filtering methods. @cite_11 proposed to include textual information to build a topic model of the papers and added an additional latent variable to distinguish between the focus of a paper and the context of the paper. | {
"cite_N": [
"@cite_23",
"@cite_11"
],
"mid": [
"1978059262",
"2135790056"
],
"abstract": [
"Scientists continue to find challenges in the ever increasing amount of information that has been produced on a world wide scale, during the last decades. When writing a paper, an author searches for the most relevant citations that started or were the foundation of a particular topic, which would very likely explain the thinking or algorithms that are employed. The search is usually done using specific keywords submitted to literature search engines such as Google Scholar and CiteSeer. However, finding relevant citations is distinctive from producing articles that are only topically similar to an author's proposal. In this paper, we address the problem of citation recommendation using a singular value decomposition approach. The models are trained and evaluated on the Citeseer digital library. The results of our experiments show that the proposed approach achieves significant success when compared with collaborative filtering methods on the citation recommendation task.",
"Researchers have access to large online archives of scientific articles. As a consequence, finding relevant papers has become more difficult. Newly formed online communities of researchers sharing citations provides a new way to solve this problem. In this paper, we develop an algorithm to recommend scientific articles to users of an online community. Our approach combines the merits of traditional collaborative filtering and probabilistic topic modeling. It provides an interpretable latent structure for users and items, and can form recommendations about both existing and newly published articles. We study a large subset of data from CiteULike, a bibliography sharing service, and show that our algorithm provides a more effective recommender system than traditional collaborative filtering."
]
} |
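To make the latent-space idea in the preceding related-work passage concrete, below is a minimal Python sketch of SVD-based citation recommendation in the spirit of @cite_23: factor the citation adjacency matrix, embed papers in a low-dimensional space, and rank candidates by cosine similarity. The corpus size, rank `k`, and the random toy graph are illustrative assumptions, not details from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_papers, k = 200, 16
A = (rng.random((n_papers, n_papers)) < 0.05).astype(float)  # toy citation adjacency matrix

# Truncated SVD: rows of U[:, :k] * S[:k] embed papers in a k-dimensional latent
# space where papers with correlated citation patterns land close together.
U, S, _ = np.linalg.svd(A, full_matrices=False)
emb = U[:, :k] * S[:k]

def recommend(query_idx, top_n=5):
    """Rank all other papers by cosine similarity to the query paper."""
    q = emb[query_idx]
    sims = emb @ q / (np.linalg.norm(emb, axis=1) * np.linalg.norm(q) + 1e-12)
    sims[query_idx] = -np.inf  # exclude the query itself
    return np.argsort(-sims)[:top_n]

print(recommend(0))
```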
1812.11252 | 2908253384 | As science advances, the academic community has published millions of research papers. Researchers devote time and effort to search relevant manuscripts when writing a paper or simply to keep up with current research. In this paper, we consider the problem of citation recommendation by extending a set of known-to-be-relevant references. Our analysis shows the degrees of cited papers in the subgraph induced by the citations of a paper, called projection graph, follow a power law distribution. Existing popular methods are only good at finding the long tail papers, the ones that are highly connected to others. In other words, the majority of cited papers are loosely connected in the projection graph but they are not going to be found by existing methods. To address this problem, we propose to combine author, venue and keyword information to interpret the citation behavior behind those loosely connected papers. Results show that different methods are finding cited papers with widely different properties. We suggest multiple recommended lists by different algorithms could satisfy various users for a real citation recommendation system. Moreover, we also explore the fast local approximation for combined methods in order to improve the efficiency. | A typical related-paper search scenario is that a user starts with a seed of one or more papers, then reads the available text and searches for related cited references. Sofia is a system that automates this recursive process @cite_33 . | {
"cite_N": [
"@cite_33"
],
"mid": [
"2095368000"
],
"abstract": [
"When working on a new project, researchers need to devote a significant amount of time and effort to surveying the relevant literature. This is required in order to gain expertise, evaluate the significance of their work and gain useful insights about a particular scientific domain. While necessary, relevant-work search is also a time-consuming and arduous process, requiring the continuous participation of the user. In this work, we introduce Sofia Search, a tool that fully automates the search and retrieval of the literature related to a topic. Given a seed of papers submitted by the user, Sofia Search searches the Web for candidate related papers, evaluates their relevance to the seed and downloads them for the user. The tool also provides modules for the evaluation and ranking of authors and papers, in the context of the retrieved papers. In the demo, we will demonstrate the functionality of our tool, by allowing users to use it via a simple and intuitive interface."
]
} |
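The recursive seed-expansion loop that Sofia automates can be sketched as a breadth-first traversal of the citation graph with a relevance filter. This is a hedged illustration only; the reference graph, the `relevance` scorer, the threshold, and the depth limit are hypothetical stand-ins for the system's actual components.

```python
from collections import deque

def expand_seed(seed, references, relevance, threshold=0.5, max_depth=2):
    """Breadth-first expansion of a seed set over the citation graph,
    keeping only references judged relevant to the seed."""
    found = set(seed)
    frontier = deque((p, 0) for p in seed)
    while frontier:
        paper, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for ref in references.get(paper, []):
            if ref not in found and relevance(ref, seed) >= threshold:
                found.add(ref)
                frontier.append((ref, depth + 1))
    return found - set(seed)

# Toy usage: a tiny reference graph and a trivial always-relevant scorer.
refs = {"p1": ["p2", "p3"], "p2": ["p4"], "p3": []}
print(expand_seed(["p1"], refs, relevance=lambda ref, seed: 1.0))
```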
1812.11252 | 2908253384 | As science advances, the academic community has published millions of research papers. Researchers devote time and effort to search relevant manuscripts when writing a paper or simply to keep up with current research. In this paper, we consider the problem of citation recommendation by extending a set of known-to-be-relevant references. Our analysis shows the degrees of cited papers in the subgraph induced by the citations of a paper, called projection graph, follow a power law distribution. Existing popular methods are only good at finding the long tail papers, the ones that are highly connected to others. In other words, the majority of cited papers are loosely connected in the projection graph but they are not going to be found by existing methods. To address this problem, we propose to combine author, venue and keyword information to interpret the citation behavior behind those loosely connected papers. Results show that different methods are finding cited papers with widely different properties. We suggest multiple recommended lists by different algorithms could satisfy various users for a real citation recommendation system. Moreover, we also explore the fast local approximation for combined methods in order to improve the efficiency. | The approach proposed by @cite_9 returns a set of relevant articles by optimizing a function based on a fine-grained notion of influence between documents; the authors also claim that, for paper recommendation, defining a query as a small set of known-to-be-relevant papers is better than a string of keywords. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2000613522"
],
"abstract": [
"In scientific research, it is often difficult to express information needs as simple keyword queries. We present a more natural way of searching for relevant scientific literature. Rather than a string of keywords, we define a query as a small set of papers deemed relevant to the research task at hand. By optimizing an objective function based on a fine-grained notion of influence between documents, our approach efficiently selects a set of highly relevant articles. Moreover, as scientists trust some authors more than others, results are personalized to individual preferences. In a user study, researchers found the papers recommended by our method to be more useful, trustworthy and diverse than those selected by popular alternatives, such as Google Scholar and a state-of-the-art topic modeling approach."
]
} |
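A simplified sketch of the seed-set query idea attributed to @cite_9: given a few known-relevant papers, greedily select candidates that maximize a coverage-style objective over influence scores. The random influence matrix is a placeholder; the cited work derives influence from a fine-grained document-level model, so this only illustrates the selection step.

```python
import numpy as np

rng = np.random.default_rng(1)
n_candidates, n_seed, budget = 50, 3, 5
influence = rng.random((n_candidates, n_seed))  # influence[c, s]: candidate c's influence on seed paper s

selected, covered = [], np.zeros(n_seed)
for _ in range(budget):
    # Marginal gain of adding c: how much it raises the best-so-far coverage per seed paper.
    gains = [np.maximum(influence[c], covered).sum() - covered.sum()
             if c not in selected else -1.0
             for c in range(n_candidates)]
    best = int(np.argmax(gains))
    selected.append(best)
    covered = np.maximum(influence[best], covered)

print(selected)  # a small set of articles that jointly "cover" the seed
```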
1812.11252 | 2908253384 | As science advances, the academic community has published millions of research papers. Researchers devote time and effort to search relevant manuscripts when writing a paper or simply to keep up with current research. In this paper, we consider the problem of citation recommendation by extending a set of known-to-be-relevant references. Our analysis shows the degrees of cited papers in the subgraph induced by the citations of a paper, called projection graph, follow a power law distribution. Existing popular methods are only good at finding the long tail papers, the ones that are highly connected to others. In other words, the majority of cited papers are loosely connected in the projection graph but they are not going to be found by existing methods. To address this problem, we propose to combine author, venue and keyword information to interpret the citation behavior behind those loosely connected papers. Results show that different methods are finding cited papers with widely different properties. We suggest multiple recommended lists by different algorithms could satisfy various users for a real citation recommendation system. Moreover, we also explore the fast local approximation for combined methods in order to improve the efficiency. | @cite_2 examined the effectiveness of various text-based and citation-based features on citation recommendation, finding that neither text-based nor citation-based features perform very well in isolation; in particular, text similarity alone achieves surprisingly poor performance on this task. @cite_10 considered the problem of recommending citations for placeholders in query manuscripts and proposed a non-parametric probabilistic model to measure the relevance between a citation context and a candidate citation. To reduce the burden on users, @cite_32 proposed different models for automatically finding citation contexts in an unlabeled query manuscript. | {
"cite_N": [
"@cite_10",
"@cite_32",
"@cite_2"
],
"mid": [
"",
"2002642763",
"1995326326"
],
"abstract": [
"",
"Automatic recommendation of citations for a manuscript is highly valuable for scholarly activities since it can substantially improve the efficiency and quality of literature search. The prior techniques placed a considerable burden on users, who were required to provide a representative bibliography or to mark passages where citations are needed. In this paper we present a system that considerably reduces this burden: a user simply inputs a query manuscript (without a bibliography) and our system automatically finds locations where citations are needed. We show that naive approaches do not work well due to massive noise in the document corpus. We produce a successful approach by carefully examining the relevance between segments in a query manuscript and the representative segments extracted from a document corpus. An extensive empirical evaluation using the CiteSeerX data set shows that our approach is effective.",
"We approach the problem of academic literature search by considering an unpublished manuscript as a query to a search system. We use the text of previous literature as well as the citation graph that connects it to find relevant related material. We evaluate our technique with manual and automatic evaluation methods, and find an order of magnitude improvement in mean average precision as compared to a text similarity baseline."
]
} |
1812.11252 | 2908253384 | As science advances, the academic community has published millions of research papers. Researchers devote time and effort to search relevant manuscripts when writing a paper or simply to keep up with current research. In this paper, we consider the problem of citation recommendation by extending a set of known-to-be-relevant references. Our analysis shows the degrees of cited papers in the subgraph induced by the citations of a paper, called projection graph, follow a power law distribution. Existing popular methods are only good at finding the long tail papers, the ones that are highly connected to others. In other words, the majority of cited papers are loosely connected in the projection graph but they are not going to be found by existing methods. To address this problem, we propose to combine author, venue and keyword information to interpret the citation behavior behind those loosely connected papers. Results show that different methods are finding cited papers with widely different properties. We suggest multiple recommended lists by different algorithms could satisfy various users for a real citation recommendation system. Moreover, we also explore the fast local approximation for combined methods in order to improve the efficiency. | Recently, citation recommendation from a heterogeneous network mining perspective has attracted more attention. Besides papers, metadata such as authors or keywords are also considered as entities in the graph schema. Two entities can be connected via different paths, called meta-paths, which usually carry different semantic meanings. Many works build discriminative models for citation prediction and recommendation based on meta-paths @cite_4 @cite_21 @cite_26 @cite_6 . | {
"cite_N": [
"@cite_21",
"@cite_4",
"@cite_6",
"@cite_26"
],
"mid": [
"2029099433",
"2394638270",
"2164259805",
"2091002342"
],
"abstract": [
"Citation relationship between scientific publications has been successfully used for scholarly bibliometrics, information retrieval and data mining tasks, and citation-based recommendation algorithms are well documented. While previous studies investigated citation relations from various viewpoints, most of them share the same assumption that, if paper1 cites paper2 (or author1 cites author2), they are connected, regardless of citation importance, sentiment, reason, topic, or motivation. However, this assumption is oversimplified. In this study, we employ an innovative \"context-rich heterogeneous network\" approach, which paves a new way for citation recommendation task. In the network, we characterize 1) the importance of citation relationships between citing and cited papers, and 2) the topical citation motivation. Unlike earlier studies, the citation information, in this paper, is characterized by citation textual contexts extracted from the full-text citing paper. We also propose algorithm to cope with the situation when large portion of full-text missing information exists in the bibliographic repository. Evaluation results show that, context-rich heterogeneous network can significantly enhance the citation recommendation performance.",
"To reveal information hiding in link space of bibliographical networks, link analysis has been studied from different perspectives in recent years. In this paper, we address a novel problem namely citation prediction, that is: given information about authors, topics, target publication venues as well as time of certain research paper, finding and predicting the citation relationship between a query paper and a set of previous papers. Considering the gigantic size of relevant papers, the loosely connected citation network structure as well as the highly skewed citation relation distribution, citation prediction is more challenging than other link prediction problems which have been studied before. By building a meta-path based prediction model on a topic discriminative search space, we here propose a two-phase citation probability learning approach, in order to predict citation relationship effectively and efficiently. Experiments are performed on real-world dataset with comprehensive measurements, which demonstrate that our framework has substantial advantages over commonly used link prediction approaches in predicting citation relations in bibliographical networks.",
"Citation recommendation is an interesting but challenging research problem. Most existing studies assume that all papers adopt the same criterion and follow the same behavioral pattern in deciding relevance and authority of a paper. However, in reality, papers have distinct citation behavioral patterns when looking for different references, depending on paper content, authors and target venues. In this study, we investigate the problem in the context of heterogeneous bibliographic networks and propose a novel cluster-based citation recommendation framework, called ClusCite, which explores the principle that citations tend to be softly clustered into interest groups based on multiple types of relationships in the network. Therefore, we predict each query's citations based on related interest groups, each having its own model for paper authority and relevance. Specifically, we learn group memberships for objects and the significance of relevance features for each interest group, while also propagating relative authority between objects, by solving a joint optimization problem. Experiments on both DBLP and PubMed datasets demonstrate the power of the proposed approach, with 17.68 improvement in Recall@50 and 9.57 growth in MRR over the best performing baseline.",
"The sheer volume of scholarly publications available online significantly challenges how scholars retrieve the new information available and locate the candidate reference papers. While classical text retrieval and pseudo relevance feedback (PRF) algorithms can assist scholars in accessing needed publications, in this study, we propose an innovative publication ranking method with PRF by leveraging a number of meta-paths on the heterogeneous bibliographic graph. Different meta-paths on the graph address different ranking hypotheses, whereas the pseudo-relevant papers (from the retrieval results) are used as the seed nodes on the graph. Meanwhile, unlike prior studies, we propose \"restricted meta-path\" facilitated by a new context-rich heterogeneous network extracted from full-text publication content along with citation context. By using learning-to-rank, we integrate 18 different meta-path-based ranking features to derive the final ranking scores for candidate cited papers. Experimental results with ACM full-text corpus show that meta-path-based ranking with PRF on the new graph significantly (p"
]
} |
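As a concrete illustration of the meta-path features these models build on, the sketch below counts paper–author–paper (P–A–P) paths between papers from a toy incidence matrix; a real system would compute many such meta-path counts (e.g., paper–venue–paper, paper–keyword–paper, and longer paths) as inputs to a discriminative model. The matrix values are illustrative assumptions.

```python
import numpy as np

# PA[p, a] = 1 if paper p is written by author a (toy heterogeneous graph slice).
PA = np.array([[1, 0, 1],
               [1, 1, 0],
               [0, 1, 1]])

# pap[i, j] = number of shared authors = number of P-A-P meta-paths between i and j.
pap = PA @ PA.T
print(pap)        # e.g. pap[0, 1] == 1: papers 0 and 1 share one author
```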
1812.11252 | 2908253384 | As science advances, the academic community has published millions of research papers. Researchers devote time and effort to search relevant manuscripts when writing a paper or simply to keep up with current research. In this paper, we consider the problem of citation recommendation by extending a set of known-to-be-relevant references. Our analysis shows the degrees of cited papers in the subgraph induced by the citations of a paper, called projection graph, follow a power law distribution. Existing popular methods are only good at finding the long tail papers, the ones that are highly connected to others. In other words, the majority of cited papers are loosely connected in the projection graph but they are not going to be found by existing methods. To address this problem, we propose to combine author, venue and keyword information to interpret the citation behavior behind those loosely connected papers. Results show that different methods are finding cited papers with widely different properties. We suggest multiple recommended lists by different algorithms could satisfy various users for a real citation recommendation system. Moreover, we also explore the fast local approximation for combined methods in order to improve the efficiency. | The vocabulary used in citation contexts and in the content of papers is usually quite different. To address this problem, some works propose to use a translation model, which can bridge the gap between two heterogeneous languages @cite_5 @cite_22 . Building on previous work @cite_10 @cite_32 @cite_22 , a citation recommendation system called RefSeer (http://refseer.ist.psu.edu) was built @cite_30 , which performs both topic-based global recommendations and citation-context-based local recommendations. | {
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_32",
"@cite_5",
"@cite_10"
],
"mid": [
"2023930240",
"2122778642",
"2002642763",
"2088772104",
""
],
"abstract": [
"Citations are important in academic dissemination. To help researchers check the completeness of citations while authoring a paper, we introduce a citation recommendation system called RefSeer. Researchers can use it to find related works to cited while authoring papers. It can also be used by reviewers to check the completeness of a paper's references. RefSeer presents both topic based global recommendation and also citation-context based local recommendation. By evaluating the quality of recommendation, we show that such recommendation system can recommend citations with good precision and recall. We also show that our recommendation system is very efficient and scalable.",
"When we write or prepare to write a research paper, we always have appropriate references in mind. However, there are most likely references we have missed and should have been read and cited. As such a good citation recommendation system would not only improve our paper but, overall, the efficiency and quality of literature search. Usually, a citation's context contains explicit words explaining the citation. Using this, we propose a method that \"translates\" research papers into references. By considering the citations and their contexts from existing papers as parallel data written in two different \"languages\", we adopt the translation model to create a relationship between these two \"vocabularies\". Experiments on both CiteSeer and CiteULike dataset show that our approach outperforms other baseline methods and increase the precision, recall and f-measure by at least 5 to 10 , respectively. In addition, our approach runs much faster in the both training and recommending stage, which proves the effectiveness and the scalability of our work.",
"Automatic recommendation of citations for a manuscript is highly valuable for scholarly activities since it can substantially improve the efficiency and quality of literature search. The prior techniques placed a considerable burden on users, who were required to provide a representative bibliography or to mark passages where citations are needed. In this paper we present a system that considerably reduces this burden: a user simply inputs a query manuscript (without a bibliography) and our system automatically finds locations where citations are needed. We show that naive approaches do not work well due to massive noise in the document corpus. We produce a successful approach by carefully examining the relevance between segments in a query manuscript and the representative segments extracted from a document corpus. An extensive empirical evaluation using the CiteSeerX data set shows that our approach is effective.",
"Citation Recommendation is useful for an author to find out the papers or books that can support the materials she is writing about. It is a challengeable problem since the vocabulary used in the content of papers and in the citation contexts are usually quite different. To address this problem, we propose to use translation model, which can bridge the gap between two heterogeneous languages. We conduct an experiment and find the translation model can provide much better candidates of citations than the state-of-the-art methods.",
""
]
} |
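The translation-model idea can be illustrated with an IBM Model 1-style scorer: each candidate paper is scored by the probability that its words "translate" the words of the citation context. The tiny translation table below is a toy assumption; in the cited works it is learned (e.g., via EM) from (citation context, cited paper) pairs treated as parallel text.

```python
import math

# t[(paper_word, context_word)]: toy P(paper_word | context_word) translation table.
t = {("svd", "factorization"): 0.4, ("svd", "latent"): 0.3,
     ("lstm", "recurrent"): 0.5, ("lstm", "sequence"): 0.2}

def score(context_words, paper_words, smooth=1e-6):
    """Log-probability of the paper's words given the citation context,
    mixing over context words as in IBM Model 1."""
    logp = 0.0
    for pw in paper_words:
        p = sum(t.get((pw, cw), 0.0) for cw in context_words) / len(context_words)
        logp += math.log(p + smooth)
    return logp

ctx = ["latent", "factorization"]
print(score(ctx, ["svd"]), ">", score(ctx, ["lstm"]))  # the matching paper scores higher
```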
1812.11252 | 2908253384 | As science advances, the academic community has published millions of research papers. Researchers devote time and effort to search relevant manuscripts when writing a paper or simply to keep up with current research. In this paper, we consider the problem of citation recommendation by extending a set of known-to-be-relevant references. Our analysis shows the degrees of cited papers in the subgraph induced by the citations of a paper, called projection graph, follow a power law distribution. Existing popular methods are only good at finding the long tail papers, the ones that are highly connected to others. In other words, the majority of cited papers are loosely connected in the projection graph but they are not going to be found by existing methods. To address this problem, we propose to combine author, venue and keyword information to interpret the citation behavior behind those loosely connected papers. Results show that different methods are finding cited papers with widely different properties. We suggest multiple recommended lists by different algorithms could satisfy various users for a real citation recommendation system. Moreover, we also explore the fast local approximation for combined methods in order to improve the efficiency. | Based on the hypothesis that an author's published works constitute a clean signal of the latent interests of a researcher, @cite_34 examined the effect of modeling a researcher's past works in recommending papers. Specifically, they first construct a user profile based on his or her recent works, then rank candidate papers according to the content similarity between the candidate and the user profile. Furthermore, in order to achieve a better representation of candidate papers, @cite_28 exploits potential citation papers through the use of collaborative filtering. | {
"cite_N": [
"@cite_28",
"@cite_34"
],
"mid": [
"2163089586",
"2062340319"
],
"abstract": [
"To help generate relevant suggestions for researchers, recommendation systems have started to leverage the latent interests in the publication profiles of the researchers themselves. While using such a publication citation network has been shown to enhance performance, the network is often sparse, making recommendation difficult. To alleviate this sparsity, we identify \"potential citation papers\" through the use of collaborative filtering. Also, as different logical sections of a paper have different significance, as a secondary contribution, we investigate which sections of papers can be leveraged to represent papers effectively. On a scholarly paper recommendation dataset, we show that recommendation accuracy significantly outperforms state-of-the-art recommendation baselines as measured by nDCG and MRR, when we discover potential citation papers using imputed similarities via collaborative filtering and represent candidate papers using both the full text and assigning more weight to the conclusion sections.",
"We examine the effect of modeling a researcher's past works in recommending scholarly papers to the researcher. Our hypothesis is that an author's published works constitute a clean signal of the latent interests of a researcher. A key part of our model is to enhance the profile derived directly from past works with information coming from the past works' referenced papers as well as papers that cite the work. In our experiments, we differentiate between junior researchers that have only published one paper and senior researchers that have multiple publications. We show that filtering these sources of information is advantageous -- when we additionally prune noisy citations, referenced papers and publication history, we achieve statistically significant higher levels of recommendation accuracy."
]
} |
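A minimal sketch of the profile-based ranking just described: concatenate a researcher's past works into a user profile, then rank candidates by content similarity to it. The TF-IDF representation and the toy documents are illustrative choices, not the cited papers' exact features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_works = ["citation recommendation with latent factors",
              "graph mining for bibliographic networks"]
candidates = ["deep learning for image captioning",
              "meta-path mining in heterogeneous bibliographic graphs"]

vec = TfidfVectorizer().fit(past_works + candidates)
profile = vec.transform([" ".join(past_works)])        # user profile from past works
sims = cosine_similarity(vec.transform(candidates), profile).ravel()
for score, title in sorted(zip(sims, candidates), reverse=True):
    print(f"{score:.2f}  {title}")                     # topically closer candidate ranks first
```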
1812.11485 | 2908196222 | Memory-Augmented Neural Networks (MANNs) are a class of neural networks equipped with an external memory, and are reported to be effective for tasks requiring a large long-term memory and its selective use. The core module of a MANN is called a controller, which is usually implemented as a recurrent neural network (RNN) (e.g., LSTM) to enable the use of contextual information in controlling the other modules. However, such an RNN-based controller often allows a MANN to directly solve the given task by using the (small) internal memory of the controller, and prevents the MANN from making the best use of the external memory, thereby resulting in a suboptimally trained model. To address this problem, we present a novel type of RNN-based controller that is partially non-recurrent and avoids the direct use of its internal memory for solving the task, while keeping the ability of using contextual information in controlling the other modules. Our empirical experiments using Neural Turing Machines and Differentiable Neural Computers on the Toy and bAbI tasks demonstrate that the proposed controllers give substantially better results than standard RNN-based controllers. | NTM-based MANNs have been actively studied since the advent of the NTM @cite_18 @cite_15 @cite_8 . Sparse Access Memory (SAM), a scalable end-to-end differentiable memory access scheme, was subsequently proposed. One of the biggest restrictions of MANNs is that the capacity of memory depends on the size of the external memory, while a larger external memory requires more computational cost. SAM enables efficient training of a MANN with a very large memory. Zaremba and Sutskever used a reinforcement learning algorithm on the NTM to apply it to tasks that require discrete interfaces, which are not differentiable. | {
"cite_N": [
"@cite_8",
"@cite_18",
"@cite_15"
],
"mid": [
"2950308898",
"2472819217",
""
],
"abstract": [
"Deep learning models are often not easily adaptable to new tasks and require task-specific adjustments. The differentiable neural computer (DNC), a memory-augmented neural network, is designed as a general problem solver which can be used in a wide range of tasks. But in reality, it is hard to apply this model to new tasks. We analyze the DNC and identify possible improvements within the application of question answering. This motivates a more robust and scalable DNC (rsDNC). The objective precondition is to keep the general character of this model intact while making its application more reliable and speeding up its required training time. The rsDNC is distinguished by a more robust training, a slim memory unit and a bidirectional architecture. We not only achieve new state-of-the-art performance on the bAbI task, but also minimize the performance variance between different initializations. Furthermore, we demonstrate the simplified applicability of the rsDNC to new tasks with passable results on the CNN RC task without adaptions.",
"Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of \"one-shot learning.\" Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory location-based focusing mechanisms.",
""
]
} |
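For context on what the controller of an NTM-style MANN actually drives, the sketch below implements standard content-based memory addressing: a key emitted by the controller is compared to every memory row by cosine similarity, a softmax (sharpened by the key strength beta) yields read weights, and the read vector is the weighted sum. Shapes and the random memory are toy assumptions.

```python
import numpy as np

def content_read(memory, key, beta=5.0):
    """memory: (N, W) matrix of N slots; key: (W,) query; beta: key strength."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sims)
    w /= w.sum()                  # softmax over slots -> read weighting
    return w @ memory, w          # read vector and attention weights

M = np.random.default_rng(2).standard_normal((8, 4))   # 8 memory slots of width 4
read_vec, weights = content_read(M, M[3])              # query with slot 3's own content
print(weights.argmax())                                # -> 3: addressing recovers the matching slot
```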
1812.11423 | 2962969859 | The delivery of mental health interventions via ubiquitous devices has shown much promise. A conversational chatbot is a promising oracle for delivering appropriate just-in-time interventions. However, designing emotionally-aware agents, especially in this context, is under-explored. Furthermore, the feasibility of automating the delivery of just-in-time mHealth interventions via such an agent has not been fully studied. In this paper, we present the design and evaluation of EMMA (EMotion-Aware mHealth Agent) through a two-week long human-subject experiment with N=39 participants. EMMA provides emotionally appropriate micro-activities in an empathetic manner. We show that the system can be extended to detect a user's mood purely from smartphone sensor data. Our results show that our personalized machine learning model was perceived as likable via self-reports of emotion from users. Finally, we provide a set of guidelines for the design of emotion-aware bots for mHealth. | Conversational agents have shown promise in automating the detection of psychological symptoms for both assessment and the evaluation of treatment impact @cite_13 . There is evidence suggesting that the general population can also benefit from such eHealth interventions: anxiety and depression prevention EMIs (ecological momentary interventions) are associated with small but positive effects on symptom reduction, although their medium- to long-term effects need further exploration @cite_1 . | {
"cite_N": [
"@cite_1",
"@cite_13"
],
"mid": [
"2751552421",
"2525807009"
],
"abstract": [
"Abstract Background Anxiety and depression are associated with a range of adverse outcomes and represent a large global burden to individuals and health care systems. Prevention programs are an important way to avert a proportion of the burden associated with such conditions both at a clinical and subclinical level. eHealth interventions provide an opportunity to offer accessible, acceptable, easily disseminated globally low-cost interventions on a wide scale. However, the efficacy of these programs remains unclear. The aim of this study is to review and evaluate the effects of eHealth prevention interventions for anxiety and depression. Method A systematic search was conducted on four relevant databases to identify randomized controlled trials of eHealth interventions aimed at the prevention of anxiety and depression in the general population published between 2000 and January 2016. The quality of studies was assessed and a meta-analysis was performed using pooled effect size estimates obtained from a random effects model. Results Ten trials were included in the systematic review and meta-analysis. All studies were of sufficient quality and utilized cognitive behavioural techniques. At post-treatment, the overall mean difference between the intervention and control groups was 0.25 (95 confidence internal: 0.09, 0.41; p = 0.003) for depression outcome studies and 0.31 (95 CI: 0.10, 0.52; p = 0.004) for anxiety outcome studies, indicating a small but positive effect of the eHealth interventions. The effect sizes for universal and indicated selective interventions were similar (0.29 and 0.25 respectively). However, there was inadequate evidence to suggest that such interventions have an effect on long-term disorder incidence rates. Conclusions Evidence suggests that eHealth prevention interventions for anxiety and depression are associated with small but positive effects on symptom reduction. However, there is inadequate evidence on the medium to long-term effect of such interventions, and importantly, on the reduction of incidence of disorders. Further work to explore the impact of eHealth psychological interventions on long-term incidence rates.",
"A study deployed the mental health Relational Frame Theory as grounding for an analysis of sentiment dynamics in human-language dialogs. The work takes a step towards enabling use of conversational agents in mental health settings. Sentiment tendencies and mirroring behaviors in 11k human-human dialogs were compared with behaviors when humans interacted with conversational agents in a similar-sized collection. The study finds that human sentiment-related interaction norms persist in human-agent dialogs, but that humans are twice as likely to respond negatively when faced with a negative utterance by a robot than in a comparable situation with humans. Similarly, inhibition towards use of obscenity is greatly reduced. We introduce a new Affective Neural Net implementation that specializes in analyzing sentiment in real time."
]
} |
1812.11039 | 2907931557 | In this paper, we study the loss surface of the over-parameterized fully connected deep neural networks. We prove that for any continuous activation functions, the loss function has no bad strict local minimum, both in the regular sense and in the sense of sets. This result holds for any convex and continuous loss function, and the data samples are only required to be distinct in at least one dimension. Furthermore, we show that bad local minima do exist for a class of activation functions. | Finally, landscape analysis is just one part of the deep learning theory, which includes representation, algorithm convergence, optimization landscape and generalization. In terms of algorithm convergence, there is much recent interest in analyzing algorithms that escape saddle points for generic non-convex functions @cite_47 @cite_37 @cite_32 @cite_9 , since escaping saddle points can help converge to local minima. Converging to local minima itself is not that interesting, but it will be very interesting if the hypothesis that all local minima are close to global minima holds for certain problems. Our study takes advantage of the structure of the neural networks, and is orthogonal to the research on escaping saddle points. In terms of generalization, many recent works @cite_20 @cite_10 @cite_11 try to understand why over-parameterization does not cause overfitting. This is a very interesting line of research, but its underlying assumption that over-parameterization can lead to small training error still requires rigorous justification. Again, our study is orthogonal to the research on the generalization error analysis of over-parameterized networks. | {
"cite_N": [
"@cite_37",
"@cite_9",
"@cite_32",
"@cite_47",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"2283214199",
"2769394111",
"2592651140",
"1697075315",
"2709553318",
"2565538933",
"813605148"
],
"abstract": [
"We show that gradient descent converges to a local minimizer, almost surely with random initialization. This is proved by applying the Stable Manifold Theorem from dynamical systems theory.",
"Nesterov's accelerated gradient descent (AGD), an instance of the general family of \"momentum methods\", provably achieves faster convergence rate than gradient descent (GD) in the convex setting. However, whether these methods are superior to GD in the nonconvex setting remains open. This paper studies a simple variant of AGD, and shows that it escapes saddle points and finds a second-order stationary point in @math iterations, faster than the @math iterations required by GD. To the best of our knowledge, this is the first Hessian-free algorithm to find a second-order stationary point faster than GD, and also the first single-loop algorithm with a faster rate than GD even in the setting of finding a first-order stationary point. Our analysis is based on two key ideas: (1) the use of a simple Hamiltonian function, inspired by a continuous-time perspective, which AGD monotonically decreases per step even for nonconvex functions, and (2) a novel framework called improve or localize, which is useful for tracking the long-term behavior of gradient-based optimization algorithms. We believe that these techniques may deepen our understanding of both acceleration algorithms and nonconvex optimization.",
"This paper shows that a perturbed form of gradient descent converges to a second-order stationary point in a number iterations which depends only poly-logarithmically on dimension (i.e., it is almost \"dimension-free\"). The convergence rate of this procedure matches the well-known convergence rate of gradient descent to first-order stationary points, up to log factors. When all saddle points are non-degenerate, all second-order stationary points are local minima, and our result thus shows that perturbed gradient descent can escape saddle points almost for free. Our results can be directly applied to many machine learning applications, including deep learning. As a particular concrete example of such an application, we show that our results can be used directly to establish sharp global convergence rates for matrix factorization. Our results rely on a novel characterization of the geometry around saddle points, which may be of independent interest to the non-convex optimization community.",
"We analyze stochastic gradient descent for optimizing non-convex functions. In many cases for non-convex functions the goal is to find a reasonable local minimum, and the main concern is that gradient updates are trapped in saddle points. In this paper we identify strict saddle property for non-convex problem that allows for efficient optimization. Using this property we show that stochastic gradient descent converges to a local minimum in a polynomial number of iterations. To the best of our knowledge this is the first work that gives global convergence guarantees for stochastic gradient descent on non-convex functions with exponentially many local minima and saddle points. Our analysis can be applied to orthogonal tensor decomposition, which is widely used in learning a rich class of latent variable models. We propose a new optimization formulation for the tensor decomposition problem that has strict saddle property. As a result we get the first online algorithm for orthogonal tensor decomposition with global convergence guarantee.",
"This paper presents a margin-based multiclass generalization bound for neural networks that scales with their margin-normalized \"spectral complexity\": their Lipschitz constant, meaning the product of the spectral norms of the weight matrices, times a certain correction factor. This bound is empirically investigated for a standard AlexNet network trained with SGD on the mnist and cifar10 datasets, with both original and random labels; the bound, the Lipschitz constants, and the excess risks are all in direct correlation, suggesting both that SGD selects predictors whose complexity scales with the difficulty of the learning task, and secondly that the presented bound is sensitive to this complexity.",
"An emerging design principle in deep learning is that each layer of a deep artificial neural network should be able to easily express the identity transformation. This idea not only motivated various normalization techniques, such as batch normalization, but was also key to the immense success of residual networks. @PARASPLIT In this work, we put the principle of identity parameterization on a more solid theoretical footing alongside further empirical progress. We first give a strikingly simple proof that arbitrarily deep linear residual networks have no spurious local optima. The same result for feed-forward networks in their standard parameterization is substantially more delicate. Second, we show that residual networks with ReLu activations have universal finite-sample expressivity in the sense that the network can represent any function of its sample provided that the model has more parameters than the sample size. @PARASPLIT Directly inspired by our theory, we experiment with a radically simple residual architecture consisting of only residual convolutional layers and ReLu activations, but no batch normalization, dropout, or max pool. Our model improves significantly on previous all-convolutional networks on the CIFAR10, CIFAR100, and ImageNet classification benchmarks.",
"Techniques involving factorization are found in a wide range of applications and have enjoyed significant empirical success in many fields. However, common to a vast majority of these problems is the significant disadvantage that the associated optimization problems are typically non-convex due to a multilinear form or other convexity destroying transformation. Here we build on ideas from convex relaxations of matrix factorizations and present a very general framework which allows for the analysis of a wide range of non-convex factorization problems - including matrix factorization, tensor factorization, and deep neural network training formulations. We derive sufficient conditions to guarantee that a local minimum of the non-convex optimization problem is a global minimum and show that if the size of the factorized variables is large enough then from any initialization it is possible to find a global minimizer using a purely local descent algorithm. Our framework also provides a partial theoretical justification for the increasingly common use of Rectified Linear Units (ReLUs) in deep neural networks and offers guidance on deep network architectures and regularization strategies to facilitate efficient optimization."
]
} |
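As a worked illustration of the escaping-saddle-points idea referenced above, the sketch below runs perturbed gradient descent on f(x, y) = x^2 - y^2, which has a strict saddle at the origin: when the gradient is small, a small random perturbation pushes the iterate off the saddle, after which plain descent follows the negative-curvature direction. The step size, noise scale, and tolerance are illustrative choices, not values from the cited papers.

```python
import numpy as np

def grad(p):                              # gradient of f(x, y) = x**2 - y**2
    return np.array([2.0 * p[0], -2.0 * p[1]])

rng = np.random.default_rng(0)
p = np.array([1e-9, 1e-9])                # start essentially on the saddle at the origin
for _ in range(100):
    g = grad(p)
    if np.linalg.norm(g) < 1e-4:          # small gradient: candidate saddle point
        p = p + 1e-3 * rng.standard_normal(2)   # inject a small random perturbation
        continue
    p = p - 0.05 * g                      # otherwise, plain gradient descent

print(p)  # x shrinks toward 0 while |y| grows: the iterate escaped along the negative curvature
```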
1812.11004 | 2908356592 | Recent progress has been made in using attention based encoder-decoder framework for image and video captioning. Most existing decoders apply the attention mechanism to every generated word including both visual words (e.g., "gun" and "shooting") and non-visual words (e.g. "the", "a"). However, these non-visual words can be easily predicted using natural language model without considering visual signals or attention. Imposing attention mechanism on non-visual words could mislead and decrease the overall performance of visual captioning. Furthermore, the hierarchy of LSTMs enables more complex representation of visual data, capturing information at different scales. To address these issues, we propose a hierarchical LSTM with adaptive attention (hLSTMat) approach for image and video captioning. Specifically, the proposed framework utilizes the spatial or temporal attention for selecting specific regions or frames to predict the related words, while the adaptive attention is for deciding whether to depend on the visual information or the language context information. Also, a hierarchical LSTMs is designed to simultaneously consider both low-level visual information and high-level language context information to support the caption generation. We initially design our hLSTMat for video captioning task. Then, we further refine it and apply it to image captioning task. To demonstrate the effectiveness of our proposed framework, we test our method on both video and image captioning tasks. Experimental results show that our approach achieves the state-of-the-art performance for most of the evaluation metrics on both tasks. The effect of important components is also well exploited in the ablation study. | At the earlier stage of visual captioning, several models such as @cite_46 @cite_8 @cite_68 were proposed by directly bringing together previous advances in natural language processing and computer vision. More specifically, the semantic representation of an image is captured by a CNN network and then decoded into a caption using various architectures, such as recurrent neural networks. For example, Venugopalan @cite_46 proposed the S2VT approach, which incorporates a stacked LSTM by first reading the visual sequence, comprised of RGB and/or optical flow CNN outputs, and then generating a sequence of words. Oriol @cite_8 presented a generative model based on a deep recurrent neural network. This model consists of a vision CNN followed by a language-generating RNN, and is trained to maximize the likelihood of the target description sentence given the training image. In @cite_68 , the proposed framework consists of three parts: a compositional semantics language model, a deep video model and a joint embedding model. In the joint embedding model, the distance between the outputs of the deep video model and the compositional language model is minimized in the joint space. | {
"cite_N": [
"@cite_46",
"@cite_68",
"@cite_8"
],
"mid": [
"",
"877909479",
"1895577753"
],
"abstract": [
"",
"Recently, joint video-language modeling has been attracting more and more attention. However, most existing approaches focus on exploring the language model upon on a fixed visual model. In this paper, we propose a unified framework that jointly models video and the corresponding text sentences. The framework consists of three parts: a compositional semantics language model, a deep video model and a joint embedding model. In our language model, we propose a dependency-tree structure model that embeds sentence into a continuous vector space, which preserves visually grounded meanings and word order. In the visual model, we leverage deep neural networks to capture essential semantic information from videos. In the joint embedding model, we minimize the distance of the outputs of the deep video model and compositional language model in the joint space, and update these two models jointly. Based on these three parts, our system is able to accomplish three tasks: 1) natural language generation, and 2) video retrieval and 3) language retrieval. In the experiments, the results show our approach outperforms SVM, CRF and CCA baselines in predicting Subject-Verb-Object triplet and natural sentence generation, and is better than CCA in video retrieval and language retrieval tasks.",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art."
]
} |
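The CNN-encoder / RNN-decoder pattern shared by these early captioning models can be sketched in a few lines of PyTorch: a visual feature conditions an LSTM state, which then emits the caption word by word. The dimensions, vocabulary size, assumed `<BOS>` index, and greedy decoding are illustrative assumptions; the untrained modules only demonstrate the data flow, not any cited model.

```python
import torch
import torch.nn as nn

vocab, d_feat, d_hid = 1000, 2048, 512

proj = nn.Linear(d_feat, d_hid)            # map the CNN feature into the LSTM's input space
lstm = nn.LSTMCell(d_hid, d_hid)
embed = nn.Embedding(vocab, d_hid)
out = nn.Linear(d_hid, vocab)

feat = torch.randn(1, d_feat)              # stand-in for a CNN image/video feature
h = torch.zeros(1, d_hid)
c = torch.zeros(1, d_hid)
word = torch.zeros(1, dtype=torch.long)    # assume index 0 is the <BOS> token

h, c = lstm(proj(feat), (h, c))            # condition the decoder state on the visual feature
caption = []
for _ in range(10):
    h, c = lstm(embed(word), (h, c))
    word = out(h).argmax(dim=-1)           # greedy choice of the next word
    caption.append(int(word))
print(caption)                             # 10 word indices from the (untrained) decoder
```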
1812.11004 | 2908356592 | Recent progress has been made in using attention based encoder-decoder framework for image and video captioning. Most existing decoders apply the attention mechanism to every generated word including both visual words (e.g., "gun" and "shooting") and non-visual words (e.g. "the", "a"). However, these non-visual words can be easily predicted using natural language model without considering visual signals or attention. Imposing attention mechanism on non-visual words could mislead and decrease the overall performance of visual captioning. Furthermore, the hierarchy of LSTMs enables more complex representation of visual data, capturing information at different scales. To address these issues, we propose a hierarchical LSTM with adaptive attention (hLSTMat) approach for image and video captioning. Specifically, the proposed framework utilizes the spatial or temporal attention for selecting specific regions or frames to predict the related words, while the adaptive attention is for deciding whether to depend on the visual information or the language context information. Also, a hierarchical LSTMs is designed to simultaneously consider both low-level visual information and high-level language context information to support the caption generation. We initially design our hLSTMat for video captioning task. Then, we further refine it and apply it to image captioning task. To demonstrate the effectiveness of our proposed framework, we test our method on both video and image captioning tasks. Experimental results show that our approach achieves the state-of-the-art performance for most of the evaluation metrics on both tasks. The effect of important components is also well exploited in the ablation study. | Later on, researchers found that different regions in images and different frames in videos contribute differently to caption generation, and thus various attention mechanisms were introduced to guide captioning models by telling them where to look for sentence generation, such as @cite_28 @cite_30 @cite_69 @cite_11 . Yao @cite_69 proposed to incorporate both the local dynamics of videos as well as their global temporal structure for describing videos. For simplicity, they focused on highlighting only the region having the maximum attention. A Hierarchical Recurrent Neural Encoder (HRNE) @cite_28 was introduced to generate video representations with emphasis on temporal modeling by applying a LSTM along with an attention mechanism to each temporal time step. The model in @cite_41 combines multiple forms of attention for video captioning: temporal, motion and semantic features are weighted via an attention mechanism. | {
"cite_N": [
"@cite_30",
"@cite_69",
"@cite_28",
"@cite_41",
"@cite_11"
],
"mid": [
"1957740064",
"2950307714",
"2963843052",
"2949828251",
"2950178297"
],
"abstract": [
"We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively.",
"Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions.",
"Recently, deep learning approach, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. Second, computation operations are significantly lessened while attaining more non-linearity. Third, HRNE is able to uncover temporal tran-sitions between frame chunks with different granularities, i.e. it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks.",
"Recently, video captioning has been attracting an increasing amount of interest, due to its potential for improving accessibility and information retrieval. While existing methods rely on different kinds of visual features and model structures, they do not fully exploit relevant semantic information. We present an extensible approach to jointly leverage several sorts of visual features and semantic attributes. Our novel architecture builds on LSTMs for sentence generation, with several attention layers and two multimodal layers. The attention mechanism learns to automatically select the most salient visual features or semantic attributes, and the multimodal layer yields overall representations for the input and outputs of the sentence generation component. Experimental results on the challenging MSVD and MSR-VTT datasets show that our framework outperforms the state-of-the-art approaches, while ground truth based semantic attributes are able to further elevate the output quality to a near-human level.",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO."
]
} |
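
The record above describes video-captioning models built on soft temporal attention over frame features. As a minimal illustration of that mechanism (a hedged numpy sketch; all shapes, names, and random parameters are assumptions for this example, not code from any cited paper):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention(frame_feats, decoder_state, W_f, W_h, w):
    """Additive (Bahdanau-style) attention over T video frames.

    frame_feats:   (T, D) per-frame CNN features
    decoder_state: (H,)   current hidden state of the caption decoder
    W_f, W_h, w:   learned projections (random stand-ins here)
    Returns the attended context vector (D,) and the weights (T,).
    """
    scores = np.tanh(frame_feats @ W_f.T + decoder_state @ W_h.T) @ w  # (T,)
    alpha = softmax(scores)        # attention weights over the frames
    context = alpha @ frame_feats  # weighted sum of frame features
    return context, alpha

rng = np.random.default_rng(0)
T, D, H, A = 8, 16, 12, 10
context, alpha = temporal_attention(
    rng.normal(size=(T, D)), rng.normal(size=H),
    rng.normal(size=(A, D)), rng.normal(size=(A, H)), rng.normal(size=A))
print(alpha.round(3), context.shape)
```

At each decoding step the weights are recomputed, so the generator can attend to different frames for different words.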
1812.11004 | 2908356592 | Recent progress has been made in using attention-based encoder-decoder frameworks for image and video captioning. Most existing decoders apply the attention mechanism to every generated word, including both visual words (e.g., "gun" and "shooting") and non-visual words (e.g. "the", "a"). However, these non-visual words can be easily predicted using a natural language model without considering visual signals or attention. Imposing an attention mechanism on non-visual words could mislead and decrease the overall performance of visual captioning. Furthermore, the hierarchy of LSTMs enables more complex representation of visual data, capturing information at different scales. To address these issues, we propose a hierarchical LSTM with adaptive attention (hLSTMat) approach for image and video captioning. Specifically, the proposed framework utilizes the spatial or temporal attention for selecting specific regions or frames to predict the related words, while the adaptive attention decides whether to depend on the visual information or the language context information. Also, hierarchical LSTMs are designed to simultaneously consider both low-level visual information and high-level language context information to support the caption generation. We initially design our hLSTMat for the video captioning task. Then, we further refine it and apply it to the image captioning task. To demonstrate the effectiveness of our proposed framework, we test our method on both video and image captioning tasks. Experimental results show that our approach achieves state-of-the-art performance for most of the evaluation metrics on both tasks. The effect of important components is also explored in the ablation study. | Semantic attention has been proposed in previous work @cite_34 @cite_36 @cite_72 by adopting attributes or concepts generated by other pre-trained models to enhance captioning performance. Basically, semantic attention is able to attend to semantically important concepts, attributes, or regions of interest in an image, and to weight the relative strength of attention paid to multiple concepts @cite_34 . In addition, Yao presented Long Short-Term Memory with Attributes (LSTM-A) to integrate attributes into the successful CNN-plus-RNN image captioning framework. Variant architectures were constructed to feed image features and attributes into RNNs in different ways to explore the mutual but also fuzzy relationship between them. Moreover, Pan @cite_72 proposed Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA), which takes advantage of incorporating transferred semantic attributes learnt from images and videos into sequence learning for video captioning. | {
"cite_N": [
"@cite_36",
"@cite_72",
"@cite_34"
],
"mid": [
"2552161745",
"2951159095",
""
],
"abstract": [
"Automatically describing an image with a natural language has been an emerging challenge in both fields of computer vision and natural language processing. In this paper, we present Long Short-Term Memory with Attributes (LSTM-A) - a novel architecture that integrates attributes into the successful Convolutional Neural Networks (CNNs) plus Recurrent Neural Networks (RNNs) image captioning framework, by training them in an end-to-end manner. Particularly, the learning of attributes is strengthened by integrating inter-attribute correlations into Multiple Instance Learning (MIL). To incorporate attributes into captioning, we construct variants of architectures by feeding image representations and attributes into RNNs in different ways to explore the mutual but also fuzzy relationship between them. Extensive experiments are conducted on COCO image captioning dataset and our framework shows clear improvements when compared to state-of-the-art deep models. More remarkably, we obtain METEOR CIDEr-D of 25.5 100.2 on testing data of widely used and publicly available splits in [10] when extracting image representations by GoogleNet and achieve superior performance on COCO captioning Leaderboard.",
"Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and or 3-D Convolutional Neural Networks (CNN) to encode video content and Recurrent Neural Networks (RNN) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)---a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8 and 74.0 in terms of BLEU@4 and CIDEr-D. Superior results when compared to state-of-the-art methods are also reported on M-VAD and MPII-MD.",
""
]
} |
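
The hLSTMat abstract above hinges on an adaptive gate that decides, per word, whether to rely on attended visual features or on the language model alone. A hedged sketch of such a gate (names, shapes, and the sentinel formulation are illustrative assumptions, not the authors' code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_context(visual_ctx, sentinel, hidden, w_g):
    """Blend visual and language-only information with a learned gate.

    visual_ctx: (D,) attention-pooled visual features
    sentinel:   (D,) language-only memory vector (no visual input)
    hidden:     (H,) decoder hidden state
    w_g:        (H,) gate parameters (random stand-in here)
    """
    beta = sigmoid(hidden @ w_g)  # near 1 => non-visual word, trust language
    return beta * sentinel + (1.0 - beta) * visual_ctx, beta

rng = np.random.default_rng(1)
D, H = 16, 12
ctx, beta = adaptive_context(rng.normal(size=D), rng.normal(size=D),
                             rng.normal(size=H), rng.normal(size=H))
print(round(float(beta), 3), ctx.shape)
```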
1812.11004 | 2908356592 | Recent progress has been made in using attention-based encoder-decoder frameworks for image and video captioning. Most existing decoders apply the attention mechanism to every generated word, including both visual words (e.g., "gun" and "shooting") and non-visual words (e.g. "the", "a"). However, these non-visual words can be easily predicted using a natural language model without considering visual signals or attention. Imposing an attention mechanism on non-visual words could mislead and decrease the overall performance of visual captioning. Furthermore, the hierarchy of LSTMs enables more complex representation of visual data, capturing information at different scales. To address these issues, we propose a hierarchical LSTM with adaptive attention (hLSTMat) approach for image and video captioning. Specifically, the proposed framework utilizes the spatial or temporal attention for selecting specific regions or frames to predict the related words, while the adaptive attention decides whether to depend on the visual information or the language context information. Also, hierarchical LSTMs are designed to simultaneously consider both low-level visual information and high-level language context information to support the caption generation. We initially design our hLSTMat for the video captioning task. Then, we further refine it and apply it to the image captioning task. To demonstrate the effectiveness of our proposed framework, we test our method on both video and image captioning tasks. Experimental results show that our approach achieves state-of-the-art performance for most of the evaluation metrics on both tasks. The effect of important components is also explored in the ablation study. | Recently, several researchers have started to utilize reinforcement learning to optimize image captioning @cite_3 @cite_63 @cite_65 @cite_35 . To collaboratively generate captions, Ren @cite_35 incorporated a "policy network" for generating the sentence and a "value network" for evaluating the predicted sentence globally, and improved the policy and value networks with deep reinforcement learning. Furthermore, Liu @cite_65 proposed to improve image captioning via policy gradient optimization of a linear combination of SPICE and CIDEr, while Rennie @cite_63 utilized the output of the model's own test-time inference algorithm to normalize the rewards it experiences. All the previous methods show that reinforcement learning has the potential to boost image captioning. | {
"cite_N": [
"@cite_35",
"@cite_63",
"@cite_65",
"@cite_3"
],
"mid": [
"2952591111",
"2963084599",
"2949376505",
"2176263492"
],
"abstract": [
"Image captioning is a challenging problem owing to the complexity in understanding the image content and diverse ways of describing it in natural language. Recent advances in deep neural networks have substantially improved the performance of this task. Most state-of-the-art approaches follow an encoder-decoder framework, which generates captions using a sequential recurrent prediction model. However, in this paper, we introduce a novel decision-making framework for image captioning. We utilize a \"policy network\" and a \"value network\" to collaboratively generate captions. The policy network serves as a local guidance by providing the confidence of predicting the next word according to the current state. Additionally, the value network serves as a global and lookahead guidance by evaluating all possible extensions of the current state. In essence, it adjusts the goal of predicting the correct words towards the goal of generating captions similar to the ground truth captions. We train both networks using an actor-critic reinforcement learning model, with a novel reward defined by visual-semantic embedding. Extensive experiments and analyses on the Microsoft COCO dataset show that the proposed framework outperforms state-of-the-art approaches across different evaluation metrics.",
"Recently it has been shown that policy-gradient methods for reinforcement learning can be utilized to train deep end-to-end systems directly on non-differentiable metrics for the task at hand. In this paper we consider the problem of optimizing image captioning systems using reinforcement learning, and show that by carefully optimizing our systems using the test metrics of the MSCOCO task, significant gains in performance can be realized. Our systems are built using a new optimization approach that we call self-critical sequence training (SCST). SCST is a form of the popular REINFORCE algorithm that, rather than estimating a baseline to normalize the rewards and reduce variance, utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences. Using this approach, estimating the reward signal (as actor-critic methods must do) and estimating normalization (as REINFORCE algorithms typically do) is avoided, while at the same time harmonizing the model with respect to its test-time inference procedure. Empirically we find that directly optimizing the CIDEr metric with SCST and greedy decoding at test-time is highly effective. Our results on the MSCOCO evaluation sever establish a new state-of-the-art on the task, improving the best result in terms of CIDEr from 104.9 to 114.7.",
"Current image captioning methods are usually trained via (penalized) maximum likelihood estimation. However, the log-likelihood score of a caption does not correlate well with human assessments of quality. Standard syntactic evaluation metrics, such as BLEU, METEOR and ROUGE, are also not well correlated. The newer SPICE and CIDEr metrics are better correlated, but have traditionally been hard to optimize for. In this paper, we show how to use a policy gradient (PG) method to directly optimize a linear combination of SPICE and CIDEr (a combination we call SPIDEr): the SPICE score ensures our captions are semantically faithful to the image, while CIDEr score ensures our captions are syntactically fluent. The PG method we propose improves on the prior MIXER approach, by using Monte Carlo rollouts instead of mixing MLE training with PG. We show empirically that our algorithm leads to easier optimization and improved results compared to MIXER. Finally, we show that using our PG method we can optimize any of the metrics, including the proposed SPIDEr metric which results in image captions that are strongly preferred by human raters compared to captions generated by the same model but trained to optimize MLE or the COCO metrics.",
"Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster."
]
} |
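
The SCST abstract above replaces a learned baseline with the reward of the model's own greedy decode. The core of that idea fits in a few lines; the reward function below is a toy word-overlap stand-in for CIDEr, and all names are illustrative:

```python
def self_critical_advantage(reward_fn, sampled_caption, greedy_caption, reference):
    """Advantage used to scale the REINFORCE gradient of log p(sample)."""
    r_sample = reward_fn(sampled_caption, reference)
    r_greedy = reward_fn(greedy_caption, reference)  # test-time decode as baseline
    return r_sample - r_greedy

def toy_overlap_reward(candidate, reference):
    # Stand-in for CIDEr/BLEU: fraction of reference words covered.
    cand, ref = set(candidate.split()), set(reference.split())
    return len(cand & ref) / max(len(ref), 1)

adv = self_critical_advantage(toy_overlap_reward,
                              "a man riding a horse",
                              "a man on a horse",
                              "a man riding a brown horse")
print(adv)  # positive: the sample beat greedy decoding, so it is reinforced
```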
1812.11149 | 2888540334 | We consider the online problem in which an intermediary trades identical items with a sequence of n buyers and n sellers, each of unit demand. We assume that the values of the traders are selected by an adversary and the sequence is randomly permuted. We give competitive algorithms for two objectives: welfare and gain-from-trade. | The wide range of applications of secretary models (and the related prophet inequalities) has led to the design of posted-price mechanisms that are simple to describe, robust, truthful, and achieve surprisingly good approximation ratios. Prophet inequality techniques were introduced to online auctions in @cite_14 . The @math -choice secretary problem described above was then studied in @cite_16 , which, combined with @cite_12 , yielded an asymptotically optimal, truthful mechanism. For more general auction settings, posted-price mechanisms have been used in @cite_1 for unit-demand agents and extended in @cite_3 for combinatorial auctions and in @cite_10 for online budgeted settings. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_3",
"@cite_16",
"@cite_10",
"@cite_12"
],
"mid": [
"1530458910",
"2077124610",
"2950351404",
"2150582214",
"2746812626",
"2061418963"
],
"abstract": [
"Recent work on online auctions for digital goods has explored the role of optimal stopping theory -- particularly secretary problems -- in the design of approximately optimal online mechanisms. This work generally assumes that the size of the market (number of bidders) is known a priori, but that the mechanism designer has no knowledge of the distribution of bid values. However, in many real-world applications (such as online ticket sales), the opposite is true: the seller has distributional knowledge of the bid values (e.g., via the history of past transactions in the market), but there is uncertainty about market size. Adopting the perspective of automated mechanism design, introduced by Conitzer and Sandholm, we develop algorithms that compute an optimal, or approximately optimal, online auction mechanism given access to this distributional knowledge. Our main results are twofold. First, we show that when the seller does not know the market size, no constant-approximation to the optimum efficiency or revenue is achievable in the worst case, even under the very strong assumption that bid values are i.i.d. samples from a distribution known to the seller. Second, we show that when the seller has distributional knowledge of the market size as well as the bid values, one can do well in several senses. Perhaps most interestingly, by combining dynamic programming with prophet inequalities (a technique from optimal stopping theory) we are able to design and analyze online mechanisms which are temporally strategyproof (even with respect to arrival and departure times) and approximately efficiency (revenue)-maximizing. In exploring the interplay between automated mechanism design and prophet inequalities, we prove new prophet inequalities motivated by the auction setting.",
"We study the classic mathematical economics problem of Bayesian optimal mechanism design where a principal aims to optimize expected revenue when allocating resources to self-interested agents with preferences drawn from a known distribution. In single parameter settings (i.e., where each agent's preference is given by a single private value for being served and zero for not being served) this problem is solved [20]. Unfortunately, these single parameter optimal mechanisms are impractical and rarely employed [1], and furthermore the underlying economic theory fails to generalize to the important, relevant, and unsolved multi-dimensional setting (i.e., where each agent's preference is given by multiple values for each of the multiple services available) [25]. In contrast to the theory of optimal mechanisms we develop a theory of sequential posted price mechanisms, where agents in sequence are offered take-it-or-leave-it prices. We prove that these mechanisms are approximately optimal in single-dimensional settings. These posted-price mechanisms avoid many of the properties of optimal mechanisms that make the latter impractical. Furthermore, these mechanisms generalize naturally to multi-dimensional settings where they give the first known approximations to the elusive optimal multi-dimensional mechanism design problem. In particular, we solve multi-dimensional multi-unit auction problems and generalizations to matroid feasibility constraints. The constant approximations we obtain range from 1.5 to 8. For all but one case, our posted price sequences can be computed in polynomial time. This work can be viewed as an extension and improvement of the single-agent algorithmic pricing work of [9] to the setting of multiple agents where the designer has combinatorial feasibility constraints on which agents can simultaneously obtain each service.",
"We study anonymous posted price mechanisms for combinatorial auctions in a Bayesian framework. In a posted price mechanism, item prices are posted, then the consumers approach the seller sequentially in an arbitrary order, each purchasing her favorite bundle from among the unsold items at the posted prices. These mechanisms are simple, transparent and trivially dominant strategy incentive compatible (DSIC). We show that when agent preferences are fractionally subadditive (which includes all submodular functions), there always exist prices that, in expectation, obtain at least half of the optimal welfare. Our result is constructive: given black-box access to a combinatorial auction algorithm A, sample access to the prior distribution, and appropriate query access to the sampled valuations, one can compute, in polytime, prices that guarantee at least half of the expected welfare of A. As a corollary, we obtain the first polytime (in n and m) constant-factor DSIC mechanism for Bayesian submodular combinatorial auctions, given access to demand query oracles. Our results also extend to valuations with complements, where the approximation factor degrades linearly with the level of complementarity.",
"We study a limited-supply online auction problem, in which an auctioneer has k goods to sell and bidders arrive and depart dynamically. We suppose that agent valuations are drawn independently from some unknown distribution and construct an adaptive auction that is nevertheless value- andtime-strategy proof. For the k=1 problem we have a strategyproof variant on the classic secretary problem. We present a 4-competitive (e-competitive) strategyproof online algorithm with respect to offline Vickrey for revenue (efficiency). We also show (in a model that slightly generalizes the assumption of independent valuations) that no mechanism can be better than 3 2-competitive (2-competitive) for revenue (efficiency). Our general approach considers a learning phase followed by an accepting phase, and is careful to handle incentive issues for agents that span the two phases. We extend to the k›1 case, by deriving strategyproof mechanisms which are constant-competitive for revenue and efficiency. Finally, we present some strategyproof competitive algorithms for the case in which adversary uses a distribution known to the mechanism.",
"We study online multi-unit auctions in which each agent’s private type consists of the agent’s arrival and departure times, valuation function and budget. Similarly to secretary settings, the different attributes of the agents’ types are determined by an adversary, but the arrival process is random. We establish a general framework for devising truthful random sampling mechanisms for online multi-unit settings with budgeted agents. We demonstrate the applicability of our framework by applying it to different objective functions (revenue and liquid welfare), and a range of assumptions about the agents’ valuations (additive or general) and the items’ nature (divisible or indivisible). Our main result is the design of mechanisms for additive bidders with budget constraints that extract a constant fraction of the optimal revenue, for divisible and indivisible items (under a standard large market assumption). We also show a mechanism that extracts a constant fraction of the optimal liquid welfare for general valuations over divisible items.",
"In the classical secretary problem, a set S of numbers is presented to an online algorithm in random order. At any time the algorithm may stop and choose the current element, and the goal is to maximize the probability of choosing the largest element in the set. We study a variation in which the algorithm is allowed to choose k elements, and the goal is to maximize their sum. We present an algorithm whose competitive ratio is 1-O(√1 k). To our knowledge, this is the first algorithm whose competitive ratio approaches 1 as k ← ∞. As an application we solve an open problem in the theory of online auction mechanisms."
]
} |
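
Several of the abstracts above build on the classical secretary rule: observe a constant fraction of the randomly ordered sequence, then accept the first value that beats everything seen so far. A self-contained sketch of the k = 1 case (this is the textbook algorithm, not the mechanism of any single cited paper):

```python
import math
import random

def secretary(values):
    """Observe the first n/e values, then take the first one that beats them."""
    n = len(values)
    cutoff = max(1, int(n / math.e))
    best_seen = max(values[:cutoff])
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]  # forced to accept the last candidate

random.seed(0)
trials, wins = 10000, 0
for _ in range(trials):
    vals = [random.random() for _ in range(50)]
    wins += secretary(vals) == max(vals)
print(wins / trials)  # roughly 1/e ~ 0.37, as the classical analysis predicts
```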
1907.03572 | 2956148802 | Emotional aspects play an important part in our interaction with music. However, modelling these aspects in MIR systems has been notoriously challenging since emotion is an inherently abstract and subjective experience, thus making it difficult to quantify or predict in the first place, and to make sense of the predictions in the next. In an attempt to create a model that can give a musically meaningful and intuitive explanation for its predictions, we propose a VGG-style deep neural network that learns to predict emotional characteristics of a musical piece together with (and based on) human-interpretable, mid-level perceptual features. We compare this to predicting emotion directly with an identical network that does not take into account the mid-level features and observe that the loss in predictive performance of going through the mid-level features is surprisingly low, on average. The design of our network allows us to visualize the effects of perceptual features on individual emotion predictions, and we argue that the small loss in performance in going through the mid-level features is justified by the gain in explainability of the predictions. | In the MIR field, audio-based music emotion recognition (MER) has traditionally been done by extracting selected features from the audio and predicting emotion based on subsequent processing of these features @cite_1 . Methods such as linear regression, regression trees, support vector regression, and variants have been used for prediction, as mentioned in the systematic evaluation study by @cite_12 . Techniques using regression-like algorithms have generally focused on predicting arousal and valence as per Russell's well-known circumplex model of emotion @cite_5 . Deep learning methods have also been employed for predicting arousal and valence, for example @cite_2 , which investigated BLSTM-RNNs in tandem with other methods, and @cite_8 , which used LSTM-RNNs. Others such as @cite_6 and @cite_10 use support vector classification to predict the emotion class. @cite_7 provide a summary of entries to the MediaEval emotion characterization challenge and quote results for arousal and valence prediction. | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_2",
"@cite_5",
"@cite_10",
"@cite_12"
],
"mid": [
"2592535880",
"2403697441",
"2341090665",
"",
"2400268313",
"2149628368",
"",
"2023001347"
],
"abstract": [
"Music emotion recognition (MER) field rapidly expanded in the last decade. Many new methods and new audio features are developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the data representation diversity and scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2Hz time resolution). Using DEAM, we organized the ‘Emotion in Music’ task at MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that the recurrent neural network based approaches combined with large feature-sets work best for dynamic MER.",
"In this paper we describe our approach for the MediaEval's \"Emotion in Music\" task. Our method consists of deep Long-Short Term Memory Recurrent Neural Networks (LSTM-RNN) for dynamic Arousal and Valence regression, using acoustic and psychoacoustic features extracted from the songs that have been previously proven as effective for emotion prediction in music. Results on the challenge test demonstrate an excellent performance for Arousal estimation (r = 0.613 ± 0.278), but not for Valence (r = 0.026 ± 0.500). Issues regarding the quality of the test set annotations' reliability and distributions are indicated as plausible justifications for these results. By using a subset of the development set that was left out for performance estimation, we could determine that the performance of our approach may be underestimated for Valence (Arousal: r = 0.596 ± 0.386; Valence: r = 0.458 ± 0.551).",
"This paper surveys the state of the art in automatic emotion recognition in music. Music is oftentimes referred to as a “language of emotion” [1], and it is natural for us to categorize music in terms of its emotional associations. Myriad features, such as harmony, timbre, interpretation, and lyrics affect emotion, and the mood of a piece may also change over its duration. But in developing automated systems to organize music in terms of emotional content, we are faced with a problem that oftentimes lacks a welldefined answer; there may be considerable disagreement regarding the perception and interpretation of the emotions of a song or ambiguity within the piece itself. When compared to other music information retrieval tasks (e.g., genre identification), the identification of musical mood is still in its early stages, though it has received increasing attention in recent years. In this paper we explore a wide range of research in music emotion recognition, particularly focusing on methods that use contextual text information (e.g., websites, tags, and lyrics) and content-based approaches, as well as systems combining multiple feature domains.",
"",
"The goal of the “Emotion in Music” task in MediaEval 2015 is to automatically estimate the emotions expressed by music (in terms of Arousal and Valence) in a time-continuous fashion. In this paper, considering the high context correlation among the music feature sequence, we study several multiscale approaches at different levels, including acoustic feature learning with Deep Brief Networks (DBNs) followed a modified Autoencoder (AE), bi-directional Long-Short Term Memory Recurrent Neural Networks (BLSTM-RNNs) based multi-scale regression fusion with Extreme Learning Machine (ELM), and hierarchical prediction with Support Vector Regression (SVR). The evaluation performances of all runs submitted are significantly better than the baseline provided by the organizers, illustrating the effectiveness of the proposed approaches.",
"",
"",
"Abstract Automated music emotion recognition (MER) is a challenging task in Music Information Retrieval with wide-ranging applications. Some recent studies pose MER as a continuous regression problem in the Arousal-Valence (AV) plane. These consist of variations on a common architecture having a universal model of emotional response, a common repertoire of low-level audio features, a bag-of-frames approach to audio analysis, and relatively small data sets. These approaches achieve some success at MER and suggest that further improvements are possible with current technology. Our contribution to the state of the art is to examine just how far one can go within this framework, and to investigate what the limitations of this framework are. We present the results of a systematic study conducted in an attempt to maximize the prediction performance of an automated MER system using the architecture described. We begin with a carefully constructed data set, emphasizing quality over quantity. We address affect ind..."
]
} |
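
The traditional MER pipeline sketched in the related work above (audio features in, one regressor per affect dimension out) can be illustrated as follows. This is a hedged toy example with synthetic features, assuming scikit-learn is available; a real system would use spectral, rhythmic, and psychoacoustic descriptors:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))  # stand-in per-song feature vectors
arousal = X[:, 0] * 0.8 + rng.normal(scale=0.1, size=200)
valence = X[:, 1] * 0.5 + rng.normal(scale=0.3, size=200)

for dim, y in {"arousal": arousal, "valence": valence}.items():
    model = SVR(kernel="rbf").fit(X[:150], y[:150])      # one regressor per dimension
    print(dim, round(model.score(X[150:], y[150:]), 3))  # held-out R^2
```

The pattern the studies above report, with arousal far easier to predict than valence, appears here only because the synthetic valence target is noisier.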
1907.03572 | 2956148802 | Emotional aspects play an important part in our interaction with music. However, modelling these aspects in MIR systems has been notoriously challenging since emotion is an inherently abstract and subjective experience, thus making it difficult to quantify or predict in the first place, and to make sense of the predictions in the next. In an attempt to create a model that can give a musically meaningful and intuitive explanation for its predictions, we propose a VGG-style deep neural network that learns to predict emotional characteristics of a musical piece together with (and based on) human-interpretable, mid-level perceptual features. We compare this to predicting emotion directly with an identical network that does not take into account the mid-level features and observe that the loss in predictive performance of going through the mid-level features is surprisingly low, on average. The design of our network allows us to visualize the effects of perceptual features on individual emotion predictions, and we argue that the small loss in performance in going through the mid-level features is justified by the gain in explainability of the predictions. | Deep neural networks are preferable for many tasks due to their high performance but can be considered black boxes due to their non-linear and nested structure. While in some fields such as healthcare or criminal justice the use of predictive analytics can have life-affecting consequences @cite_9 , the decisions of MIR models are generally not as severe. Nevertheless, also in MIR it would be desirable to be able to obtain explanations for the decisions of a music recommendation or search system, for various reasons (see also Section ). Many current methods for obtaining insights into deep network-based audio classification systems do not explain the predictions in a human-understandable way but rather design special filters that can be visualized @cite_16 , or analyze neuron activations @cite_3 . To the best of our knowledge, @cite_13 is the only attempt to build an interpretable model for MER. They performed the task of feature extraction and selection and built models from different model classes on top of them. The only interpretation offered is the reporting of coefficients from their logistic regression models, without further explanation. | {
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_13",
"@cite_3"
],
"mid": [
"2894881080",
"2964052309",
"2414752589",
"2918524527"
],
"abstract": [
"The authors developed and implemented transparent machine-learning models that call into question the use of black-box machine-learning models in healthcare and criminal justice applications.",
"Deep learning is progressively gaining popularity as a viable alternative to i-vectors for speaker recognition. Promising results have been recently obtained with Convolutional Neural Networks (CNNs) when fed by raw speech samples directly. Rather than employing standard hand-crafted features, the latter CNNs learn low-level speech representations from waveforms, potentially allowing the network to better capture important narrow-band speaker characteristics such as pitch and formants. Proper design of the neural network is crucial to achieve this goal.This paper proposes a novel CNN architecture, called SincNet, that encourages the first convolutional layer to discover more meaningful filters. SincNet is based on parametrized sinc functions, which implement band-pass filters. In contrast to standard CNNs, that learn all elements of each filter, only low and high cutoff frequencies are directly learned from data with the proposed method. This offers a very compact and efficient way to derive a customized filter bank specifically tuned for the desired application.Our experiments, conducted on both speaker identification and speaker verification tasks, show that the proposed architecture converges faster and performs better than a standard CNN on raw waveforms.",
"Music emotion recognition (MER) is an important topic in music understanding, recommendation, retrieval and human computer interaction. Great success has been achieved by machine learning methods in estimating human emotional response to music. However, few of them pay much attention in semantic interpret for emotion response. In our work, we first train an interpretable model between acoustic audio and emotion. Filter, wrapper and shrinkage methods are applied to select important features. We then apply statistical models to build and explain the emotion model. Extensive experimental results reveal that the shrinkage methods outperform the wrapper methods and the filter methods in arousal emotion. In addition, we observed that only a small set of the extracted features have the key effects to arousal. While, most of our extracted features have small contribution to valence music perception. Ultimately, we obtain a higher average accuracy rate in arousal, compared to that in valence.",
""
]
} |
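
The record above notes that the cited MER work "interprets" its model by reporting logistic-regression coefficients. A hedged sketch of that style of interpretation (feature names and data are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
feature_names = ["tempo", "loudness", "mode_major", "spectral_flux"]  # assumed
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # toy "high arousal" label

clf = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
for name, coef in sorted(zip(feature_names, clf.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{name:14s} {coef:+.2f}")  # signed weights serve as the "explanation"
```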
1907.03336 | 2955271505 | Over the past 10 years, many recommendation techniques have been based on embedding users and items in latent vector spaces, where the inner product of a (user,item) pair of vectors represents the predicted affinity of the user to the item. A wealth of literature has focused on the various modeling approaches that result in embeddings, and has compared their quality metrics, learning complexity, etc. However, much less attention has been devoted to the issues surrounding productization of an embeddings-based high-throughput, low-latency recommender system, in particular how the system might keep up with the changing embeddings as new models are learnt. This paper describes a reference architecture of a high-throughput, large-scale recommendation service which leverages a search engine as its runtime core. We describe how the search index and the query builder adapt to changes in the embeddings, which often happen at a different cadence than index builds. We provide solutions for both id-based and feature-based embeddings, as well as for batch indexing and incremental indexing setups. The described system is at the core of a Web content discovery service that serves tens of billions of recommendations per day in response to billions of user requests. | Matrix Factorization @cite_9 , made popular by the Netflix prize competition, embeds both users and items into a latent feature space of a given dimension. Factorization Machines (FM) and their field-aware extension @cite_2 @cite_7 extend the basic matrix factorization model to model feature interactions. Every feature (an ID or attribute in general) has a latent vector, and the dot products of all pairs of feature latent vectors are summed together. OffSet @cite_3 uses embeddings to solve for interactions between user features and item features. | {
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_7",
"@cite_2"
],
"mid": [
"2054141820",
"1992554260",
"2509235963",
""
],
"abstract": [
"As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.",
"One of the most challenging recommendation tasks is recommending to a new, previously unseen user. This is known as the user cold start problem. Assuming certain features or attributes of users are known, one approach for handling new users is to initially model them based on their features. Motivated by an ad targeting application, this paper describes an extreme online recommendation setting where the cold start problem is perpetual. Every user is encountered by the system just once, receives a recommendation, and either consumes or ignores it, registering a binary reward. We introduce One-pass Factorization of Feature Sets, 'OFF-Set', a novel recommendation algorithm based on Latent Factor analysis, which models users by mapping their features to a latent space. OFF-Set is able to model non-linear interactions between pairs of features, and updates its model per each recommendation-reward observation in a pure online fashion. We evaluate OFF-Set against several state of the art baselines, and demonstrate its superiority on real ad-targeting data.",
"Click-through rate (CTR) prediction plays an important role in computational advertising. Models based on degree-2 polynomial mappings and factorization machines (FMs) are widely used for this task. Recently, a variant of FMs, field-aware factorization machines (FFMs), outperforms existing models in some world-wide CTR-prediction competitions. Based on our experiences in winning two of them, in this paper we establish FFMs as an effective method for classifying large sparse data including those from CTR prediction. First, we propose efficient implementations for training FFMs. Then we comprehensively analyze FFMs and compare this approach with competing models. Experiments show that FFMs are very useful for certain classification problems. Finally, we have released a package of FFMs for public use.",
""
]
} |
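
The factorization-machine score described above (global bias, linear terms, and pairwise interactions through latent vectors) can be computed without looping over feature pairs. A minimal sketch using the standard O(kn) identity; parameters are random stand-ins:

```python
import numpy as np

def fm_score(x, w0, w, V):
    """x: (n,) features, w0: bias, w: (n,) linear weights, V: (n, k) factors."""
    linear = w0 + w @ x
    # sum_{i<j} <v_i, v_j> x_i x_j == 0.5 * sum_f ((sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2)
    s = V.T @ x
    s_sq = (V ** 2).T @ (x ** 2)
    return linear + 0.5 * float(np.sum(s ** 2 - s_sq))

rng = np.random.default_rng(3)
n, k = 6, 4
x = rng.integers(0, 2, size=n).astype(float)  # one-hot-style user/item features
print(fm_score(x, 0.1, rng.normal(size=n), rng.normal(size=(n, k))))
```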
1907.03336 | 2955271505 | Over the past 10 years, many recommendation techniques have been based on embedding users and items in latent vector spaces, where the inner product of a (user,item) pair of vectors represents the predicted affinity of the user to the item. A wealth of literature has focused on the various modeling approaches that result in embeddings, and has compared their quality metrics, learning complexity, etc. However, much less attention has been devoted to the issues surrounding productization of an embeddings-based high-throughput, low-latency recommender system, in particular how the system might keep up with the changing embeddings as new models are learnt. This paper describes a reference architecture of a high-throughput, large-scale recommendation service which leverages a search engine as its runtime core. We describe how the search index and the query builder adapt to changes in the embeddings, which often happen at a different cadence than index builds. We provide solutions for both id-based and feature-based embeddings, as well as for batch indexing and incremental indexing setups. The described system is at the core of a Web content discovery service that serves tens of billions of recommendations per day in response to billions of user requests. | More recent techniques use deep neural nets to perform collaborative filtering (CF) @cite_5 and to extend factorization machines @cite_10 . Another neural approach is Facebook's Starspace @cite_8 , which embeds objects of different types into a common vector space. @cite_1 use deep learning to generate embeddings for serving news recommendations. | {
"cite_N": [
"@cite_1",
"@cite_5",
"@cite_10",
"@cite_8"
],
"mid": [
"2742272831",
"2605350416",
"2951001079",
"2962779279"
],
"abstract": [
"It is necessary to understand the content of articles and user preferences to make effective news recommendations. While ID-based methods, such as collaborative filtering and low-rank factorization, are well known for making recommendations, they are not suitable for news recommendations because candidate articles expire quickly and are replaced with new ones within short spans of time. Word-based methods, which are often used in information retrieval settings, are good candidates in terms of system performance but have issues such as their ability to cope with synonyms and orthographical variants and define \"queries\" from users' historical activities. This paper proposes an embedding-based method to use distributed representations in a three step end-to-end manner: (i) start with distributed representations of articles based on a variant of a denoising autoencoder, (ii) generate user representations by using a recurrent neural network (RNN) with browsing histories as input sequences, and (iii) match and list articles for users based on inner-product operations by taking system performance into consideration. The proposed method performed well in an experimental offline evaluation using past access data on Yahoo! JAPAN's homepage. We implemented it on our actual news distribution system based on these experimental results and compared its online performance with a method that was conventionally incorporated into the system. As a result, the click-through rate (CTR) improved by 23 and the total duration improved by 10 , compared with the conventionally incorporated method. Services that incorporated the method we propose are already open to all users and provide recommendations to over ten million individual users per day who make billions of accesses per month.",
"In recent years, deep neural networks have yielded immense success on speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks on recommender systems has received relatively less scrutiny. In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation --- collaborative filtering --- on the basis of implicit feedback. Although some recent work has employed deep learning for recommendation, they primarily used it to model auxiliary information, such as textual descriptions of items and acoustic features of musics. When it comes to model the key factor in collaborative filtering --- the interaction between user and item features, they still resorted to matrix factorization and applied an inner product on the latent features of users and items. By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance.",
"Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide & Deep model from Google, DeepFM has a shared input to its \"wide\" and \"deep\" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.",
""
]
} |
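
The NCF abstract above replaces the inner product of user and item embeddings with a learned interaction function. A hedged forward-pass sketch with random, untrained weights (all sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n_users, n_items, d = 100, 500, 8
P = rng.normal(size=(n_users, d))  # user embedding table
Q = rng.normal(size=(n_items, d))  # item embedding table
W1, b1 = rng.normal(size=(16, 2 * d)), np.zeros(16)
w2, b2 = rng.normal(size=16), 0.0

def ncf_score(u, i):
    h = np.maximum(0.0, W1 @ np.concatenate([P[u], Q[i]]) + b1)  # ReLU layer
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))                  # affinity in (0, 1)

print(round(ncf_score(3, 42), 3))
```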
1907.03395 | 2954414836 | Predicting the future trajectories of multiple interacting agents in a scene has become an increasingly important problem for many different applications ranging from control of autonomous vehicles and social robots to security and surveillance. This problem is compounded by the presence of social interactions between humans and their physical interactions with the scene. While the existing literature has explored some of these cues, it has mainly ignored the multimodal nature of each human's future trajectory. In this paper, we present Social-BiGAT, a graph-based generative adversarial network that generates realistic, multimodal trajectory predictions by better modelling the social interactions of pedestrians in a scene. Our method is based on a graph attention network (GAT) that learns reliable feature representations that encode the social interactions between humans in the scene, and a recurrent encoder-decoder architecture that is trained adversarially to predict, based on the features, the humans' paths. We explicitly account for the multimodal nature of the prediction problem by forming a reversible transformation between each scene and its latent noise vector, as in Bicycle-GAN. We show that our framework achieves state-of-the-art performance compared with several baselines on existing trajectory forecasting benchmarks. | In recent years, due to the rising popularity of autonomous driving systems and social robots, the problem of trajectory forecasting has received significant attention from many researchers in the community. The majority of existing works have focused on the effects of incorporating physical features of the scene into human-space models @cite_25 @cite_39 , as well as on learning how to model social behavior between pedestrians in human-human models @cite_36 @cite_28 . Other works have approached the problem from a generative setting @cite_20 and have jointly modeled these features in one framework @cite_41 . While these works have greatly advanced the field, they have drawbacks that we address by incorporating graph attention networks @cite_12 and image translation networks @cite_32 . | {
"cite_N": [
"@cite_36",
"@cite_28",
"@cite_41",
"@cite_32",
"@cite_39",
"@cite_12",
"@cite_25",
"@cite_20"
],
"mid": [
"2134944993",
"",
"2805102305",
"2963330667",
"",
"2626778328",
"2952024198",
"2794787653"
],
"abstract": [
"In crowded spaces such as city centers or train stations, human mobility looks complex, but is often influenced only by a few causes. We propose to quantitatively study crowded environments by introducing a dataset of 42 million trajectories collected in train stations. Given this dataset, we address the problem of forecasting pedestrians' destinations, a central problem in understanding large-scale crowd mobility. We need to overcome the challenges posed by a limited number of observations (e.g. sparse cameras), and change in pedestrian appearance cues across different cameras. In addition, we often have restrictions in the way pedestrians can move in a scene, encoded as priors over origin and destination (OD) preferences. We propose a new descriptor coined as Social Affinity Maps (SAM) to link broken or unobserved trajectories of individuals in the crowd, while using the OD-prior in our framework. Our experiments show improvement in performance through the use of SAM features and OD prior. To the best of our knowledge, our work is one of the first studies that provides encouraging results towards a better understanding of crowd behavior at the scale of million pedestrians.",
"",
"This paper addresses the problem of path prediction for multiple interacting agents in a scene, which is a crucial step for many autonomous platforms such as self-driving cars and social robots. We present ; an interpretable framework based on Generative Adversarial Network (GAN), which leverages two sources of information, the path history of all the agents in a scene, and the scene context information, using images of the scene. To predict a future path for an agent, both physical and social information must be leveraged. Previous work has not been successful to jointly model physical and social interactions. Our approach blends a social attention mechanism with a physical attention that helps the model to learn where to look in a large scene and extract the most salient parts of the image relevant to the path. Whereas, the social attention component aggregates information across the different agent interactions and extracts the most important trajectory information from the surrounding neighbors. SoPhie also takes advantage of GAN to generates more realistic samples and to capture the uncertain nature of the future paths by modeling its distribution. All these mechanisms enable our approach to predict socially and physically plausible paths for the agents and to achieve state-of-the-art performance on several different trajectory forecasting benchmarks.",
"Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.",
"",
"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.",
"We present an interpretable framework for path prediction that learns scene-specific causations behind agents' behaviors. We exploit two sources of information: the past motion trajectory of the agent of interest and a wide top-down view of the scene. We propose a Clairvoyant Attentive Recurrent Network (CAR-Net) that learns \"where to look\" in the large image when solving the path prediction task. While previous works on trajectory prediction are constrained to either use semantic information or hand-crafted regions centered around the agent, our method has the capacity to select any region within the image, e.g., a far-away curve when predicting the change of speed of vehicles. To study our goal towards learning observable causality behind agents' behaviors, we have built a new dataset made of top view images of hundreds of scenes (e.g., F1 racing circuits) where the vehicles are governed by known specific regions within the images (e.g., upcoming curves). Our algorithm successfully selects these regions, learns navigation patterns that generalize to unseen maps, outperforms previous works in terms of prediction accuracy on publicly available datasets, and provides human-interpretable static scene-specific dependencies.",
"Understanding human motion behavior is critical for autonomous moving platforms (like self-driving cars and social robots) if they are to navigate human-centric environments. This is challenging because human motion is inherently multimodal: given a history of human motion paths, there are many socially plausible ways that people could move in the future. We tackle this problem by combining tools from sequence prediction and generative adversarial networks: a recurrent sequence-to-sequence model observes motion histories and predicts future behavior, using a novel pooling mechanism to aggregate information across people. We predict socially plausible futures by training adversarially against a recurrent discriminator, and encourage diverse predictions with a novel variety loss. Through experiments on several datasets we demonstrate that our approach outperforms prior work in terms of accuracy, variety, collision avoidance, and computational complexity."
]
} |
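
Social-BiGAT's social modelling rests on graph attention: each pedestrian node aggregates its neighbours' features with learned attention weights. A single-layer numpy sketch in the spirit of the GAT formulation (random parameters, illustrative shapes, not the paper's code):

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(H, adj, W, a):
    """H: (N, F) node features, adj: (N, N) 0/1 adjacency with self-loops,
    W: (F, F2) projection, a: (2*F2,) attention parameters."""
    Z = H @ W
    F2 = Z.shape[1]
    s_src, s_dst = Z @ a[:F2], Z @ a[F2:]
    e = leaky_relu(s_src[:, None] + s_dst[None, :])  # e_ij = LeakyReLU(a^T [z_i || z_j])
    e = np.where(adj > 0, e, -1e9)                   # mask non-neighbours
    e = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = e / e.sum(axis=1, keepdims=True)         # row-wise softmax
    return alpha @ Z                                 # attention-weighted aggregation

rng = np.random.default_rng(5)
N, F, F2 = 4, 6, 3
adj = np.ones((N, N))  # fully connected scene of four pedestrians
out = gat_layer(rng.normal(size=(N, F)), adj, rng.normal(size=(F, F2)),
                rng.normal(size=2 * F2))
print(out.shape)  # (4, 3): one aggregated vector per pedestrian
```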
1907.03395 | 2954414836 | Predicting the future trajectories of multiple interacting agents in a scene has become an increasingly important problem for many different applications ranging from control of autonomous vehicles and social robots to security and surveillance. This problem is compounded by the presence of social interactions between humans and their physical interactions with the scene. While the existing literature has explored some of these cues, it has mainly ignored the multimodal nature of each human's future trajectory. In this paper, we present Social-BiGAT, a graph-based generative adversarial network that generates realistic, multimodal trajectory predictions by better modelling the social interactions of pedestrians in a scene. Our method is based on a graph attention network (GAT) that learns reliable feature representations that encode the social interactions between humans in the scene, and a recurrent encoder-decoder architecture that is trained adversarially to predict, based on the features, the humans' paths. We explicitly account for the multimodal nature of the prediction problem by forming a reversible transformation between each scene and its latent noise vector, as in Bicycle-GAN. We show that our framework achieves state-of-the-art performance compared with several baselines on existing trajectory forecasting benchmarks. | The field of image domain translation has gone through several seminal advancements in the past couple of years. The first advancement was made with the pix2pix framework @cite_21 , which enabled translation but was limited by requiring paired training examples. Zhu et al. improved this model with CycleGAN @cite_3 , which was able to learn these domain mappings with unpaired examples from each domain through a cycle-consistency loss. Newer research has focused on learning the multimodality of the output: InfoGAN @cite_18 focuses on maximizing variational mutual information, while BicycleGAN @cite_32 introduces a latent noise encoder and learns a bijection between noise and output. In our model we draw upon the advancements suggested by BicycleGAN to propose a latent space encoder that allows for multimodal pedestrian trajectory generation. | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_32",
"@cite_3"
],
"mid": [
"2434741482",
"2963073614",
"2963330667",
"2962793481"
],
"abstract": [
"This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either.",
"Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach."
]
} |
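The cycle-consistency loss summarized in the related-work passage above fits in a few lines. This is an illustrative PyTorch sketch under stated assumptions (G and F are the two learned domain mappings, passed in as modules), not code from any of the cited papers.

import torch.nn.functional as nnf

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    # Forward cycle: x -> G(x) -> F(G(x)) should reconstruct x;
    # backward cycle: y -> F(y) -> G(F(y)) should reconstruct y.
    return lam * (nnf.l1_loss(F(G(x)), x) + nnf.l1_loss(G(F(y)), y))

In the full CycleGAN objective this term is added to the two adversarial losses; the weight lam simply balances reconstruction against realism.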
1907.03351 | 2953990100 | Most amino acids are encoded by multiple synonymous codons. For an amino acid, some of its synonymous codons are used much more rarely than others. Analyses of positions of such rare codons in protein sequences revealed that rare codons can impact co-translational protein folding and that positions of some rare codons are evolutionarily conserved. Analyses of positions of rare codons in proteins' 3-dimensional structures, which are richer in biochemical information than sequences alone, might further explain the role of rare codons in protein folding. We analyze a protein set recently annotated with codon usage information, considering non-redundant proteins with sufficient structural information. We model the proteins' structures as networks and study potential differences between network positions of amino acids encoded by evolutionarily conserved rare, evolutionarily non-conserved rare, and commonly used codons. In 84% of the proteins, at least one of the three codon categories occupies significantly more or less network-central positions than the other codon categories. Different protein groups showing different codon centrality trends (i.e., different types of relationships between network positions of the three codon categories) are enriched in different biological functions, implying the existence of a link between codon usage, protein folding, and protein function. | The genetic code is redundant, meaning that most amino acids are encoded by more than one codon. Codons that code for the same amino acid are called synonymous codons. For an amino acid, it is usually the case that some of its synonymous codons encode it in the given genome relatively more commonly than the others @cite_22 @cite_1 . Henceforth, intuitively, when we say "common" codon, we mean a synonymous codon that is used frequently, and when we say "rare" codon, we mean a synonymous codon that is used infrequently. Precise definitions depend on which computational model is used to characterize synonymous codons as common or rare. Several such models exist @cite_24 . | {
"cite_N": [
"@cite_24",
"@cite_1",
"@cite_22"
],
"mid": [
"2785468798",
"",
"2070163065"
],
"abstract": [
"The unequal utilization of synonymous codons affects numerous cellular processes including translation rates, protein folding and mRNA degradation. In order to understand the biological impact of variable codon usage bias (CUB) between genes and genomes, it is crucial to be able to accurately measure CUB for a given sequence. A large number of metrics have been developed for this purpose, but there is currently no way of systematically testing the accuracy of individual metrics or knowing whether metrics provide consistent results. This lack of standardization can result in false-positive and false-negative findings if underpowered or inaccurate metrics are applied as tools for discovery. Here, we show that the choice of CUB metric impacts both the significance and measured effect sizes in numerous empirical datasets, raising questions about the generality of findings in published research. To bring about standardization, we developed a novel method to create synthetic protein-coding DNA sequences according to different models of codon usage. We use these benchmark sequences to identify the most accurate and robust metrics with regard to sequence length, GC content and amino acid heterogeneity. Finally, we show how our benchmark can aid the development of new metrics by providing feedback on its performance compared to the state of the art.",
"",
"Observed patterns of synonymous codon usage are explained in terms of the joint effects of mutation, selection, and random drift. Examination of the codon usage in 165Escherichia coli genes reveals a consistent trend of increasing bias with increasing gene expression level. Selection on codon usage appears to be unidirectional, so that the pattern seen in lowly expressed genes is best explained in terms of an absence of strong selection. A measure of directional synonymous-codon usage bias, the Codon Adaptation Index, has been developed. In enterobacteria, rates of synonymous substitution are seen to vary greatly among genes, and genes with a high codon bias evolve more slowly. A theoretical study shows that the patterns of extreme codon bias observed for someE. coli (and yeast) genes can be generated by rather small selective differences. The relative plausibilities of various theoretical models for explaining nonrandom codon usage are discussed."
]
} |
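One concrete instance of the computational models mentioned above is relative synonymous codon usage (RSCU): the observed count of a codon divided by the count expected if all synonyms for that amino acid were used equally, so values well below 1 mark "rare" codons. The sketch below is an illustration with a deliberately truncated codon table, not any specific published model.

# Minimal RSCU sketch: RSCU = observed codon count / mean count of its synonyms.
from collections import Counter

# Truncated synonym table for illustration; a real table covers all amino acids.
SYNONYMS = {
    "F": ["TTT", "TTC"],                              # phenylalanine
    "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],  # leucine
}

def rscu(coding_sequence):
    codons = [coding_sequence[i:i + 3] for i in range(0, len(coding_sequence) - 2, 3)]
    counts = Counter(codons)
    scores = {}
    for amino_acid, synonyms in SYNONYMS.items():
        total = sum(counts[c] for c in synonyms)
        if total == 0:
            continue                                  # amino acid absent from this gene
        expected = total / len(synonyms)              # equal-usage expectation
        for c in synonyms:
            scores[c] = counts[c] / expected
    return scores                                     # << 1 suggests "rare", >> 1 "common"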
1907.03351 | 2953990100 | Most amino acids are encoded by multiple synonymous codons. For an amino acid, some of its synonymous codons are used much more rarely than others. Analyses of positions of such rare codons in protein sequences revealed that rare codons can impact co-translational protein folding and that positions of some rare codons are evolutionarily conserved. Analyses of positions of rare codons in proteins' 3-dimensional structures, which are richer in biochemical information than sequences alone, might further explain the role of rare codons in protein folding. We analyze a protein set recently annotated with codon usage information, considering non-redundant proteins with sufficient structural information. We model the proteins' structures as networks and study potential differences between network positions of amino acids encoded by evolutionarily conserved rare, evolutionarily non-conserved rare, and commonly used codons. In 84% of the proteins, at least one of the three codon categories occupies significantly more or less network-central positions than the other codon categories. Different protein groups showing different codon centrality trends (i.e., different types of relationships between network positions of the three codon categories) are enriched in different biological functions, implying the existence of a link between codon usage, protein folding, and protein function. | Rare codons are associated with lower tRNA levels, expression levels, and translational accuracy @cite_12 @cite_18 @cite_5 . As a result, it has been hypothesized that since common codons are translated efficiently, they are more likely to be under selective pressure to occupy important regions in protein structures. This has been supported by several prior efforts that have shown evolutionary conservation of optimal (i.e., common) codons in structurally important regions @cite_27 @cite_7 @cite_25 , while non-optimal (i.e., rare) codons tend to occur in structurally disordered regions of a protein structure @cite_10 . | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_27",
"@cite_5",
"@cite_10",
"@cite_25",
"@cite_12"
],
"mid": [
"2018590530",
"",
"2042548693",
"2141997506",
"1593377767",
"",
"2164618276"
],
"abstract": [
"A simple, effective measure of synonymous codon usage bias, the Codon Adaptation Index, is detailed. The index uses a reference set of highly expressed genes from a species to assess the relative merits of each codon, and a score for a gene is calculated from the frequency of use of all codons in that gene. The index assesses the extent to which selection has been effective in moulding the pattern of codon usage. In that respect it is useful for predicting the level of expression of a gene, for assessing the adaptation of viral genes to their hosts, and for making comparisons of codon usage in different organisms. The index may also give an approximate indication of the likely success of heterologous gene expression.",
"",
"The mistranslation-induced protein misfolding hypothesis predicts that selection should prefer high-fidelity codons at sites at which translation errors are structurally disruptive and lead to protein misfolding and aggregation. To test this hypothesis, we analyzed the relationship between codon usage bias and protein structure in the genomes of four model organisms, Escherichia coli, yeast, fly, and mouse. Using both the Mantel–Haenszel procedure, which applies to categorical data, and a newly developed association test for continuous variables, we find that translationally optimal codons associate with buried residues and also with residues at sites where mutations lead to large changes in free energy (ΔΔG). In each species, only a subset of all amino acids show this signal, but most amino acids show the signal in at least one species. By repeating the analysis on a reduced data set that excludes interdomain linkers, we show that our results are not caused by an association of rare codons with solvent-accessible linker regions. Finally, we find that our results depend weakly on expression level; the association between optimal codons and buried sites exists at all expression levels, but increases in strength as expression level increases.",
"Estimates of missense error rates (misreading) during protein synthesis vary from 10−3 to 10−4 per codon. The experiments reporting these rates have measured several distinct errors using several methods and reporter systems. Variation in reported rates may reflect real differences in rates among the errors tested or in sensitivity of the reporter systems. To develop a more accurate understanding of the range of error rates, we developed a system to quantify the frequency of every possible misreading error at a defined codon in Escherichia coli. This system uses an essential lysine in the active site of firefly luciferase. Mutations in Lys529 result in up to a 1600-fold reduction in activity, but the phenotype varies with amino acid. We hypothesized that residual activity of some of the mutant genes might result from misreading of the mutant codons by tRNALys UUUU, the cognate tRNA for the lysine codons, AAA and AAG. Our data validate this hypothesis and reveal details about relative missense error rates of near-cognate codons. The error rates in E. coli do, in fact, vary widely. One source of variation is the effect of competition by cognate tRNAs for the mutant codons; higher error frequencies result from lower competition from low-abundance tRNAs. We also used the system to study the effect of ribosomal protein mutations known to affect error rates and the effect of error-inducing antibiotics, finding that they affect misreading on only a subset of near-cognate codons and that their effect may be less general than previously thought.",
"Summary Synonymous codons are not used with equal frequencies in most genomes. Codon usage has been proposed to play a role in regulating translation kinetics and co-translational protein folding. The relationship between codon usage and protein structures and the in vivo role of codon usage in eukaryotic protein folding is not clear. Here, we show that there is a strong codon usage bias in the filamentous fungus Neurospora. Importantly, we found genome-wide correlations between codon choices and predicted protein secondary structures: Nonoptimal codons are preferentially used in intrinsically disordered regions, and more optimal codons are used in structured domains. The functional importance of such correlations in vivo was confirmed by structure-based codon manipulation of codons in the Neurospora circadian clock gene frequency (frq). The codon optimization of the predicted disordered, but not well-structured regions of FRQ impairs clock function and altered FRQ structures. Furthermore, the correlations between codon usage and protein disorder tendency are conserved in other eukaryotes. Together, these results suggest that codon choices and protein structures co-evolve to ensure proper protein folding in eukaryotic organisms.",
"",
"Choices of synonymous codons in unicellular organisms are here reviewed, and differences in synonymous codon usages between Escherichia coli and the yeast Saccharomyces cerevisiae are attributed to differences in the actual populations of isoaccepting tRNAs. There exists a strong positive correlation between codon usage and tRNA content in both organisms, and the extent of this correlation relates to the protein production levels of individual genes. Codon-choice patterns are believed to have been well conserved during the course of evolution. Examination of silent substitutions and tRNA populations in Enterobacteriaceae revealed that the evolutionary constraint imposed by tRNA content on codon usage decelerated rather than accelerated the silent-substitution rate, at least insofar as pairs of taxonomically related organisms were examined. Codon-choice patterns of multicellular organisms are briefly reviewed, and diversity in G+C percentage at the third position of codons in vertebrate genes-as well as a possible causative factor in the production of this diversity-is discussed."
]
} |
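The Codon Adaptation Index described in the first abstract above scores a gene as the geometric mean of each codon's relative adaptiveness w, where w is the codon's frequency divided by that of its most-used synonym in a reference set of highly expressed genes. A minimal sketch, assuming the w table has been precomputed:

# Codon Adaptation Index sketch: geometric mean of relative adaptiveness values.
# Assumption: W is precomputed from a reference set of highly expressed genes,
# with W[codon] = freq(codon) / freq(most frequent synonymous codon).
import math

def cai(codons, W):
    ws = [W[c] for c in codons if c in W]   # codons missing from W (e.g., ATG, TGG) are skipped
    if not ws:
        raise ValueError("no scorable codons")
    return math.exp(sum(math.log(w) for w in ws) / len(ws))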
1907.03305 | 2953604211 | This paper presents a real-time control system for surface inspection using multiple unmanned aerial vehicles (UAVs). The UAVs are coordinated in a specific formation to collect data on the inspected objects. The communication platform for data transmission is based on the Internet of things (IoT). In the proposed architecture, the UAV formation is established via angle-encoded particle swarm optimization to generate an inspecting path and redistribute it to each UAV, where communication links are embedded with an IoT board for network and data processing capabilities. The data collected are transmitted in real time through the network to remote computational units. To detect potential damage or defects, an online image-processing technique is proposed and implemented based on histograms. Extensive simulation, experiments, and comparisons have been conducted to verify the validity and performance of the proposed system. | For automatic inspection, sophisticated systems featuring path planning and path-following control algorithms have been proposed. In @cite_36 , an aerial robotic system for contact-based surface inspection has been introduced, using not only optimal trajectory tracking but also accurate force control techniques. In @cite_33 , an iterative viewpoint re-sampling path planning algorithm was proposed for the inspection of complex 3D structures. The inspection of outdoor structures under windy conditions was addressed in @cite_37 by using viewpoint selection and optimal-time route algorithms. In addition, a UAV-based inspection system for wind turbines was developed in @cite_46 with the capability of creating smooth and collision-free flight paths based on the data recorded from LIDAR sensors. These studies, however, focus more on data collection than on defect detection. | {
"cite_N": [
"@cite_36",
"@cite_37",
"@cite_46",
"@cite_33"
],
"mid": [
"2219720796",
"2011272678",
"2516781553",
"1607046631"
],
"abstract": [
"The challenge of aerial robotic contact-based inspection is the driving motivation of this paper. The problem is approached on both levels of control and path-planning by introducing algorithms and control laws that ensure optimal inspection through contact and controlled aerial robotic physical interaction. Regarding the flight and physical interaction stabilization, a hybrid model predictive control framework is proposed, based on which a typical quadrotor becomes capable of stable and active interaction, accurate trajectory tracking on environmental surfaces as well as force control. Convex optimization techniques enabled the explicit computation of such a controller which accounts for the dynamics in free-flight as well as during physical interaction, ensures the global stability of the hybrid system and provides optimal responses while respecting the physical limitations of the vehicle. Further augmentation of this scheme, allowed the incorporation of a last-resort obstacle avoidance mechanism at the control level. Relying on such a control law, a contact-based inspection planner was developed which computes the optimal route within a given set of inspection points while avoiding any obstacles or other no-fly zones on the environmental surface. Extensive experimental studies that included complex \"aerial-writing\" tasks, interaction with non-planar and textured surfaces, execution of multiple inspection operations and obstacle avoidance maneuvers, indicate the efficiency of the proposed methods and the potential capabilities of aerial robotic inspection through contact.",
"In this paper, we consider the structure inspection problem using a miniature unmanned aerial vehicle (UAV). The influence of the wind on the UAV behavior and onboard energy limitations are important parameters that must be taken into account in the structure inspection problem. To tackle these problems, we derive three methods to inspect a structure. First, we develop a Zermelo-Traveling Salesman Problem (TSP) method to compute the optimal route to inspect a simple virtual structure. Second, we derive a method that combines meshing techniques with the Zermelo-TSP method. In this approach, the inspection coordinates for the interest points are obtained automatically by means of a meshing algorithm, then, the Zermelo-TSP method is used to compute the time-optimal route to inspect all the interest points in minimal time. Finally, we derive a method for structure inspection based on the Zermelo-Vehicle Routing Problem (VRP). These methods have been validated in a simulated environment.",
"A concept for a multicopter unmanned aerial vehicle (UAV) automatically performing inspection flights at a wind turbine is proposed. Key aspects of the concept are (1) a priori 3D mapping of the plant and (2) spline-based flight path planning as well as (3) a collision avoidance and distance control system. A quadrotor UAV prototype and its dynamical model are presented. Validation of the different aspects is carried out in simulation and partially in indoor tests using Robot Operating System (ROS). Existence of a 3D map is an essential precondition for path planning and collision-free flight. A brief initial flight preceding the actual inspection with a 2D LiDAR sensor yields a point cloud of the plant which is used for 3D mapping. This map is efficiently generated and represented using octrees, a hierarchical data structure that can be used for 3D maps. Subsequently a smooth and collision-free flight path is generated using splines. For redundancy's sake navigation tasks not only rely on GPS but also on the LiDAR sensor mentioned before. The sensor allows for continuous estimation of the distance between multicopter and wind turbine. A distance control algorithm guarantees collision-free flight.",
"Within this paper, a new fast algorithm that provides efficient solutions to the problem of inspection path planning for complex 3D structures is presented. The algorithm assumes a triangular mesh representation of the structure and employs an alternating two-step optimization paradigm to find good viewpoints that together provide full coverage and a connecting path that has low cost. In every iteration, the viewpoints are chosen such that the connection cost is reduced and, subsequently, the tour is optimized. Vehicle and sensor limitations are respected within both steps. Sample implementations are provided for rotorcraft and fixed-wing unmanned aerial systems. The resulting algorithm characteristics are evaluated using simulation studies as well as multiple real-world experimental test-cases with both vehicle types."
]
} |
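A shared ingredient of several planners above is ordering a set of inspection viewpoints into a short route, which is a TSP instance at its core. The greedy nearest-neighbor ordering below illustrates only that step; the cited works use proper TSP/VRP solvers and additionally account for wind and vehicle dynamics.

# Greedy nearest-neighbor ordering of inspection viewpoints (illustrative only).
import math

def order_viewpoints(points, start=0):
    remaining = set(range(len(points))) - {start}
    route, cur = [start], start
    while remaining:
        nxt = min(remaining, key=lambda j: math.dist(points[cur], points[j]))
        route.append(nxt)      # always hop to the closest unvisited viewpoint
        remaining.remove(nxt)
        cur = nxt
    return route

print(order_viewpoints([(0, 0), (5, 1), (1, 1), (4, 4)]))  # -> [0, 2, 1, 3]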
1907.03305 | 2953604211 | This paper presents a real-time control system for surface inspection using multiple unmanned aerial vehicles (UAVs). The UAVs are coordinated in a specific formation to collect data on the inspected objects. The communication platform for data transmission is based on the Internet of things (IoT). In the proposed architecture, the UAV formation is established via angle-encoded particle swarm optimization to generate an inspecting path and redistribute it to each UAV, where communication links are embedded with an IoT board for network and data processing capabilities. The data collected are transmitted in real time through the network to remote computational units. To detect potential damage or defects, an online image-processing technique is proposed and implemented based on histograms. Extensive simulation, experiments, and comparisons have been conducted to verify the validity and performance of the proposed system. | In terms of surface inspection, several studies have been conducted for defect detection tasks. In @cite_28 , a fast and effective defect detection method has been proposed using size-based estimation with data obtained from both color and infrared cameras. In @cite_17 , Haar-like features and a cascading classifier were applied to UAV-taken images to identify cracks on wind turbine blade surfaces. Self-organizing map optimization was introduced in @cite_0 as the basis of an image recognition and processing model for crack detection, reducing human involvement. On another note, @cite_15 assessed the quality and feasibility of images taken by UAVs for defect detection and discussed methods to improve the image quality. | {
"cite_N": [
"@cite_28",
"@cite_15",
"@cite_0",
"@cite_17"
],
"mid": [
"2520427900",
"2019949051",
"2527787519",
"2597358295"
],
"abstract": [
"Abstract The rapid, cost-effective, and non-disruptive assessment of bridge deck condition has emerged as a critical challenge for bridge maintenance. Deck delaminations are a common form of deterioration which has been assessed, historically, through chain-drag techniques and more recently through nondestructive evaluation (NDE) including both acoustic and optical methods. Although NDE methods have proven to be capable to provide information related to the existence of delaminations in bridge decks, many of them are time-consuming, labor-intensive, expensive, while they further require significant disruptions to traffic. In this context, this article demonstrates the capability of unmanned aerial vehicles (UAVs) equipped with both color and infrared cameras to rapidly and effectively detect and estimate the size of regions where subsurface delaminations exist. To achieve this goal, a novel image post-processing algorithm was developed to use such multispectral imagery obtained by a UAV. To evaluate the capabilities of the presented approach, a bridge deck mockup with pre-manufactured defects was tested. The major advantages of the presented approach include its capability to rapidly identify locations where delaminations exist, as well as its potential to automate bridge-deck related damage detection procedures and further guide investigations using other higher accuracy and ground-based approaches.",
"This paper discusses the application of Unmanned Aerial Vehicles (UAV) for visual inspection and damage detection on civil structures. The quality of photos and videos taken by using such airborne vehicles is strongly influenced by numerous parameters such as lighting conditions, distance to the object and vehicle motion induced by environmental effects. Whilst such devices feature highly sophisticated sensors and control algorithms, specifically the effects of fluctuating wind speeds and directions affect the vehicle motion. The nature of vehicle movements during photo and video acquisition in turn affect the quality of the data and hence the degree to which damages can be identified. This paper discusses the properties of such flight systems, the factors influencing their movements and the resulting photo quality. Based on the processed data logged by the high precision sensors on the UAV the influences are studied and a method is shown by which the damage assessment quality may be quantified.",
"Abstract The current deterioration inspection method for bridges heavily depends on human recognition, which is time consuming and subjective. This research adopts Self Organizing Map Optimization (SOMO) integrated with image processing techniques to develop a crack recognition model for bridge inspection. Bridge crack data from 216 images was collected from the database of the Taiwan Bridge Management System (TBMS), which provides detailed information on the condition of bridges. This study selected 40 out of 216 images to be used as training and testing datasets. A case study on the developed model implementation is also conducted in the severely damage Hsichou Bridge in Taiwan. The recognition results achieved high accuracy rates of 89 for crack recognition and 91 for non-crack recognition. This model demonstrates the feasibility of accurate computerized recognition for crack inspection in bridge management.",
"In this paper, a data-driven framework is proposed to automatically detect wind turbine blade surface cracks based on images taken by unmanned aerial vehicles (UAVs). Haar-like features are applied to depict crack regions and train a cascading classifier for detecting cracks. Two sets of Haar-like features, the original and extended Haar-like features, are utilized. Based on selected Haar-like features, an extended cascading classifier is developed to perform the crack detection through stage classifiers selected from a set of base models, the LogitBoost, Decision Tree, and Support Vector Machine. In the detection, a scalable scanning window is applied to locate crack regions based on developed cascading classifiers using the extended feature set. The effectiveness of the proposed data-driven crack detection framework is validated by both UAV-taken images collected from a commercial wind farm and artificially generated. The extended cascading classifier is compared with a cascading classifier developed by the LogitBoost only to show its advantages in the image-based crack detection. A computational study is performed to further demonstrate the success of the proposed framework in identifying the number of cracks and locating them in original images."
]
} |
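The histogram-based detection idea can be illustrated by flagging image patches whose gray-level histogram deviates strongly from a reference "clean surface" histogram. This NumPy sketch is a deliberate simplification under assumed inputs (an 8-bit grayscale image array), not the paper's exact algorithm.

import numpy as np

def gray_histogram(patch, bins=32):
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)                          # normalized gray-level histogram

def flag_defects(image, reference_hist, patch=32, thresh=0.5):
    hits = []
    for y in range(0, image.shape[0] - patch + 1, patch):
        for x in range(0, image.shape[1] - patch + 1, patch):
            h = gray_histogram(image[y:y + patch, x:x + patch])
            if np.abs(h - reference_hist).sum() > thresh:  # L1 histogram distance
                hits.append((y, x))                        # flag this patch as suspect
    return hits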
1907.03196 | 2954869531 | This paper presents a novel deep neural network (DNN) for multimodal fusion of audio, video and text modalities for emotion recognition. The proposed DNN architecture has independent and shared layers which aim to learn the representation for each modality, as well as the best combined representation to achieve the best prediction. Experimental results on the AVEC Sentiment Analysis in the Wild dataset indicate that the proposed DNN can achieve a higher level of Concordance Correlation Coefficient (CCC) than other state-of-the-art systems that perform early fusion of modalities at the feature level (i.e., concatenation) and late fusion at the score level (i.e., weighted average). The proposed DNN has achieved CCCs of 0.606, 0.534, and 0.170 on the development partition of the dataset for predicting arousal, valence and liking, respectively. | Over the past decade, facial expression recognition (ER) has been a topic of significant interest. Many ER techniques have been proposed to automatically detect the seven universally recognizable types of emotions -- joy, surprise, anger, fear, disgust, sadness and neutral -- from a single still facial image @cite_10 @cite_15 @cite_2 @cite_22 @cite_13 @cite_1 . These static techniques for facial ER tend to follow either appearance-based or geometric-based approaches. More recently, dynamic techniques for facial ER have emerged as a promising approach to improve performance, where the expression type is estimated from a sequence of images or video frames captured during the physical facial expression process of a subject @cite_8 . This makes it possible to extract not only facial appearance information in the spatial domain, but also its evolution in the temporal domain. Techniques are either shape-based, appearance-based or motion-based @cite_9 . | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_10",
"@cite_9",
"@cite_1",
"@cite_2",
"@cite_15",
"@cite_13"
],
"mid": [
"2947369712",
"2110885456",
"2556027894",
"2302333535",
"2161362162",
"2021890937",
"1669230375",
"2066332159"
],
"abstract": [
"Facial expression recognition is a major problem in the domain of artificial intelligence. One of the best ways to solve this problem is the use of convolutional neural networks (CNNs). However, a large amount of data is required to train properly these networks but most of the datasets available for facial expression recognition are relatively small. A common way to circumvent the lack of data is to use CNNs trained on large datasets of different domains and fine-tuning the layers of such networks to the target domain. However, the fine-tuning process does not preserve the memory integrity as CNNs have the tendency to forget patterns they have learned. In this paper, we evaluate different strategies of fine-tuning a CNN with the aim of assessing the memory integrity of such strategies in a cross-dataset scenario. A CNN pre-trained on a source dataset is used as the baseline and four adaptation strategies have been evaluated: fine-tuning its fully connected layers; fine-tuning its last convolutional layer and its fully connected layers; retraining the CNN on a target dataset; and the fusion of the source and target datasets and retraining the CNN. Experimental results on four datasets have shown that the fusion of the source and the target datasets provides the best trade-off between accuracy and memory integrity.",
"Automatic analysis of human facial expression is a challenging problem with many applications. Most of the existing automated systems for facial expression analysis attempt to recognize a few prototypic emotional expressions, such as anger and happiness. Instead of representing another approach to machine analysis of prototypic facial expressions of emotion, the method presented in this paper attempts to handle a large range of human facial behavior by recognizing facial muscle actions that produce expressions. Virtually all of the existing vision systems for facial muscle action detection deal only with frontal-view face images and cannot handle temporal dynamics of facial actions. In this paper, we present a system for automatic recognition of facial action units (AUs) and their temporal models from long, profile-view face image sequences. We exploit particle filtering to track 15 facial points in an input face-profile sequence, and we introduce facial-action-dynamics recognition from continuous video input using temporal rules. The algorithm performs both automatic segmentation of an input video into facial expressions pictured and recognition of temporal segments (i.e., onset, apex, offset) of 27 AUs occurring alone or in a combination in the input face-profile video. A recognition rate of 87 is achieved.",
"",
"In this paper, a new dynamic facial expression recognition method is proposed. Dynamic facial expression recognition is formulated as a longitudinal groupwise registration problem. The main contributions of this method lie in the following aspects: 1) subject-specific facial feature movements of different expressions are described by a diffeomorphic growth model; 2) salient longitudinal facial expression atlas is built for each expression by a sparse groupwise image registration method, which can describe the overall facial feature changes among the whole population and can suppress the bias due to large intersubject facial variations; and 3) both the image appearance information in spatial domain and topological evolution information in temporal domain are used to guide recognition by a sparse representation method. The proposed framework has been extensively evaluated on five databases for different applications: the extended Cohn-Kanade, MMI, FERA, and AFEW databases for dynamic facial expression recognition, and UNBC-McMaster database for spontaneous pain expression monitoring. This framework is also compared with several state-of-the-art dynamic facial expression recognition methods. The experimental results demonstrate that the recognition rates of the new method are consistently higher than other methods under comparison.",
"This paper presents a novel method for facial expression classification that employs the combination of two different feature sets in an ensemble approach. A pool of base classifiers is created using two feature sets: Gabor filters and local binary patterns (LBP). Then a multi-objective genetic algorithm is used to search for the best ensemble using as objective functions the accuracy and the size of the ensemble. The experimental results on two databases have shown the efficiency of the proposed strategy by finding powerful ensembles, which improves the recognition rates between 5 and 10 .",
"Although it shows enormous potential as a feature extractor, 2D principal component analysis produces numerous coefficients. Using a feature-selection algorithm based on a multiobjective genetic algorithm to analyze and discard irrelevant coefficients offers a solution that considerably reduces the number of coefficients, while also improving recognition rates.",
"Abstract Automatic facial expression recognition system has many applications including, but not limited to, human behavior understanding, detection of mental disorders, and synthetic human expressions. Two popular methods utilized mostly in the literature for the automatic FER systems are based on geometry and appearance. Even though there is lots of research using static images, the research is still going on for the development of new methods which would be quiet easy in computation and would have less memory usage as compared to previous methods. This paper presents a quick survey of facial expression recognition. A comparative study is also carried out using various feature extraction techniques on JAFFE dataset.",
"This paper presents a novel method for facial expression recognition that employs the combination of two different feature sets in an ensemble approach. A pool of base support vector machine classifiers is created using Gabor filters and Local Binary Patterns. Then a multi-objective genetic algorithm is used to search for the best ensemble using as objective functions the minimization of both the error rate and the size of the ensemble. Experimental results on JAFFE and Cohn-Kanade databases have shown the efficiency of the proposed strategy in finding powerful ensembles, which improves the recognition rates between 5 and 10 over conventional approaches that employ single feature sets and single classifiers."
]
} |
1907.03196 | 2954869531 | This paper presents a novel deep neural network (DNN) for multimodal fusion of audio, video and text modalities for emotion recognition. The proposed DNN architecture has independent and shared layers which aim to learn the representation for each modality, as well as the best combined representation to achieve the best prediction. Experimental results on the AVEC Sentiment Analysis in the Wild dataset indicate that the proposed DNN can achieve a higher level of Concordance Correlation Coefficient (CCC) than other state-of-the-art systems that perform early fusion of modalities at the feature level (i.e., concatenation) and late fusion at the score level (i.e., weighted average). The proposed DNN has achieved CCCs of 0.606, 0.534, and 0.170 on the development partition of the dataset for predicting arousal, valence and liking, respectively. | Shape-based methods like the constrained local model (CLM) describe facial component shapes based on salient anchor points. The movement of those landmarks provides discriminant information to guide the recognition process. Appearance-based methods like LBP-TOP extract image intensity or other texture features from facial images to characterize facial expressions. Finally, motion-based methods like free-form deformation model the spatial-temporal evolution of facial expressions, and require reliable face alignment methods. For instance, Guo et al. @cite_9 used atlas construction and sparse representation to extract spatial and temporal information from a dynamic expression. Although the computational complexity is higher, including temporal information along with spatial information achieved greater recognition accuracy than static-image FER. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2302333535"
],
"abstract": [
"In this paper, a new dynamic facial expression recognition method is proposed. Dynamic facial expression recognition is formulated as a longitudinal groupwise registration problem. The main contributions of this method lie in the following aspects: 1) subject-specific facial feature movements of different expressions are described by a diffeomorphic growth model; 2) salient longitudinal facial expression atlas is built for each expression by a sparse groupwise image registration method, which can describe the overall facial feature changes among the whole population and can suppress the bias due to large intersubject facial variations; and 3) both the image appearance information in spatial domain and topological evolution information in temporal domain are used to guide recognition by a sparse representation method. The proposed framework has been extensively evaluated on five databases for different applications: the extended Cohn-Kanade, MMI, FERA, and AFEW databases for dynamic facial expression recognition, and UNBC-McMaster database for spontaneous pain expression monitoring. This framework is also compared with several state-of-the-art dynamic facial expression recognition methods. The experimental results demonstrate that the recognition rates of the new method are consistently higher than other methods under comparison."
]
} |
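The basic LBP operator underlying the LBP-TOP descriptor mentioned above thresholds each pixel's 8 neighbors against the center and packs the results into one byte; LBP-TOP simply repeats this on the XY, XT and YT planes of a video volume. A minimal sketch of the 3x3 code:

# Basic 3x3 LBP code for one pixel: threshold the 8 neighbors against the center
# value and pack the bits clockwise into a single byte (0..255 texture code).
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, y, x):
    center = img[y][x]
    code = 0
    for bit, (dy, dx) in enumerate(OFFSETS):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code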
1907.03196 | 2954869531 | This paper presents a novel deep neural network (DNN) for multimodal fusion of audio, video and text modalities for emotion recognition. The proposed DNN architecture has independent and shared layers which aim to learn the representation for each modality, as well as the best combined representation to achieve the best prediction. Experimental results on the AVEC Sentiment Analysis in the Wild dataset indicate that the proposed DNN can achieve a higher level of Concordance Correlation Coefficient (CCC) than other state-of-the-art systems that perform early fusion of modalities at the feature level (i.e., concatenation) and late fusion at the score level (i.e., weighted average). The proposed DNN has achieved CCCs of 0.606, 0.534, and 0.170 on the development partition of the dataset for predicting arousal, valence and liking, respectively. | This paper is focused on exploiting deep learning architectures to produce accurate mixtures of affect recognition systems. For instance, Kim et al. @cite_21 proposed a hierarchical 3-level CNN architecture to combine multi-modal sources. DNNs are used to learn a sequence of transformations with the specific objective of obtaining features that will be combined in one system. Since feature-level and score-level fusion do not necessarily provide a high level of accuracy, a hybrid approach is proposed in which features and classifiers are learned such that they are optimized for multi-modal fusion. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2294427751"
],
"abstract": [
"We present a pattern recognition framework to improve committee machines of deep convolutional neural networks (deep CNNs) and its application to static facial expression recognition in the wild (SFEW). In order to generate enough diversity of decisions, we trained multiple deep CNNs by varying network architectures, input normalization, and weight initialization as well as by adopting several learning strategies to use large external databases. Moreover, with these deep models, we formed hierarchical committees using the validation-accuracy-based exponentially-weighted average (VA-Expo-WA) rule. Through extensive experiments, the great strengths of our committee machines were demonstrated in both structural and decisional ways. On the SFEW2.0 dataset released for the 3rd Emotion Recognition in the Wild (EmotiW) sub-challenge, a test accuracy of 57.3 was obtained from the best single deep CNN, while the single-level committees yielded 58.3 and 60.5 with the simple average rule and with the VA-Expo-WA rule, respectively. Our final submission based on the 3-level hierarchy using the VA-Expo-WA achieved 61.6 , significantly higher than the SFEW baseline of 39.1 ."
]
} |
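The Concordance Correlation Coefficient reported throughout this paper's abstract has a closed form, CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2), which a few NumPy lines reproduce directly:

# Concordance Correlation Coefficient between predictions x and gold labels y.
import numpy as np

def ccc(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))   # population covariance
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)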
1907.03331 | 2954943135 | Cryptocurrencies, which promise to become a global means of money transactions, are typically implemented with blockchain protocols. Blockchains utilize a variety of consensus algorithms, and their performance is advancing rapidly. However, a bottleneck remains: each node processes all transactions in the system. We present Ostraka, a blockchain node architecture that scales linearly with the available resources. Ostraka shards (parallelizes) the nodes themselves, embracing the fact that major actors have the resources to deploy multi-server nodes. We show that, in common scenarios, previous sharding solutions have the same property, requiring most node operators' resources to process almost all blockchain transactions, while reducing system security. We prove that replacing a unified node with a sharded Ostraka node does not affect the security of the underlying consensus mechanism and that Ostraka does not expose additional vulnerabilities due to its sharding. We identify a partial denial-of-service attack that is exposed by previous sharding solutions. We evaluate block propagation and processing analytically and experimentally in various settings. Ostraka achieves linear scaling when the network allows it, unlike previous systems that require costly coordination for transactions that affect multiple shards. In our experiments, Ostraka nodes reach a rate of nearly 400,000 transactions per second with 64 shards, opening the door to truly high-frequency blockchains. | * Client partitioning In Aspen @cite_58 and SplitScale @cite_60 , a leader is chosen using proof-of-work, as in Bitcoin. The elected leader creates blocks on several concurrent chains. Clients can track a portion of the state, one sub-chain, but overall performance is not affected, in contrast to Ostraka. | {
"cite_N": [
"@cite_58",
"@cite_60"
],
"mid": [
"2551240797",
"2949975911"
],
"abstract": [
"The rise of blockchain-based cryptocurrencies has led to an explosion of services using distributed ledgers as their underlying infrastructure. However, due to inherently single-service oriented blockchain protocols, such services can bloat the existing ledgers, fail to provide sufficient security, or completely forego the property of trustless auditability. Security concerns, trust restrictions, and scalability limits regarding the resource requirements of users hamper the sustainable development of loosely-coupled services on blockchains. This paper introduces Aspen, a sharded blockchain protocol designed to securely scale with increasing number of services. Aspen shares the same trust model as Bitcoin in a peer-to-peer network that is prone to extreme churn containing Byzantine participants. It enables introduction of new services without compromising the security, leveraging the trust assumptions, or flooding users with irrelevant messages.",
"The Bitcoin protocol is a significant milestone in the history of money. However, its adoption is currently constrained by the transaction limits of the system. As the chief problem of blockchain technology, the scaling issue has attracted many valuable solutions both on-chain and off-chain. In this paper, our goal is to explore the notion of unspent transaction outputs (UTXOs) to propose an augmented Bitcoin protocol that can scale gracefully. Our proposal aims to increase the transaction throughput by partitioning the UTXO space and splitting the blockchain. In addition, a new type of Bitcoin node is introduced to preserve the capability to run validating nodes in low-bandwidth environments, despite the increased transaction throughput."
]
} |
1907.03331 | 2954943135 | Cryptocurrencies, which promise to become a global means of money transactions, are typically implemented with blockchain protocols. Blockchains utilize a variety of consensus algorithms, and their performance is advancing rapidly. However, a bottleneck remains: each node processes all transactions in the system. We present Ostraka, a blockchain node architecture that scales linearly with the available resources. Ostraka shards (parallelizes) the nodes themselves, embracing the fact that major actors have the resources to deploy multi-server nodes. We show that, in common scenarios, previous sharding solutions have the same property, requiring most node operators' resources to process almost all blockchain transactions, while reducing system security. We prove that replacing a unified node with a sharded Ostraka node does not affect the security of the underlying consensus mechanism and that Ostraka does not expose additional vulnerabilities due to its sharding. We identify a partial denial-of-service attack that is exposed by previous sharding solutions. We evaluate block propagation and processing analytically and experimentally in various settings. Ostraka achieves linear scaling when the network allows it, unlike previous systems that require costly coordination for transactions that affect multiple shards. In our experiments, Ostraka nodes reach a rate of nearly 400,000 transactions per second with 64 shards, opening the door to truly high-frequency blockchains. | * Storage sharding Dietcoin @cite_47 shards the UTXO set of a blockchain and uses Merkle trees of UTXO-shards for simplified payment verification (SPV). Clients can efficiently obtain proofs of inclusion for their transactions by obtaining only the relevant UTXO-shard. @cite_10 reduce node storage complexity by using lower-degree replication of the blockchain itself, storing each block in a small number of nodes and locating blocks with consistent hashing. In contrast, Ostraka improves on the block validation process. In both solutions, miners keep the full state and perform full validation of each block; therefore, these techniques can be used with Ostraka nodes. | {
"cite_N": [
"@cite_47",
"@cite_10"
],
"mid": [
"2795364690",
"2902571157"
],
"abstract": [
"Blockchains have a storage scalability issue. Their size is not bounded and they grow indefinitely as time passes. As of August 2017, the Bitcoin blockchain is about 120 GiB big while it was only 75 GiB in August 2016. To benefit from Bitcoin full security model, a bootstrapping node has to download and verify the entirety of the 120 GiB. This poses a challenge for low-resource devices such as smartphones. Thankfully, an alternative exists for such devices which consists of downloading and verifying just the header of each block. This partial block verification enables devices to reduce their bandwidth requirements from 120 GiB to 35 MiB.However, this drastic decrease comes with a safety cost implied by a partial block verification. In this work, we enable low-resource devices to fully verify subchains of blocks without having to pay the onerous price of a full chain download and verification; a few additional MiB of bandwidth suffice. To do so, we propose the design of diet nodes that can securely query full nodes for shards of the UTXO set, which is needed to perform full block verification and can otherwise only be built by sequentially parsing the chain.",
"A type of Bitcoin node called \"Full Node\" has to hold the entire of historical transaction data called \"Blockchain\" to verify that new transactions are correct or not. To operate nodes as Full Nodes, the required storage size will be too large for resource-constrained devices. In this paper, to mitigate storage size, we propose a storage load balancing scheme by distributed storage based on Distributed Hash Table (DHT). By our scheme, nodes in a DHT cluster can behave like Full Nodes without holding the entire of the blockchain."
]
} |
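The Merkle trees of UTXO-shards mentioned for Dietcoin follow the standard construction: hash the leaves, then repeatedly hash concatenated pairs up to a single root, so any leaf's inclusion can be proven with a logarithmic number of hashes. A generic sketch (SHA-256 with duplicate-last-node padding; the exact conventions differ per system):

# Generic Merkle-root construction; padding and hashing conventions vary between
# systems, so treat this as an illustration rather than any system's exact rule.
import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    assert leaves, "need at least one leaf"
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])      # duplicate the last node on odd levels
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"utxo-shard-0", b"utxo-shard-1", b"utxo-shard-2"]).hex())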
1907.03248 | 2962988034 | Face alignment consists in aligning a shape model on a face in an image. It is an active domain in computer vision as it is a preprocessing step for applications like facial expression recognition, face recognition and tracking, face animation, etc. Current state-of-the-art methods already perform well on "easy" datasets, i.e. those that present moderate variations in head pose, expression, illumination or partial occlusions, but may not be robust to "in-the-wild" data. In this paper, we address this problem by using an ensemble of deep regressors instead of a single large regressor. Furthermore, instead of averaging the outputs of each regressor, we propose an adaptive weighting scheme that uses a tree-structured gate. Experiments on several challenging face datasets demonstrate that our approach outperforms the state-of-the-art methods. | Other methods aim to be robust to all sources of variation. @cite_20 proposes a global framework, trained in a cascaded manner, which simultaneously performs facial landmark localization, occlusion detection, head pose estimation and deformation estimation with separate modules. Relationships between these tasks allow the modules to benefit from each other. However, once again, each module requires additional annotations in the training set (e.g. occlusion and head pose labelling). @cite_8 improves the performance of deep-learning-based approaches by learning multiple tasks simultaneously thanks to auxiliary attributes. Training is then better conditioned, which increases the capacity for generalization. For example, knowing if glasses are present on the face can improve the model's robustness to occlusions. @cite_25 also use auxiliary attributes for landmark localization, but improve learning with a semi-supervised procedure. However, these techniques require additional data, either unlabelled or annotated with auxiliary attributes. | {
"cite_N": [
"@cite_25",
"@cite_20",
"@cite_8"
],
"mid": [
"2962887041",
"202494559",
"1795776638"
],
"abstract": [
"We present two techniques to improve landmark localization in images from partially annotated datasets. Our primary goal is to leverage the common situation where precise landmark locations are only provided for a small data subset, but where class labels for classification or regression tasks related to the landmarks are more abundantly available. First, we propose the framework of sequential multitasking and explore it here through an architecture for landmark localization where training with class labels acts as an auxiliary signal to guide the landmark localization on unlabeled data. A key aspect of our approach is that errors can be backpropagated through a complete landmark localization model. Second, we propose and explore an unsupervised learning technique for landmark localization based on having a model predict equivariant landmarks with respect to transformations applied to the image. We show that these techniques, improve landmark prediction considerably and can learn effective detectors even when only a small fraction of the dataset has landmark labels. We present results on two toy datasets and four real datasets, with hands and faces, and report new state-of-the-art on two datasets in the wild, e.g. with only 5 of labeled images we outperform previous state-of-the-art trained on the AFLW dataset.",
"We address the problem of robust facial feature localization in the presence of occlusions, which remains a lingering problem in facial analysis despite intensive long-term studies. Recently, regression-based approaches to localization have produced accurate results in many cases, yet are still subject to significant error when portions of the face are occluded. To overcome this weakness, we propose an occlusion-robust regression method by forming a consensus from estimates arising from a set of occlusion-specific regressors. That is, each regressor is trained to estimate facial feature locations under the precondition that a particular pre-defined region of the face is occluded. The predictions from each regressor are robustly merged using a Bayesian model that models each regressor’s prediction correctness likelihood based on local appearance and consistency with other regressors with overlapping occlusion regions. After localization, the occlusion state for each landmark point is estimated using a Gaussian MRF semi-supervised learning method. Experiments on both non-occluded and occluded face databases demonstrate that our approach achieves consistently better results over state-of-the-art methods for facial landmark localization and occlusion detection.",
"In this study, we show that landmark detection or face alignment task is not a single and independent problem. Instead, its robustness can be greatly improved with auxiliary information. Specifically, we jointly optimize landmark detection together with the recognition of heterogeneous but subtly correlated facial attributes, such as gender, expression, and appearance attributes. This is non-trivial since different attribute inference tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, which not only learns the inter-task correlation but also employs dynamic task coefficients to facilitate the optimization convergence when learning multiple complex tasks. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing face alignment methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art methods based on cascaded deep model."
]
} |
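The equivariance objective mentioned in the first abstract of the record above admits a compact formulation: a landmark detector f should commute with image transformations, f(T(img)) ≈ T(f(img)). The Python sketch below illustrates such a loss for a known affine warp; it is an illustrative reconstruction under simplifying assumptions, not the cited paper's code, and all names are hypothetical.

import numpy as np

def equivariance_loss(landmarks_orig, landmarks_warped, affine):
    # affine: 2x3 matrix of the warp T that was applied to the image.
    # Map the landmarks predicted on the original image through T ...
    ones = np.ones((landmarks_orig.shape[0], 1))
    expected = np.hstack([landmarks_orig, ones]) @ affine.T
    # ... and penalize deviation from the landmarks predicted on T(img).
    return np.mean((landmarks_warped - expected) ** 2)

In training, one would warp each unlabeled image with a random transform T, run the detector on both versions, and minimize this loss; no landmark annotations are needed for this term.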
1907.03248 | 2962988034 | Face alignment consists in aligning a shape model on a face in an image. It is an active domain in computer vision as it is a preprocessing step for applications like facial expression recognition, face recognition and tracking, face animation, etc. Current state-of-the-art methods already perform well on "easy" datasets, i.e. those that present moderate variations in head pose, expression, illumination or partial occlusions, but may not be robust to "in-the-wild" data. In this paper, we address this problem by using an ensemble of deep regressors instead of a single large regressor. Furthermore, instead of averaging the outputs of each regressor, we propose an adaptive weighting scheme that uses a tree-structured gate. Experiments on several challenging face datasets demonstrate that our approach outperforms the state-of-the-art methods. | A second approach may be to parallelize a set of small networks instead of a single large network. The idea of using a set of regressors within an end-to-end system was first introduced by @cite_16 and more recently taken up by @cite_14 , with promising results well suited to our problem. The latter design a Mixture-of-Experts (MoE) layer, consisting in jointly learning a set of "expert" subnetworks with gates, which allows the model to learn to combine a number of experts depending on the input. In the same vein, @cite_4 introduce sparsity in MoE in order to save computation and to increase representation capacity. | {
"cite_N": [
"@cite_14",
"@cite_16",
"@cite_4"
],
"mid": [
"2963280294",
"2150884987",
"2581624817"
],
"abstract": [
"Mixtures of Experts combine the outputs of several “expert” networks, each of which specializes in a different part of the input space. This is achieved by training a “gating” network that maps each input to a distribution over the experts. Such models show promise for building larger networks that are still cheap to compute at test time, and more parallelizable at training time. In this this work, we extend the Mixture of Experts to a stacked model, the Deep Mixture of Experts, with multiple sets of gating and experts. This exponentially increases the number of effective experts by associating each input with a combination of experts at each layer, yet maintains a modest model size. On a randomly translated version of the MNIST dataset, we find that the Deep Mixture of Experts automatically learns to develop location-dependent (“where”) experts at the first layer, and class-specific (“what”) experts at the second layer. In addition, we see that the different combinations are in use when the model is applied to a dataset of speech monophones. These demonstrate effective use of all expert combinations.",
"We present a new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases. The new procedure can be viewed either as a modular version of a multilayer supervised network, or as an associative version of competitive learning. It therefore provides a new link between these two apparently different approaches. We demonstrate that the learning procedure divides up a vowel discrimination task into appropriate subtasks, each of which can be solved by a very simple expert network.",
"The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost."
]
} |
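As a rough illustration of the gated expert combination described in this record, the sketch below implements a single sparsely-gated MoE forward pass in plain NumPy: a gating network scores the experts and only the top-k are evaluated and mixed. This is a minimal reconstruction under simplifying assumptions (one input vector, a linear gate), not the cited architectures; all names are illustrative.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_forward(x, experts, gate_weights, k=2):
    # experts: list of callables mapping an input vector to an output vector.
    # gate_weights: (n_experts, dim) parameters of a linear gating network.
    scores = gate_weights @ x
    topk = np.argsort(scores)[-k:]        # sparse gating: keep the top-k experts
    mix = softmax(scores[topk])           # normalized weights over the top-k
    return sum(w * experts[i](x) for w, i in zip(mix, topk))

Setting k equal to the number of experts recovers a dense mixture in the spirit of @cite_16; a small k gives the conditional-computation savings pursued by @cite_4.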
1907.03398 | 2955639709 | To meet women's appearance needs, we present a novel virtual facial makeup transfer approach, developed into a Windows platform application. The makeup effects are presented on the user's input image in real time, with only a single reference image. The input image and the reference image are divided into three layers according to landmarked facial feature points: a facial structure layer, a facial color layer, and a facial detail layer. Besides processing these layers with different algorithms to generate the output image, we also add illumination transfer, so that the illumination effect of the reference image is automatically transferred to the input image. Our approach has the following three advantages: (1) black or dark and white facial makeup can be effectively transferred by introducing illumination transfer; (2) facial makeup is transferred efficiently, within seconds, compared to methods based on deep learning frameworks; (3) makeup is transferred well even for reference images with air-bangs. | In 2015, @cite_10 of Zhejiang University proposed a facial image makeup editing method based on intrinsic images. The method uses intrinsic image decomposition to directly decompose the input facial image into an illumination layer and a reflectance layer, and then edits the makeup information of the facial image in the reflectance layer, without needing a reference image; finally, the edited reflectance layer is recombined with the illumination layer to obtain the makeup editing effect. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1916003155"
],
"abstract": [
"We present a method for simulating makeup in a face image. To generate realistic results without detailed geometric and reflectance measurements of the user, we propose to separate the image into intrinsic image layers and alter them according to proposed adaptations of physically-based reflectance models. Through this layer manipulation, the measured properties of cosmetic products are applied while preserving the appearance characteristics and lighting conditions of the target face. This approach is demonstrated on various forms of cosmetics including foundation, blush, lipstick, and eye shadow. Experimental results exhibit a close approximation to ground truth images, without artifacts such as transferred personal features and lighting effects that degrade the results of image-based makeup transfer methods."
]
} |
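The decompose-edit-recombine idea in this record can be sketched in a few lines. The code below uses a crude Retinex-style assumption (a heavily smoothed copy serves as the illumination layer), which is only an illustration and not the physically-based model of the cited paper; the image is assumed to be a float array in [0, 1] and edit_fn is a hypothetical makeup-editing callback.

import numpy as np
from scipy.ndimage import gaussian_filter

def edit_makeup_intrinsic(img, edit_fn, sigma=15.0):
    # Crude decomposition: a heavily smoothed copy approximates the
    # illumination layer; the pixelwise ratio is the reflectance layer.
    illumination = gaussian_filter(img, sigma=sigma) + 1e-6
    reflectance = img / illumination
    reflectance = edit_fn(reflectance)    # apply makeup edits here
    # Recombine the edited reflectance with the untouched illumination,
    # preserving the lighting conditions of the original face.
    return np.clip(reflectance * illumination, 0.0, 1.0)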
1907.03398 | 2955639709 | To meet women's appearance needs, we present a novel virtual facial makeup transfer approach, developed into a Windows platform application. The makeup effects are presented on the user's input image in real time, with only a single reference image. The input image and the reference image are divided into three layers according to landmarked facial feature points: a facial structure layer, a facial color layer, and a facial detail layer. Besides processing these layers with different algorithms to generate the output image, we also add illumination transfer, so that the illumination effect of the reference image is automatically transferred to the input image. Our approach has the following three advantages: (1) black or dark and white facial makeup can be effectively transferred by introducing illumination transfer; (2) facial makeup is transferred efficiently, within seconds, compared to methods based on deep learning frameworks; (3) makeup is transferred well even for reference images with air-bangs. | In 2016, @cite_28 of NVIDIA Research designed a new deep convolutional neural network for makeup transfer, which can not only transfer foundation, eye shadow, and lip makeup, but also recommend the makeup most suitable for the input image. The network consists of two consecutive steps. The first step uses an FCN to parse the face and resolve its different parts, which are distinguished by different colors. The input image with its facial parsing map, and the reference image with its facial parsing map, are used as inputs of the makeup transfer network. According to the characteristics of each cosmetic, foundation, eye shadow, and lip makeup are processed with different loss functions, and the three are integrated; a preserved part of the input facial image is added back to obtain the final result image. | {
"cite_N": [
"@cite_28"
],
"mid": [
"2963832775"
],
"abstract": [
"In this paper, we propose a novel Deep Localized Makeup Transfer Network to automatically recommend the most suitable makeup for a female and synthesis the makeup on her face. Given a before-makeup face, her most suitable makeup is determined automatically. Then, both the before-makeup and the reference faces are fed into the proposed Deep Transfer Network to generate the after-makeup face. Our end-to-end makeup transfer network have several nice properties including: (1) with complete functions: including foundation, lip gloss, and eye shadow transfer; (2) cosmetic specific: different cosmetics are transferred in different manners; (3) localized: different cosmetics are applied on different facial regions; (4) producing naturally looking results without obvious artifacts; (5) controllable makeup lightness: various results from light makeup to heavy makeup can be generated. Qualitative and quantitative experiments show that our network performs much better than the methods of [Guo and Sim, 2009] and two variants of NerualStyle [, 2015a]."
]
} |
1907.03398 | 2955639709 | To meet women's appearance needs, we present a novel virtual facial makeup transfer approach, developed into a Windows platform application. The makeup effects are presented on the user's input image in real time, with only a single reference image. The input image and the reference image are divided into three layers according to landmarked facial feature points: a facial structure layer, a facial color layer, and a facial detail layer. Besides processing these layers with different algorithms to generate the output image, we also add illumination transfer, so that the illumination effect of the reference image is automatically transferred to the input image. Our approach has the following three advantages: (1) black or dark and white facial makeup can be effectively transferred by introducing illumination transfer; (2) facial makeup is transferred efficiently, within seconds, compared to methods based on deep learning frameworks; (3) makeup is transferred well even for reference images with air-bangs. | In 2018, @cite_1 of Princeton University in the United States proposed the PairedCycleGAN network for transferring the facial makeup of a reference image to the input image. The main idea is to train a generation network @math and a discrimination network @math to transfer a specific makeup style. @cite_1 trained three generators separately, focusing the network capacity and resolution on the unique features of each region. For each pair of images before and after makeup, a facial parsing algorithm is first applied to segment each facial component, such as the eyes, eyebrows, lips, nose, etc. Finally, each component is processed separately and the results are recombined. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2798600195"
],
"abstract": [
"This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo. Our unsupervised learning approach relies on a new framework of cycle-consistent generative adversarial networks. Different from the image domain transfer problem, our style transfer problem involves two asymmetric functions: a forward function encodes example-based style transfer, whereas a backward function removes the style. We construct two coupled networks to implement these functions - one that transfers makeup style and a second that can remove makeup - such that the output of their successive application to an input photo will match the input. The learned style network can then quickly apply an arbitrary makeup style to an arbitrary photo. We demonstrate the effectiveness on a broad range of portraits and styles."
]
} |
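The cycle-consistency idea behind this record — a forward function that applies a makeup style and a backward function that removes it, whose composition should reproduce the input — can be summarized as a loss term. The sketch below is schematic; G and F are hypothetical callables standing for the two trained networks.

import numpy as np

def makeup_cycle_loss(x, y_ref, G, F):
    # G(x, y_ref): transfer y_ref's makeup style onto the no-makeup face x.
    # F(z): remove makeup from a made-up face z.
    # Applying the two asymmetric functions in succession should
    # reproduce the original input, as in the cited framework.
    return np.abs(F(G(x, y_ref)) - x).mean()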
1907.03402 | 2961134045 | We propose a new semi-supervised learning method for face-related tasks based on Multi-Task Learning (MTL) and data distillation. The proposed method exploits multiple datasets with different labels for different-but-related tasks, such as simultaneous age, gender, race, and facial expression estimation. Specifically, when there are only a few well-labeled data for a specific task among the multiple related ones, we exploit the labels of other related tasks in different domains. Our approach is composed of (1) a new MTL method which can deal with weakly labeled datasets and perform several tasks simultaneously, and (2) an MTL-based data distillation framework which enables network generalization for training and test data from different domains. Experiments show that the proposed multi-task system performs each task better than the baseline single-task system. It is also demonstrated that using datasets from different domains along with the main dataset can enhance network generalization and overcome the domain differences between datasets. Also, when comparing data distillation on both the baseline and the MTL framework, the latter shows more accurate predictions on unlabeled data from different domains. Furthermore, by proposing a new learning-rate optimization method, our proposed network is able to dynamically tune its learning rate. | A large number of studies attempt to transfer knowledge from a teacher model to a student model. Romero et al. @cite_1 proposed FitNets, a two-stage strategy that trains networks by providing hints from the teacher's middle layers. Knowledge Distillation (KD), proposed by Hinton et al. @cite_28 , leverages the predictions of a larger model as soft targets to better train a smaller model. After that, Chen et al. @cite_41 improved the efficiency and accuracy of an object detector by transferring knowledge from a teacher that is more powerful in terms of model architecture or input data resolution to a weaker student. Zagoruyko et al. @cite_34 proposed several ways to transfer attention from a teacher network to a student. Polino et al. @cite_31 proposed quantized distillation, which compresses a network by combining weight quantization with knowledge distillation. Furlanello et al. @cite_12 applied knowledge distillation to a student parameterized identically to the teacher, improving network performance through self-teaching. | {
"cite_N": [
"@cite_28",
"@cite_41",
"@cite_1",
"@cite_31",
"@cite_34",
"@cite_12"
],
"mid": [
"1821462560",
"",
"",
"2787752464",
"2561238782",
"2803023299"
],
"abstract": [
"A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.",
"",
"",
"Deep neural networks (DNNs) continue to make significant advances, solving tasks from image classification to translation or reinforcement learning. One aspect of the field receiving considerable attention is efficiently executing deep models in resource-constrained environments, such as mobile or embedded devices. This paper focuses on this problem, and proposes two new compression methods, which jointly leverage weight quantization and distillation of larger teacher networks into smaller student networks. The first method we propose is called quantized distillation and leverages distillation during the training process, by incorporating distillation loss, expressed with respect to the teacher, into the training of a student network whose weights are quantized to a limited set of levels. The second method, differentiable quantization, optimizes the location of quantization points through stochastic gradient descent, to better fit the behavior of the teacher model. We validate both methods through experiments on convolutional and recurrent architectures. We show that quantized shallow students can reach similar accuracy levels to full-precision teacher models, while providing order of magnitude compression, and inference speedup that is linear in the depth reduction. In sum, our results enable DNNs for resource-constrained environments to leverage architecture and accuracy advances developed on more powerful devices.",
"Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information in order to significantly improve the performance of a student CNN network by forcing it to mimic the attention maps of a powerful teacher network. To that end, we propose several novel methods of transferring attention, showing consistent improvement across a variety of datasets and convolutional neural network architectures.",
"Knowledge distillation (KD) consists of transferring knowledge from one machine learning model (the teacher ) to another (the student). Commonly, the teacher is a high-capacity model with formidable performance, while the student is more compact. By transferring knowledge, one hopes to benefit from the student's compactness. we desire a compact model with performance close to the teacher's. We study KD from a new perspective: rather than compressing models, we train students parameterized identically to their teachers. Surprisingly, these Born-Again Networks (BANs), outperform their teachers significantly, both on computer vision and language modeling tasks. Our experiments with BANs based on DenseNets demonstrate state-of-the-art performance on the CIFAR-10 (3.5 ) and CIFAR-100 (15.5 ) datasets, by validation error. Additional experiments explore two distillation objectives: (i) Confidence-Weighted by Teacher Max (CWTM) and (ii) Dark Knowledge with Permuted Predictions (DKPP). Both methods elucidate the essential components of KD, demonstrating a role of the teacher outputs on both predicted and non-predicted classes. We present experiments with students of various capacities, focusing on the under-explored case where students overpower teachers. Our experiments show significant advantages from transferring knowledge between DenseNets and ResNets in either direction."
]
} |
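The distillation objective surveyed in this record is usually written as a weighted sum of a soft-target term against the teacher's temperature-softened outputs and a hard-target term against the ground-truth labels. The sketch below follows that common formulation; the temperature T and weight alpha are illustrative hyperparameters, not values from the cited papers.

import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    eps = 1e-12
    # Soft-target term: cross-entropy against the teacher's softened outputs;
    # the T*T factor keeps gradient magnitudes comparable across temperatures.
    soft = -(softmax(teacher_logits, T) *
             np.log(softmax(student_logits, T) + eps)).sum(-1).mean() * T * T
    # Hard-target term: ordinary cross-entropy against ground-truth labels.
    p = softmax(student_logits)
    hard = -np.log(p[np.arange(len(labels)), labels] + eps).mean()
    return alpha * soft + (1.0 - alpha) * hard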
1907.03402 | 2961134045 | We propose a new semi-supervised learning method for face-related tasks based on Multi-Task Learning (MTL) and data distillation. The proposed method exploits multiple datasets with different labels for different-but-related tasks, such as simultaneous age, gender, race, and facial expression estimation. Specifically, when there are only a few well-labeled data for a specific task among the multiple related ones, we exploit the labels of other related tasks in different domains. Our approach is composed of (1) a new MTL method which can deal with weakly labeled datasets and perform several tasks simultaneously, and (2) an MTL-based data distillation framework which enables network generalization for training and test data from different domains. Experiments show that the proposed multi-task system performs each task better than the baseline single-task system. It is also demonstrated that using datasets from different domains along with the main dataset can enhance network generalization and overcome the domain differences between datasets. Also, when comparing data distillation on both the baseline and the MTL framework, the latter shows more accurate predictions on unlabeled data from different domains. Furthermore, by proposing a new learning-rate optimization method, our proposed network is able to dynamically tune its learning rate. | Saenko et al. @cite_19 were among the first researchers to propose a method to address the domain shift problem. More recent works are based on deep neural networks and aim to align features by minimizing domain gaps with some distance function @cite_2 @cite_37 . In the adversarial variants of these methods, a domain discriminator is trained to distinguish different domains while the generator tries to fool the discriminator by learning more general representations and features. | {
"cite_N": [
"@cite_19",
"@cite_37",
"@cite_2"
],
"mid": [
"1722318740",
"",
"2015112703"
],
"abstract": [
"Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.",
"",
"Images, while easy to acquire, view, publish, and share, they lack critical depth information. This poses a serious bottleneck for many image manipulation, editing, and retrieval tasks. In this paper we consider the problem of adding depth to an image of an object, effectively 'lifting' it back to 3D, by exploiting a collection of aligned 3D models of related objects. Our key insight is that, even when the imaged object is not contained in the shape collection, the network of shapes implicitly characterizes a shape-specific deformation subspace that regularizes the problem and enables robust diffusion of depth information from the shape collection to the input image. We evaluate our fully automatic approach on diverse and challenging input images, validate the results against Kinect depth readings, and demonstrate several imaging applications including depth-enhanced image editing and image relighting."
]
} |
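The adversarial alignment described in this record boils down to two coupled loss terms. The sketch below writes them out for discriminator outputs interpreted as P(domain = source); it is a schematic illustration of the standard minimax setup, not the exact objectives of the cited works.

import numpy as np

def domain_adversarial_losses(d_src, d_tgt):
    # d_src, d_tgt: discriminator outputs P(domain = source) for batches of
    # source-domain and target-domain features.
    eps = 1e-12
    # The discriminator learns to tell the two domains apart ...
    d_loss = -np.log(d_src + eps).mean() - np.log(1.0 - d_tgt + eps).mean()
    # ... while the feature generator is updated to fool it, which pushes
    # the two feature distributions together (domain confusion).
    g_loss = -np.log(d_tgt + eps).mean()
    return d_loss, g_loss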
1907.03285 | 2954507894 | Finite-state models are widely used in software engineering, especially in control systems development. Commonly, in control applications such models are developed manually, hence keeping them up-to-date requires extra effort. To simplify the maintenance process, an automatic approach may be used, allowing models to be inferred from behavior examples and temporal properties. As an example of a specific control systems development application, we focus on inferring finite-state models of function blocks (FBs) defined by the IEC 61499 international standard for distributed automation systems. In this paper we propose a method for FB model inference from behavior examples, based on reduction to the Boolean satisfiability problem (SAT). Additionally, we take into account linear temporal properties using counterexample-guided synthesis. In contrast to existing approaches, the suggested method is more efficient and produces minimal finite-state models both in terms of the number of states and guard conditions. We also present the developed tool fbSAT which implements the proposed method, and evaluate it in two case studies: inference of a finite-state model of a Pick-and-Place manipulator, and reconstruction of randomly generated automata. | The problem of finding a minimal deterministic finite-state machine from behavior examples is known to be NP-complete @cite_27 , and the complexity of the LTL synthesis problem is doubly exponential in the length of the LTL specification @cite_30 . Despite this, synthesis of various types of finite-state models from behavior examples and/or formal specifications has been addressed by many researchers, including @cite_7 @cite_29 @cite_37 @cite_2 @cite_14 @cite_38 @cite_12 @cite_28 @cite_0 @cite_4 @cite_1 , with methods based on heuristic state merging, evolutionary algorithms and SAT solvers. In the context of this paper we are interested in exact methods, so we direct our attention to SAT-based methods. | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_37",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_28",
"@cite_29",
"@cite_1",
"@cite_0",
"@cite_27",
"@cite_2",
"@cite_12"
],
"mid": [
"168680193",
"2033269895",
"2912788251",
"2734817039",
"2226549490",
"2141811956",
"2809142577",
"2408207366",
"2120145129",
"2904898906",
"2017603160",
"2588494934",
"2403966573"
],
"abstract": [
"",
"In this paper, we describe the method of finite state machine (FSM) induction using genetic algorithm with fitness function, cross-over and mutation based on testing and model checking. Input data for the genetic algorithm is a set of tests and a set of properties described using linear time logic. Each test consists of an input sequence of events and the corresponding output action sequence. In previous works testing and model checking were used separately in genetic algorithms. Usage of such an approach is limited because the behavior of system usually cannot be described by tests only. So, additional validation or verification is needed. Calculation of fitness function based only on verification do not perform well because there are very few possible values of fitness function (verification gives only \"yes\" or \"no\" answer). The approach described is tested on the problem of finite state machine induction for elevator doors controlling. Using tests only the genetic algorithm constructs the finite machine working improperly in some cases. Usage of verification allows to induct the correct finite state machine.",
"Inference of deterministic finite automata (DFA) finds a wide range of important practical applications. In recent years, the use of SAT and SMT solvers for the minimum size DFA inference problem (MinDFA) enabled significant performance improvements. Nevertheless, there are many problems that are simply too difficult to solve to optimality with existing technologies. One fundamental difficulty of the MinDFA problem is the size of the search space. Moreover, another fundamental drawback of these approaches is the encoding size. This paper develops novel compact encodings for Symmetry Breaking of SAT-based approaches to MinDFA. The proposed encodings are shown to perform comparably in practice with the most efficient, but also significantly larger, symmetry breaking encodings.",
"We present ( BoSy ), a reactive synthesis tool based on the bounded synthesis approach. Bounded synthesis ensures the minimality of the synthesized implementation by incrementally increasing a bound on the size of the solutions it considers. For each bound, the existence of a solution is encoded as a logical constraint solving problem that is solved by an appropriate solver. ( BoSy ) constructs bounded synthesis encodings into SAT, QBF, DQBF, EPR, and SMT, and interfaces to solvers of the corresponding type. When supported by the solver, ( BoSy ) extracts solutions as circuits, which can, if desired, be verified with standard hardware model checkers. ( BoSy ) won the LTL synthesis track at SYNTCOMP 2016. In addition to its use as a synthesis tool, ( BoSy ) can also be used as an experimentation and performance evaluation framework for various types of satisfiability solvers.",
"We propose a method to construct finite-state reactive controllers for systems whose interactions with their adversarial environment are modeled by infinite-duration two-player games over possibly infinite graphs. The method targets safety games with infinitely many states or with such a large number of states that it would be impractical--if not impossible--for conventional synthesis techniques that work on the entire state space. We resort to constructing finite-state controllers for such systems through an automata learning approach, utilizing a symbolic representation of the underlying game that is based on finite automata. Throughout the learning process, the learner maintains an approximation of the winning region represented as a finite automaton and refines it using different types of counterexamples provided by the teacher until a satisfactory controller can be derived if one exists. We present a symbolic representation of safety games inspired by regular model checking, propose implementations of the learner and teacher, and evaluate their performance on examples motivated by robotic motion planning.",
"We present an exact algorithm for identification of deterministic finite automata (DFA) which is based on satisfiability (SAT) solvers. Despite the size of the low level SAT representation, our approach is competitive with alternative techniques. Our contributions are fourfold: First, we propose a compact translation of DFA identification into SAT. Second, we reduce the SAT search space by adding lower bound information using a fast max-clique approximation algorithm. Third, we include many redundant clauses to provide the SAT solver with some additional knowledge about the problem. Fourth, we show how to use the flexibility of our translation in order to apply it to very hard problems. Experiments on a well-known suite of random DFA identification problems show that SAT solvers can efficiently tackle all instances. Moreover, our algorithm outperforms state-of-the-art techniques on several hard problems.",
"Inferring a minimal finite state machine (FSM) from a given set of traces is a fundamental problem in computer science. Although the problem is known to be NP-complete, it can be solved efficiently with SAT solvers when the given set of traces is relatively small. On the other hand, to infer an FSM equivalent to a machine which generates traces, the set of traces should be sufficiently representative and hence large. However, the existing SAT-based inference techniques do not scale well when the length and number of traces increase. In this paper, we propose a novel approach which processes lengthy traces incrementally. The experimental results indicate that it scales sufficiently well and time it takes grows slowly with the size of traces.",
"Finite-state models, such as finite-state machines (FSMs), aid software engineering in many ways. They are often used in formal verification and also can serve as visual software models. The latter application is associated with the problems of software synthesis and automatic derivation of software models from specification. Smaller synthesized models are more general and are easier to comprehend, yet the problem of minimum FSM identification has received little attention in previous research. This paper presents four exact methods to tackle the problem of minimum FSM identification from a set of test scenarios and a temporal specification represented in linear temporal logic. The methods are implemented as an open-source tool. Three of them are based on translations of the FSM identification problem to SAT or QSAT problem instances. Accounting for temporal properties is done via counterexample prohibition. Counterexamples are either obtained from previously identified FSMs, or based on bounded model checking. The fourth method uses backtracking. The proposed methods are evaluated on several case studies and on a larger number of randomly generated instances of increasing complexity. The results show that the Iterative SAT-based method is the leader among the proposed methods. The methods are also compared with existing inexact approaches, i.e., the ones which do not necessarily identify the minimum FSM, and these comparisons show encouraging results.",
"G4LTL-ST automatically synthesizes control code for industrial Programmable Logic Controls (PLC) from timed behavioral specifications of input-output signals. These specifications are expressed in a linear temporal logic (LTL) extended with non-linear arithmetic constraints and timing constraints on signals. G4LTL-ST generates code in IEC 61131-3-compatible Structured Text, which is compiled into executable code for a large number of industrial field-level devices. The synthesis algorithm of G4LTL-ST implements pseudo-Boolean abstraction of data constraints and the compilation of timing constraints into LTL, together with a counterstrategy-guided abstraction-refinement synthesis loop. Since temporal logic specifications are notoriously difficult to use in practice, G4LTL-ST supports engineers in specifying realizable control problems by suggesting suitable restrictions on the behavior of the control environment from failed synthesis attempts.",
"The paper focuses on the problems of passive and active FSM inference as well as checking sequence generation. We consider the setting where an FSM cannot be reset so that its inference is constrained to a single trace either given a priori in a passive inference scenario or to be constructed in an active inference scenario or aiming at obtaining checking sequence for a given FSM. In each of the last two cases, the expected result is a trace representing a checking sequence for an inferred machine, if it was not given. We demonstrate that this can be achieved by a repetitive use of a procedure that infers an FSM from a given trace (identifying a minimal machine consistent with a trace) avoiding equivalent conjectures. We thus show that FSM inference and checking sequence construction are two sides of the same coin. Following an existing approach of constructing conjectures by SAT solving, we elaborate first such a procedure and then based on it the methods for obtaining checking sequence for a given FSM and inferring a machine from a black box. The novelty of our approach is that it does not use any state identification facilities. We demonstrate that the proposed approach can also be constrained to find a solution in a subset of FSMs represented by a nondeterministic mutation machine. Experiments with a prototype implementation of the developed approach using an existing SAT solver indicate that it scales for FSMs with up to a dozen of states and requires relatively short sequences to identify a black box machine.",
"The question of whether there is an automaton with n states which agrees with a finite set D of data is shown to be NP -complete, although identification-in-the-limit of finite automata is possible in polynomial time as a function of the size of D . Necessary and sufficient conditions are given for D to be realizable by an automaton whose states are reachable from the initial state by a given set T of input strings. Although this question is also NP -complete, these conditions suggest heuristic approaches. Even if a solution to this problem were available, it is shown that finding a minimal set T does not necessarily give the smallest possible T .",
"Closed-loop model checking, a formal verification technique for industrial automation systems, increases the richness of specifications to be checked and reduces the state space to be verified compared to the open-loop case. To be applied, it needs the controller and the plant formal models to be coupled. There are approaches for controller synthesis, but little has been done regarding plant model construction. While manual plant modeling is time consuming and error-prone, discretizing a simulation model of the plant leads to state excess. This paper aims to solve the problem of automatic plant model construction from existing specification, which is represented in the form of plant behavior examples, or traces, and temporal properties. The proposed method, which is based on the translation of the problem to the Boolean satisfiability problem, is evaluated and shown to be applicable on several case study plant model synthesis tasks and on randomly generated problem instances.",
"The problem of learning automata from example traces (but no equivalence or membership queries) is fundamental in automata learning theory and practice. In this paper we study this problem for finite state machines with inputs and outputs, and in particular for Moore machines. We develop three algorithms for solving this problem: (1) the PTAP algorithm, which transforms a set of input-output traces into an incomplete Moore machine and then completes the machine with self-loops; (2) the PRPNI algorithm, which uses the well-known RPNI algorithm for automata learning to learn a product of automata encoding a Moore machine; and (3) the MooreMI algorithm, which directly learns a Moore machine using PTAP extended with state merging. We prove that MooreMI has the fundamental identification in the limit property. We also compare the algorithms experimentally in terms of the size of the learned machine and several notions of accuracy, introduced in this paper. Finally, we compare with OSTIA, an algorithm that learns a more general class of transducers, and find that OSTIA generally does not learn a Moore machine, even when fed with a characteristic sample."
]
} |
1907.03285 | 2954507894 | Finite-state models are widely used in software engineering, especially in control systems development. Commonly, in control applications such models are developed manually, hence keeping them up-to-date requires extra effort. To simplify the maintenance process, an automatic approach may be used, allowing models to be inferred from behavior examples and temporal properties. As an example of a specific control systems development application, we focus on inferring finite-state models of function blocks (FBs) defined by the IEC 61499 international standard for distributed automation systems. In this paper we propose a method for FB model inference from behavior examples, based on reduction to the Boolean satisfiability problem (SAT). Additionally, we take into account linear temporal properties using counterexample-guided synthesis. In contrast to existing approaches, the suggested method is more efficient and produces minimal finite-state models both in terms of the number of states and guard conditions. We also present the developed tool fbSAT which implements the proposed method, and evaluate it in two case studies: inference of a finite-state model of a Pick-and-Place manipulator, and reconstruction of randomly generated automata. | Extended Finite-State Machine (EFSM) is the model most similar to the ECC considered in this paper -- it combines a Mealy and a Moore automaton extended with conditional transitions. Transitions are labeled with input events and Boolean formulas over the input variables, and automaton states have associated sequences of output actions. Several approaches based on translation to SAT @cite_29 @cite_10 have been proposed for inferring EFSMs from behavior examples and LTL properties. In @cite_29 , LTL properties are accounted for via an iterative counterexample prohibition approach. | {
"cite_N": [
"@cite_29",
"@cite_10"
],
"mid": [
"2408207366",
"2026252407"
],
"abstract": [
"Finite-state models, such as finite-state machines (FSMs), aid software engineering in many ways. They are often used in formal verification and also can serve as visual software models. The latter application is associated with the problems of software synthesis and automatic derivation of software models from specification. Smaller synthesized models are more general and are easier to comprehend, yet the problem of minimum FSM identification has received little attention in previous research. This paper presents four exact methods to tackle the problem of minimum FSM identification from a set of test scenarios and a temporal specification represented in linear temporal logic. The methods are implemented as an open-source tool. Three of them are based on translations of the FSM identification problem to SAT or QSAT problem instances. Accounting for temporal properties is done via counterexample prohibition. Counterexamples are either obtained from previously identified FSMs, or based on bounded model checking. The fourth method uses backtracking. The proposed methods are evaluated on several case studies and on a larger number of randomly generated instances of increasing complexity. The results show that the Iterative SAT-based method is the leader among the proposed methods. The methods are also compared with existing inexact approaches, i.e., the ones which do not necessarily identify the minimum FSM, and these comparisons show encouraging results.",
"The ability to reverse-engineer models of software behaviour is valuable for a wide range of software maintenance, validation and verification tasks. Current reverse-engineering techniques focus either on control-specific behaviour (e.g., in the form of Finite State Machines), or data-specific behaviour (e.g., as pre post-conditions or invariants). However, typical software behaviour is usually a product of the two; models must combine both aspects to fully represent the software's operation. Extended Finite State Machines (EFSMs) provide such a model. Although attempts have been made to infer EFSMs, these have been problematic. The models inferred by these techniques can be non-deterministic, the inference algorithms can be inflexible, and only applicable to traces with specific characteristics. This paper presents a novel EFSM inference technique that addresses the problems of inflexibility and non-determinism. It also adapts an experimental technique from the field of Machine Learning to evaluate EFSM inference techniques, and applies it to three diverse software systems."
]
} |
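To make the SAT-based identification idea in this record concrete, the sketch below encodes the search for a complete DFA with a fixed number of states consistent with labeled words, using the PySAT library. It unrolls each sample over state-tracking variables; this toy encoding is far simpler than the APTA-based encodings with symmetry breaking used in the cited works, and all names are illustrative. Iterating n_states = 1, 2, ... yields a minimum-size machine.

from itertools import combinations
from pysat.solvers import Glucose3

def exactly_one(solver, lits):
    solver.add_clause(list(lits))              # at least one
    for a, b in combinations(lits, 2):         # at most one (pairwise)
        solver.add_clause([-a, -b])

def infer_dfa(samples, n_states, n_symbols):
    # samples: list of (word, accepted), where word is a tuple of symbol indices.
    counter = [0]
    def var():
        counter[0] += 1
        return counter[0]
    s = Glucose3()
    # d[q][a][r] <=> transition q --a--> r ;  acc[q] <=> state q is accepting.
    d = [[[var() for _ in range(n_states)] for _ in range(n_symbols)]
         for _ in range(n_states)]
    acc = [var() for _ in range(n_states)]
    for q in range(n_states):
        for a in range(n_symbols):
            exactly_one(s, d[q][a])            # determinism + completeness
    for word, accepted in samples:
        # st[t][q] <=> the run is in state q after reading word[:t].
        st = [[var() for _ in range(n_states)] for _ in range(len(word) + 1)]
        for row in st:
            exactly_one(s, row)
        s.add_clause([st[0][0]])               # fixed initial state 0
        for t, a in enumerate(word):
            for q in range(n_states):
                for r in range(n_states):      # st[t][q] & d[q][a][r] -> st[t+1][r]
                    s.add_clause([-st[t][q], -d[q][a][r], st[t + 1][r]])
        for q in range(n_states):              # acceptance must match the label
            s.add_clause([-st[len(word)][q], acc[q] if accepted else -acc[q]])
    if not s.solve():
        return None                            # no DFA with n_states states fits
    true_lits = set(l for l in s.get_model() if l > 0)
    delta = {(q, a): r for q in range(n_states) for a in range(n_symbols)
             for r in range(n_states) if d[q][a][r] in true_lits}
    return delta, [q for q in range(n_states) if acc[q] in true_lits]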
1907.03285 | 2954507894 | Finite-state models are widely used in software engineering, especially in control systems development. Commonly, in control applications such models are developed manually, hence keeping them up-to-date requires extra effort. To simplify the maintenance process, an automatic approach may be used, allowing models to be inferred from behavior examples and temporal properties. As an example of a specific control systems development application, we focus on inferring finite-state models of function blocks (FBs) defined by the IEC 61499 international standard for distributed automation systems. In this paper we propose a method for FB model inference from behavior examples, based on reduction to the Boolean satisfiability problem (SAT). Additionally, we take into account linear temporal properties using counterexample-guided synthesis. In contrast to existing approaches, the suggested method is more efficient and produces minimal finite-state models both in terms of the number of states and guard conditions. We also present the developed tool fbSAT which implements the proposed method, and evaluate it in two case studies: inference of a finite-state model of a Pick-and-Place manipulator, and reconstruction of randomly generated automata. | In @cite_26 , a method is proposed for inferring an FB model from given execution scenarios by means of translation to the Constraint Satisfaction Problem (CSP). However, it has the following restrictions. Guard conditions are generated in a form in which the corresponding Boolean formulas depend on all input variables. Such models do not generalize to unseen data. This is countered by greedy guard condition minimization, but it does not guarantee the minimality of the result. In @cite_15 , this method is extended with a counterexample prohibition procedure similar to @cite_29 to account for LTL properties. Guard conditions are represented with fixed-size conjunctions of positive/negative literals of input variables. The drawback of this approach is that it does not allow constructing models when temporal properties are poorly covered by behavior examples. | {
"cite_N": [
"@cite_29",
"@cite_15",
"@cite_26"
],
"mid": [
"2408207366",
"2898670592",
"2770374269"
],
"abstract": [
"Finite-state models, such as finite-state machines (FSMs), aid software engineering in many ways. They are often used in formal verification and also can serve as visual software models. The latter application is associated with the problems of software synthesis and automatic derivation of software models from specification. Smaller synthesized models are more general and are easier to comprehend, yet the problem of minimum FSM identification has received little attention in previous research. This paper presents four exact methods to tackle the problem of minimum FSM identification from a set of test scenarios and a temporal specification represented in linear temporal logic. The methods are implemented as an open-source tool. Three of them are based on translations of the FSM identification problem to SAT or QSAT problem instances. Accounting for temporal properties is done via counterexample prohibition. Counterexamples are either obtained from previously identified FSMs, or based on bounded model checking. The fourth method uses backtracking. The proposed methods are evaluated on several case studies and on a larger number of randomly generated instances of increasing complexity. The results show that the Iterative SAT-based method is the leader among the proposed methods. The methods are also compared with existing inexact approaches, i.e., the ones which do not necessarily identify the minimum FSM, and these comparisons show encouraging results.",
"We developed an algorithm for inferring controller logic for cyber-physical systems (CPS) in the form of a state machine from given execution traces and linear temporal logic formulas. The algorithm implements an iterative counterexample-guided strategy: constraint programming is employed for constructing a minimal state machine from positive and negative traces (counterexamples) while formal verification is used for discovering new counterexamples. The proposed approach extends previous work by (1) considering a more intrinsic model of a state machine making the algorithm applicable to synthesizing CPS controller logic, and (2) using closed-loop verification which allows considering more expressive temporal properties.",
"A method for inferring finite-state models of function blocks from given execution traces based on translation to the constraint satisfaction problem (CSP) is proposed. In contrast to the previous method based on a metaheuristic algorithm, the approach suggested in this paper is exact: it allows to find a solution if it exists or to prove the opposite. The proposed method is evaluated on the example of constructing a finite-state model of a controller for a Pick-and-Place manipulator and is shown to be significantly faster then the metaheuristic algorithm."
]
} |
1907.03285 | 2954507894 | Finite-state models are widely used in software engineering, especially in control systems development. Commonly, in control applications such models are developed manually, hence keeping them up-to-date requires extra effort. To simplify the maintenance process, an automatic approach may be used, allowing models to be inferred from behavior examples and temporal properties. As an example of a specific control systems development application, we focus on inferring finite-state models of function blocks (FBs) defined by the IEC 61499 international standard for distributed automation systems. In this paper we propose a method for FB model inference from behavior examples, based on reduction to the Boolean satisfiability problem (SAT). Additionally, we take into account linear temporal properties using counterexample-guided synthesis. In contrast to existing approaches, the suggested method is more efficient and produces minimal finite-state models both in terms of the number of states and guard conditions. We also present the developed tool fbSAT which implements the proposed method, and evaluate it in two case studies: inference of a finite-state model of a Pick-and-Place manipulator, and reconstruction of randomly generated automata. | In @cite_35 , the two-stage approach is developed further: in the first stage, a base model is inferred with a translation to SAT, and in the second stage its guard conditions are minimized via a CSP-based approach, in which guard condition Boolean formulas are represented with parse trees. By introducing a total bound on the number of nodes in these parse trees and solving a series of CSP problems, the method finds a model with minimal guard conditions w.r.t. the base model identified in the first stage. Global minimality of guard conditions is not guaranteed due to the two-stage implementation: minimal guards may correspond to another base model, not the one found in the first stage. The same argument applies to any approach based on state machine minimization @cite_31 . In addition, LTL properties are not supported. | {
"cite_N": [
"@cite_35",
"@cite_31"
],
"mid": [
"2908682741",
"2495534164"
],
"abstract": [
"We propose a two-stage exact approach for identifying finite-state models of function blocks based on given execution traces. First, a base finite-state model is inferred with a method based on translation to the Boolean satisfiability problem, and then, the base model is generalized by inferring minimal guard conditions of the state machine with a method based on translation to the constraint satisfaction problem.",
"CTL synthesis [8] is a long-standing problem with applications to synthesising synchronization protocols and concurrent programs. We show how to formulate CTL model checking in terms of “monotonic theories”, enabling us to use the SAT Modulo Monotonic Theories (SMMT) [5] framework to build an efficient SAT-modulo-CTL solver. This yields a powerful procedure for CTL synthesis, which is not only faster than previous techniques from the literature, but also scales to larger and more difficult formulas. Additionally, because it is a constraint-based approach, it can be easily extended with further constraints to guide the synthesis. Moreover, our approach is efficient at producing minimal Kripke structures on common CTL synthesis benchmarks."
]
} |
1907.03285 | 2954507894 | Finite-state models are widely used in software engineering, especially in control systems development. Commonly, in control applications such models are developed manually, hence keeping them up-to-date requires extra effort. To simplify the maintenance process, an automatic approach may be used, allowing models to be inferred from behavior examples and temporal properties. As an example of a specific control systems development application, we focus on inferring finite-state models of function blocks (FBs) defined by the IEC 61499 international standard for distributed automation systems. In this paper we propose a method for FB model inference from behavior examples, based on reduction to the Boolean satisfiability problem (SAT). Additionally, we take into account linear temporal properties using counterexample-guided synthesis. In contrast to existing approaches, the suggested method is more efficient and produces minimal finite-state models both in terms of the number of states and guard conditions. We also present the developed tool fbSAT which implements the proposed method, and evaluate it in two case studies: inference of a finite-state model of a Pick-and-Place manipulator, and reconstruction of randomly generated automata. | Overall, none of the existing methods allows simultaneously and efficiently accounting for (1) behavior examples, (2) LTL properties, and (3) minimality of synthesized automata in terms of both the number of states and guard condition complexity. The approach proposed in this paper extends @cite_35 and contributes to the state of the art in SAT-based state machine synthesis: it supports positive behavior examples, realizes counterexample-guided synthesis to account for LTL properties, and produces models minimal both in terms of the number of states and guard condition complexity. Though our approach is implemented for FB model identification, it can be easily applied to the inference of other types of state machines. | {
"cite_N": [
"@cite_35"
],
"mid": [
"2908682741"
],
"abstract": [
"We propose a two-stage exact approach for identifying finite-state models of function blocks based on given execution traces. First, a base finite-state model is inferred with a method based on translation to the Boolean satisfiability problem, and then, the base model is generalized by inferring minimal guard conditions of the state machine with a method based on translation to the constraint satisfaction problem."
]
} |
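The counterexample-guided loop common to the approaches in the records above can be summarized in a few lines. The sketch below is a schematic reconstruction; synthesize and model_check are hypothetical callables standing for the SAT/CSP-based inference step and an external verifier (e.g., a bounded model checker), respectively.

def counterexample_guided_synthesis(traces, ltl_props, synthesize, model_check):
    # synthesize(traces, forbidden): a minimal model consistent with the
    # positive traces that avoids all forbidden counterexamples, or None.
    # model_check(model, ltl_props): a counterexample trace, or None if
    # every LTL property holds on the model.
    forbidden = []
    while True:
        model = synthesize(traces, forbidden)
        if model is None:
            return None                  # no model of this size satisfies all
        cex = model_check(model, ltl_props)
        if cex is None:
            return model                 # traces covered and LTL properties hold
        forbidden.append(cex)            # prohibit this behavior and retry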
1907.03390 | 2953400733 | Some robots can interact with humans using natural language, and identify service requests through human-robot dialog. However, few robots are able to improve their language capabilities from this experience. In this paper, we develop a dialog agent for robots that is able to interpret user commands using a semantic parser, while asking clarification questions using a probabilistic dialog manager. This dialog agent is able to augment its knowledge base and improve its language capabilities by learning from dialog experiences, e.g., adding new entities and learning new ways of referring to existing entities. We have extensively evaluated our dialog system in simulation as well as with human participants through MTurk and real-robot platforms. We demonstrate that our dialog agent performs better in efficiency and accuracy in comparison to baseline learning agents. A demo video can be found at this https URL | Researchers have developed algorithms for learning to interpret natural language commands @cite_20 @cite_0 @cite_11 . Recent research has enabled the co-learning of syntax and semantics of spatial language @cite_17 @cite_2 . Although these systems support the learning of language skills, they do not have a dialog management component (implicitly assuming perfect language understanding), and hence do not readily support multi-turn communication. | {
"cite_N": [
"@cite_11",
"@cite_0",
"@cite_2",
"@cite_20",
"@cite_17"
],
"mid": [
"2236233024",
"2296135247",
"2889756642",
"46490633",
"2230046320"
],
"abstract": [
"This paper describes a new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments. Previous approaches have used models with fixed structure to infer the likelihood of a sequence of actions given the environment and the command. In contrast, our framework, called Generalized Grounding Graphs (G3), dynamically instantiates a probabilistic graphical model for a particular natural language command according to the command's hierarchical and compositional semantic structure. Our system performs inference in the model to successfully find and execute plans corresponding to natural language commands such as \"Put the tire pallet on the truck.\" The model is trained using a corpus of commands collected using crowdsourcing. We pair each command with robot actions and use the corpus to learn the parameters of the model. We evaluate the robot's performance by inferring plans from natural language commands, executing each plan in a realistic robot simulator, and asking users to evaluate the system's performance. We demonstrate that our system can successfully follow many natural language commands from the corpus.",
"",
"Effective communication between humans often embeds both temporal and spatial context. While spatial context captures the geographic settings of objects in the environment, temporal context describes their changes over time. In this paper, we propose temporal spatial inverse semantics (TeSIS) to extend the inverse semantics approach to also consider the temporal context for robots communicating with humans. Inverse semantics generates natural language requests while taking into account how well the human listeners would interpret those requests given the current spatial context. Compared to inverse semantics, our approach incorporates also temporal context by referring to spatial context information in the past. To achieve this, we extend the sentence structure in inverse semantics to generate sentences that can refer to not only the current but also previous states of the environment. A new metric based on the extended sentence structure is developed by breaking a single sentence into multiple independent sentences that refer to environment states at different times. Using this approach, we are able to generate sentences such as “Please pick up the cup beside the oven that was on the dining table”. To evaluate our approach, we randomly generate scenarios in an experimental domain. Each scenario includes the description of the current and several immediate previous states. Natural language sentences are then generated for these scenarios using both inverse semantics that uses only the spatial context and our approach. Amazon MTurk is used to compare the sentences generated and results show that TeSIS achieves better accuracy, sometimes by a significant margin, than the baseline.",
"As robots become more ubiquitous and capable of performing complex tasks, the importance of enabling untrained users to interact with them has increased. In response, unconstrained natural-language interaction with robots has emerged as a significant research area. We discuss the problem of parsing natural language commands to actions and control structures that can be readily implemented in a robot execution system. Our approach learns a parser based on example pairs of English commands and corresponding control language expressions. We evaluate this approach in the context of following route instructions through an indoor environment, and demonstrate that our system can learn to translate English commands into sequences of desired actions, while correctly capturing the semantic intent of statements involving complex control structures. The procedural nature of our formal representation allows a robot to interpret route instructions online while moving through a previously unknown environment.",
"This paper reports recent progress on modeling the grounded co-acquisition of syntax and semantics of locative spatial language in developmental robots. We show how a learner robot can learn to produce and interpret spatial utterances in guided-learning interactions with a tutor robot (equipped with a system for producing English spatial phrases). The tutor guides the learning process by simplifying the challenges and complexity of utterances, gives feedback, and gradually increases the complexity of the language to be learnt. Our experiments show promising results towards long-term, incremental acquisition of natural language in a process of co-development of syntax and semantics."
]
} |
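The "learning new ways of referring to existing entities" mentioned in this record's abstract can be illustrated with a toy lexicon update. This is a deliberately simplified stand-in for the paper's semantic parser; every name below is invented for the example.

```python
# Toy lexicon-based grounding of words to knowledge-base entities.
LEXICON = {
    "coffee": "item_coffee",
    "espresso": "item_coffee",   # an alternative way to refer to coffee
    "alice": "person_alice",
}

def ground(word):
    """Map a word to a knowledge-base entity, or None if unknown."""
    return LEXICON.get(word.lower())

def learn_from_clarification(unknown_word, confirmed_entity):
    """After the user confirms what an unknown word meant, remember it."""
    LEXICON[unknown_word.lower()] = confirmed_entity

# One dialog turn: "latte" is unknown, the agent asks a clarification
# question, the user confirms it means coffee, and the lexicon grows.
assert ground("latte") is None
learn_from_clarification("latte", "item_coffee")
assert ground("latte") == "item_coffee"
```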
1907.03390 | 2953400733 | Some robots can interact with humans using natural language, and identify service requests through human-robot dialog. However, few robots are able to improve their language capabilities from this experience. In this paper, we develop a dialog agent for robots that is able to interpret user commands using a semantic parser, while asking clarification questions using a probabilistic dialog manager. This dialog agent is able to augment its knowledge base and improve its language capabilities by learning from dialog experiences, e.g., adding new entities and learning new ways of referring to existing entities. We have extensively evaluated our dialog system in simulation as well as with human participants through MTurk and real-robot platforms. We demonstrate that our dialog agent performs better in efficiency and accuracy in comparison to baseline learning agents. Demo video can be found at this https URL | Algorithms have been developed for dialog policy learning @cite_30 @cite_25 @cite_10 . Recent research on Deep RL has enabled dialog agents to learn complex representations for dialog management @cite_26 @cite_6 . The systems do not include a language parsing component. As a result, users can only communicate with their dialog agents using simple or predefined language patterns. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_6",
"@cite_10",
"@cite_25"
],
"mid": [
"1681299129",
"2294065713",
"2951805158",
"2121863487",
"2119567691"
],
"abstract": [
"Designing the dialogue policy of a spoken dialogue system involves many nontrivial choices. This paper presents a reinforcement learning approach for automatically optimizing a dialogue policy, which addresses the technical challenges in applying reinforcement learning to a working dialogue system with human users. We report on the design, construction and empirical evaluation of NJFun, an experimental spoken dialogue system that provides users with access to information about fun things to do in New Jersey. Our results show that by optimizing its performance via reinforcement learning, NJFun measurably improves system performance.",
"This article presents SimpleDS, a simple and publicly available dialogue system trained with deep reinforcement learning. In contrast to previous reinforcement learning dialogue systems, this system avoids manual feature engineering by performing action selection directly from raw text of the last system and (noisy) user responses. Our initial results, in the restaurant domain, report that it is indeed possible to induce reasonable behaviours with such an approach that aims for higher levels of automation in dialogue control for intelligent interactive systems and robots.",
"Reinforcement learning methods have been used for learning dialogue policies. However, learning an effective dialogue policy frequently requires prohibitively many conversations. This is partly because of the sparse rewards in dialogues, and the very few successful dialogues in early learning phase. Hindsight experience replay (HER) enables learning from failures, but the vanilla HER is inapplicable to dialogue learning due to the implicit goals. In this work, we develop two complex HER methods providing different tradeoffs between complexity and performance, and, for the first time, enabled HER-based dialogue policy learning. Experiments using a realistic user simulator show that our HER methods perform better than existing experience replay methods (as applied to deep Q-networks) in learning rate.",
"Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.",
"From the Publisher: The past decade has seen considerable theoretical and applied research on Markov decision processes, as well as the growing use of these models in ecology, economics, communications engineering, and other fields where outcomes are uncertain and sequential decision-making processes are needed. A timely response to this increased activity, Martin L. Puterman's new work provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models. It discusses all major research directions in the field, highlights many significant applications of Markov decision processes models, and explores numerous important topics that have previously been neglected or given cursory coverage in the literature. Markov Decision Processes focuses primarily on infinite horizon discrete time models and models with discrete time spaces while also examining models with arbitrary state spaces, finite horizon models, and continuous-time discrete state models. The book is organized around optimality criteria, using a common framework centered on the optimality (Bellman) equation for presenting results. The results are presented in a \"theorem-proof\" format and elaborated on through both discussion and examples, including results that are not available in any other book. A two-state Markov decision process model, presented in Chapter 3, is analyzed repeatedly throughout the book and demonstrates many results and algorithms. Markov Decision Processes covers recent research advances in such areas as countable state space models with average reward criterion, constrained models, and models with risk sensitive optimality criteria. It also explores several topics that have received little or no attention in other books, including modified policy iteration, multichain models with average reward criterion, and sensitive optimality. In addition, a Bibliographic Remarks section in each chapter comments on relevant historic"
]
} |
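A minimal sketch of the RL-based dialog policy learning surveyed above: tabular Q-learning on a toy two-state dialog MDP where the agent chooses between asking a clarification question and executing the parsed command. The states, rewards, and environment are invented placeholders, far simpler than the cited deep-RL systems.

```python
import random
from collections import defaultdict

ACTIONS = ["ask", "execute"]
Q = defaultdict(float)  # Q[(state, action)] -> estimated return

def step(confidence, action):
    """Hypothetical environment: returns (reward, next_state)."""
    if action == "execute":
        # Executing a well-understood command succeeds; guessing fails.
        return (20.0 if confidence == "high" else -20.0), None
    return -1.0, "high"  # asking costs a little but raises confidence

def choose(state, eps=0.2):
    if random.random() < eps:
        return random.choice(ACTIONS)                     # explore
    return max(ACTIONS, key=lambda a: Q[(state, a)])      # exploit

for _ in range(2000):                                     # training episodes
    state = random.choice(["low", "high"])
    while state is not None:
        action = choose(state)
        reward, nxt = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS) if nxt else 0.0
        Q[(state, action)] += 0.1 * (reward + 0.95 * best_next
                                     - Q[(state, action)])
        state = nxt

# The learned policy asks when confidence is low and executes when high.
```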
1907.03390 | 2953400733 | Some robots can interact with humans using natural language, and identify service requests through human-robot dialog. However, few robots are able to improve their language capabilities from this experience. In this paper, we develop a dialog agent for robots that is able to interpret user commands using a semantic parser, while asking clarification questions using a probabilistic dialog manager. This dialog agent is able to augment its knowledge base and improve its language capabilities by learning from dialog experiences, e.g., adding new entities and learning new ways of referring to existing entities. We have extensively evaluated our dialog system in simulation as well as with human participants through MTurk and real-robot platforms. We demonstrate that our dialog agent performs better in efficiency and accuracy in comparison to baseline learning agents. Demo video can be found at this https URL | Mobile robot platforms have been equipped with semantic parsing and dialog management capabilities. After a task is identified in dialog, these robots are able to conduct service tasks using a task planner @cite_12 @cite_23 @cite_24 . Although these works enable a robot to identify human requests via dialog, they do not enable learning from these experiences. | {
"cite_N": [
"@cite_24",
"@cite_23",
"@cite_12"
],
"mid": [
"2773498419",
"1864247448",
"2737759620"
],
"abstract": [
"Probabilistic graphical models, such as partially observable Markov decision processes (POMDPs), have been used in stochastic spoken dialog systems to handle the inherent uncertainty in speech recognition and language understanding. Such dialog systems suffer from the fact that only a relatively small number of domain variables are allowed in the model, so as to ensure the generation of good-quality dialog policies. At the same time, the non-language perception modalities on robots, such as vision-based facial expression recognition and Lidar-based distance detection, can hardly be integrated into this process. In this paper, we use a probabilistic commonsense reasoner to “guide” our POMDP-based dialog manager, and present a principled, multimodal dialog management (MDM) framework that allows the robot's dialog belief state to be seamlessly updated by both observations of human spoken language, and exogenous events such as the change of human facial expressions. The MDM approach has been implemented and evaluated both in simulation and on a real mobile robot using guidance tasks.",
"In order to be fully robust and responsive to a dynamically changing real-world environment, intelligent robots will need to engage in a variety of simultaneous reasoning modalities. In particular, in this paper we consider their needs to i) reason with commonsense knowledge, ii) model their nondeter-ministic action outcomes and partial observability, and iii) plan toward maximizing long-term rewards. On one hand, Answer Set Programming (ASP) is good at representing and reasoning with commonsense and default knowledge, but is ill-equipped to plan under probabilistic uncertainty. On the other hand, Partially Observable Markov Decision Processes (POMDPs) are strong at planning under uncertainty toward maximizing long-term rewards, but are not designed to incorporate commonsense knowledge and inference. This paper introduces the CORPP algorithm which combines P-log, a probabilistic extension of ASP, with POMDPs to integrate commonsense reasoning with planning under uncertainty. Our approach is fully implemented and tested on a shopping request identification problem both in simulation and on a real robot. Compared with existing approaches using P-log or POMDPs individually, we observe significant improvements in both efficiency and accuracy.",
"This paper reports the project of a shopping mall service robot, named KeJia, which is designed for customer guidance, providing information and entertainments in a real shopping mall environment. The background, motivations, and requirements of this project are analyzed and presented, which guide the development of the robot system. To develop the robot system, new techniques and improvements of existing methods for mapping, localization, and navigation are proposed to address the challenge of robot motion in a very large, complex, and crowded environment. Moreover, a novel multimodal interaction mechanism is designed by which customers can conveniently interact with the robot either in speech or via a mobile application. The KeJia robot was deployed in a large modern shopping mall with the size of 30,000 m2 and more than 160 shops; field trials were conducted for 40 days where about 530 customers were served. The results of both simulated experiments and field tests confirm the feasibility, stability, a..."
]
} |
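The POMDP-based dialog managers cited above maintain a belief over the user's request and update it after every observation. A minimal Bayes-filter sketch, with made-up transition and observation matrices, could look as follows:

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """One POMDP belief update: b'(s') is proportional to
    O[a][s', o] * sum_s T[a][s, s'] * b(s).

    T[a] is an |S| x |S| transition matrix and O[a] an |S| x |Obs|
    observation matrix; both are toy stand-ins here.
    """
    predicted = b @ T[a]               # prediction step
    updated = predicted * O[a][:, o]   # correction by the observation
    return updated / updated.sum()     # normalize to a distribution

# Two hidden requests, one "ask" action, two noisy answers.
T = {"ask": np.eye(2)}                           # the request doesn't change
O = {"ask": np.array([[0.8, 0.2], [0.3, 0.7]])}  # answer likelihoods
b = np.array([0.5, 0.5])
b = belief_update(b, "ask", 0, T, O)  # heard an answer favoring request 0
print(b)                              # belief mass shifts toward request 0
```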
1907.03535 | 2955537983 | A highly successful approach to route planning in networks (particularly road networks) is to identify a hierarchy in the network that allows faster queries after some preprocessing that basically inserts additional "shortcut"-edges into a graph. In the past there has been a succession of techniques that infer a more and more fine grained hierarchy enabling increasingly more efficient queries. This appeared to culminate in contraction hierarchies that assign one hierarchy level to each vertex. In this paper we show how to identify an even more fine grained hierarchy that assigns one level to each edge of the network. Our findings indicate that this can lead to considerably smaller search spaces in terms of visited edges. Currently, this does not result in improved query times so that it remains an open question whether these edge hierarchies can lead to overall improved performance. However, we believe that the technique as such is a noteworthy enrichment of the portfolio of available techniques that might prove useful in the future. | There has been a lot of work on route planning. Refer to @cite_11 for a recent overview. Here we only give selected references to place EHs into the big picture. Besides hierarchical route planning techniques there are also techniques which direct the shortest path search towards the goal (e.g., landmarks @cite_16 , precomputed cluster distances @cite_0 , arc flags @cite_20 ). On road networks, goal-directed techniques are usually inferior to hierarchical ones since they need considerably more query or preprocessing time. However, combining goal-directed and hierarchical route planning is a useful approach @cite_16 @cite_2 . We expect that this is also possible for EHs using the same techniques as used before. Other techniques allow very fast queries by building shortest paths directly from two (hub labeling @cite_12 ) or three (transit node routing @cite_5 @cite_3 ) precomputed shortcuts without requiring a graph search. However, these methods require considerably more space than EHs.
"cite_N": [
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_12",
"@cite_20",
"@cite_11"
],
"mid": [
"1601313669",
"2132044905",
"2150418461",
"2083019227",
"2151400766",
"2112513979",
"1559814459",
"2104317982"
],
"abstract": [
"Transit Node Routing (TNR) is a fast and exact distance oracle for road networks. We show several new results for TNR. First, we give a surprisingly simple implementation fully based on contraction hierarchies that speeds up preprocessing by an order of magnitude approaching the time for just finding a contraction hierarchy (which alone has two orders of magnitude larger query time). We also develop a very effective purely graph theoretical locality filter without any compromise in query times. Finally, we show that a specialization to the online many-to-one (or one-to-many) shortest path problem.",
"We demonstrate how Dijkstra's algorithm for shortest path queries can be accelerated by using precomputed shortest path distances. Our approach allows a completely flexible tradeoff between query time and space consumption for precomputed distances. In particular, sublinear space is sufficient to give the search a strong “sense of direction”. We evaluate our approach experimentally using large, real-world road networks.",
"In recent years, highly effective hierarchical and goal-directed speed-up techniques for routing in large road networks have been developed. This article makes a systematic study of combinations of such techniques. These combinations turn out to give the best results in many scenarios, including graphs for unit disk graphs, grid networks, and time-expanded timetables. Besides these quantitative results, we obtain general insights for successful combinations.",
"When you drive to somewhere far away, you will leave your current location via one of only a few important traffic junctions. Starting from this informal observation, we developed an algorithmic approach, transit node routing, that allows us to reduce quickest path queries in road networks to a small number of table lookups. For road maps of Western Europe and the United States, our best query times improved over the best previously published figures by two orders of magnitude. This is also more than one million times faster than the best known algorithm for general networks.",
"We study the point-to-point shortest path problem in a setting where preprocessing is allowed. We improve the reach-based approach of Gutman [17] in several ways. In particular, we introduce a bidirectional version of the algorithm that uses implicit lower bounds and we add shortcut arcs to reduce vertex reaches. Our modifications greatly improve both preprocessing and query times. The resulting algorithm is as fast as the best previous method, due to Sanders and Schultes [28]. However, our algorithm is simpler and combines in a natural way with A* search, which yields significantly better query times.",
"[SODA 2010] have recently presented a theoretical analysis of several practical point-to-point shortest path algorithms based on modeling road networks as graphs with low highway dimension. They also analyze a labeling algorithm. While no practical implementation of this algorithm existed, it has the best time bounds. This paper describes an implementation of the labeling algorithm that is faster than any existing method on continental road networks.",
"In this paper, we consider Dijkstra's algorithm for the point-to-point shortest path problem in large and sparse graphs with a given layout. In [1], a method has been presented that uses a partitioning of the graph to perform a preprocessing which allows to speed-up Dijkstra's algorithm considerably. We present an experimental study that evaluates which partitioning methods are suited for this approach. In particular, we examine partitioning algorithms from computational geometry and compare their impact on the speed-up of the shortest-path algorithm. Using a suited partitioning algorithm speed-up factors of 500 and more were achieved. Furthermore, we present an extension of this speed-up technique to multiple levels of partitionings. With this multi-level variant, the same speed-up factors can be achieved with smaller space requirements. It can therefore be seen as a compression of the precomputed data that conserves the correctness of the computed shortest paths.",
"We survey recent advances in algorithms for route planning in transportation networks. For road networks, we show that one can compute driving directions in milliseconds or less even at continental scale. A variety of techniques provide different trade-offs between preprocessing effort, space requirements, and query time. Some algorithms can answer queries in a fraction of a microsecond, while others can deal efficiently with real-time traffic. Journey planning on public transportation systems, although conceptually similar, is a significantly harder problem due to its inherent time-dependent and multicriteria nature. Although exact algorithms are fast enough for interactive queries on metropolitan transit systems, dealing with continent-sized instances requires simplifications or heavy preprocessing. The multimodal route planning problem, which seeks journeys combining schedule-based transportation (buses, trains) with unrestricted modes (walking, driving), is even harder, relying on approximate solutions even for metropolitan inputs."
]
} |
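Among the goal-directed techniques mentioned in this record, the landmark idea is simple enough to sketch in a few lines: precompute distances from a landmark with Dijkstra, then use the triangle inequality d(v, t) >= d(L, t) - d(L, v) as an A* heuristic (ALT-style). The graph and landmark choice below are toy placeholders.

```python
import heapq

def dijkstra(graph, source):
    """Distances from source in a weighted digraph {u: [(v, w), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def alt_astar(graph, s, t, landmark_dists):
    """A* with landmark lower bounds (sketch of the ALT technique)."""
    def h(v):  # valid lower bound on d(v, t) by the triangle inequality
        best = 0.0
        for dl in landmark_dists:
            if t in dl and v in dl:
                best = max(best, dl[t] - dl[v])
        return best
    dist = {s: 0}
    heap = [(h(s), s)]
    while heap:
        f, u = heapq.heappop(heap)
        if u == t:
            return dist[t]
        if f - h(u) > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = dist[u] + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd + h(v), v))
    return float("inf")

# Tiny example: one landmark at node 0; preprocessing is one Dijkstra run.
G = {0: [(1, 2), (2, 5)], 1: [(2, 1), (3, 4)], 2: [(3, 1)], 3: []}
landmarks = [dijkstra(G, 0)]
print(alt_astar(G, 0, 3, landmarks))  # -> 4 (path 0-1-2-3)
```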
1907.03565 | 2966487081 | More than two decades ago, combinatorial topology was shown to be useful for analyzing distributed fault-tolerant algorithms in shared memory systems and in message passing systems. In this work, we show that combinatorial topology can also be useful for analyzing distributed algorithms in networks of arbitrary structure. To illustrate this, we analyze consensus, set-agreement, and approximate agreement in networks, and derive lower bounds for these problems under classical computational settings, such as the LOCAL model and dynamic networks. | The deep connections between combinatorial topology and distributed computing were concurrently and independently identified in @cite_8 and @cite_0 . Since then, numerous outstanding results were obtained using combinatorial topology for various types of tasks, including agreement tasks such as consensus and set-agreement @cite_22 , and symmetry breaking tasks such as renaming @cite_26 @cite_12 @cite_14 . A recent work @cite_23 provides evidence that topological arguments are sometimes necessary. All these results are obtained in the asynchronous shared memory model with crash failures, but combinatorial topology can also be applied to Byzantine failures @cite_11 . Works on message passing models consider only complete communication graphs @cite_9 @cite_39 . Recent results show that combinatorial topology can also be applied in the analysis of mobile computing @cite_34 , demonstrating the generality and flexibility of the topological framework applied to distributed computing. The book @cite_4 provides an extensive introduction to combinatorial topology applied to distributed computing. | {
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_22",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_39",
"@cite_0",
"@cite_23",
"@cite_34",
"@cite_12",
"@cite_11"
],
"mid": [
"2908176613",
"2024581561",
"2946137447",
"1526652053",
"1965990175",
"2056910803",
"2091289286",
"2083306187",
"2899382568",
"2728470441",
"2030003887",
"2016408245"
],
"abstract": [
"The @math -renaming task requires @math processes, each starting with a unique input name (from an arbitrary large range), to coordinate the choice of new output names from a range of size @math . ...",
"In the renaming task, n+1 processes start with unique input names from a large space and must choose unique output names taken from a smaller name space, 0,1,…, K. To rule out trivial solutions, a protocol must be anonymous: the value chosen by a process can depend on its input name and on the execution, but not on the specific process ID. [1990] showed that renaming has a wait-free solution when K≥ 2n. Several algebraic topology proofs of a lower bound stating that no such protocol exists when K",
"Abstract As the structure of contemporary communication networks grows more complex, practical networked distributed systems become prone to component failures. Fault-tolerant consensus in message-...",
"Distributed Computing Through Combinatorial Topology describes techniques for analyzing distributed algorithms based on award winning combinatorial topology research. The authors present a solid theoretical foundation relevant to many real systems reliant on parallelism with unpredictable delays, such as multicore microprocessors, wireless networks, distributed systems, and Internet protocols. Today, a new student or researcher must assemble a collection of scattered conference publications, which are typically terse and commonly use different notations and terminologies. This book provides a self-contained explanation of the mathematics to readers with computer science backgrounds, as well as explaining computer science concepts to readers with backgrounds in applied mathematics. The first section presents mathematical notions and models, including message passing and shared-memory systems, failures, and timing models. The next section presents core concepts in two chapters each: first, proving a simple result that lends itself to examples and pictures that will build up readers' intuition; then generalizing the concept to prove a more sophisticated result. The overall result weaves together and develops the basic concepts of the field, presenting them in a gradual and intuitively appealing way. The book's final section discusses advanced topics typically found in a graduate-level course for those who wish to explore further. Gathers knowledge otherwise spread across research and conference papers using consistent notations and a standard approach to facilitate understandingPresents unique insights applicable to multiple computing fields, including multicore microprocessors, wireless networks, distributed systems, and Internet protocols Synthesizes and distills material into a simple, unified presentation with examples, illustrations, and exercises",
"We give necessary and sufficient combinatorial conditions characterizing the computational tasks that can be solved by N asynchronous processes, up to t of which can fail by halting. The range of possible input and output values for an asynchronous task can be associated with a high-dimensional geometric structure called a simplicial complex. Our main theorem characterizes computability y in terms of the topological properties of this complex. Most notably, a given task is computable only if it can be associated with a complex that is simply connected with trivial homology groups. In other words, the complex has “no holes!” Applications of this characterization include the first impossibility results for several long-standing open problems in distributed computing, such as the “renaming” problem of Attiya et. al., the “k-set agreement” problem of Chaudhuri, and a generalization of the approximate agreement problem.",
"We prove tight bounds on the time needed to solve k-set agreement . In this problem, each processor starts with an arbitrary input value taken from a fixed set, and halts after choosing an output value. In every execution, at most k distinct output values may be chosen, and every processor's output value must be some processor's input value. We analyze this problem in a synchronous, message-passing model where processors fail by crashing. We prove a lower bound of f k +1 degree of coordination required, and the number of faults tolerated, even in idealized models like the synchronous model. The proof of this result is interesting because it is the first to apply topological techniques to the synchronous model.",
"We present a unified, axiomatic approach to proving lower bounds for the k-set agreement problem in both synchronous and asynchronous message-passing models. The proof involves constructing the set of reachable states, proving that these states are highly connected, and then appealing to a well-known topological result that high connectivity implies that set agreement is impossible. We construct the set of reachable states in an iterative fashion using a round operator that we define, and our proof of connectivity is an inductive proof based on this iterative construction and simple properties of the round operator.",
"In the classical consensus problem,each of n processors receives a private input value and produces a decision value which is one of the original input values,with the requirement that all processors decide the same value. A central result in distributed computing is that,in several standard models including the asynchronous shared-memory model,this problem has no determinis- tic solution. The k-set agreement problem is a generalization of the classical consensus proposed by Chaudhuri (Inform. and Comput.,105 (1993),pp. 132-158),where the agreement condition is weak- ened so that the decision values produced may be different,as long as the number of distinct values is at most k .F or n>k ≥ 2 it was not known whether this problem is solvable deterministically in the asynchronous shared memory model. In this paper,we resolve this question by showing that for any k<n ,there is no deterministic wait-free protocol for n processors that solves the k-set agreement problem. The proof technique is new: it is based on the development of a topological structure on the set of possible processor schedules of a protocol. This topological structure has a natural interpretation in terms of the knowledge of the processors of the state of the system. This structure reveals a close analogy between the impossibility of wait-free k-set agreement and the Brouwer fixed point theorem for the k-dimensional ball.",
"We prove that a class of fundamental shared memory tasks are not amenable to certain standard proof techniques in the field. We formally define a class of extension-based proofs, which contains impossibility arguments like the valency proof by Fisher, Lynch and Paterson of the impossibility of wait-free consensus in an asynchronous system. We introduce a framework which models such proofs as an interaction between a prover and an adversarial protocol. Our main contribution is showing that extension-based proofs are inherently limited in power: for instance, they cannot establish the impossibility of solving (n-1)-set agreement among n > 2 processes in a wait-free manner. This impossibility result does have proofs based on combinatorial topology. However, it was unknown whether proofs based on simpler techniques were possible.",
"The LOOK-COMPUTE-MOVE model for a set of autonomous robots has been thoroughly studied for over two decades. Each robot repeatedly LOOKS at its surroundings and obtains a snapshot containing the positions of all robots; based on this information, the robot COMPUTES a destination and then MOVES to it. Previous work assumed all robots are present at the beginning of the computation. What would be the effect of robots appearing asynchronously? This paper studies thisquestion, for problems of bringing the robots close together, andexposes an intimate connection with combinatorial topology. A central problem in the mobile robots area is the gathering problem. In its discrete version, the robots start at vertices in some graph G known to them, move towards the same vertex and stop. The paper shows that if robots are asynchronous and may crash, then gathering is impossible for any graph G with at least two vertices, even if robots can have unique IDs, remember the past, know the same names for the vertices of G and use an arbitrary number of lights to communicate witheach other. Next, the paper studies two weaker variants of gathering: edge gathering and 1-gathering. For both problems we present possibility and impossibility results. The solvability of edge gathering is fully characterized: it is solvable for three or more robots on a given graph if and only if the graph is acyclic. Finally, general robot tasks in a graph are considered. A combinatorial topology characterization for the solvable tasks is presented, by a reduction of the asynchronous fault-tolerant LOOK-COMPUTE-MOVE model to a wait-free read write shared-memory computing model, bringing together two areas that have been independently studied for a long time into a common theoretical foundation.",
"In the renaming task n + 1 processes start with unique input names taken from a large space and must choose unique output names taken from a smaller name space, 0, 1, . . . , K. To rule out trivial solutions, a protocol must be anonymous: the value chosen by a process can depend on its input name and on the execution, but not on the specific process id. showed in 1990 that renaming has a wait-free solution when K ≥ 2n. Several proofs of a lower bound stating that no such protocol exists when K < 2n have been published. We presented in the ACM PODC 2008 conference the following two results. First, we presented the first completely combinatorial lower bound proof stating that no such a protocol exists when K < 2n. This bound holds for infinitely many values of n. Second, for the other values of n, we proved that the lower bound for K < 2n is incorrect, exhibiting a wait-free renaming protocol for K = 2n−1. More precisely, we presented a theorem stating that there exists a wait-free renaming protocol for K < 2n if and only if the set of integers ( n+1 i+1 | 0 i n-1 2 ) are relatively prime. This paper is the first part of the full version of the results presented in the ACM PODC 2008 conference. It includes only the lower bound. Namely, we show here that no protocol for renaming exists when K < 2n, if n is such that ( n+1 i+1 | 0 i n-1 2 ) are not relatively prime. We prove this result using the known equivalence of K-renaming for K = 2n−1 and the weak symmetry breaking task. In this task processes have no input values and the output values are 0 or 1, and it is required that in every execution in which all processes participate, at least one process decides 0 and at least one process decides 1. The full version of the upper bound appears in a companion paper [10].",
"In this work, we extend the topology-based approach for characterizing computability in asynchronous crash-failure distributed systems to asynchronous Byzantine systems. We give the first theorem with necessary and sufficient conditions to solve arbitrary tasks in asynchronous Byzantine systems where an adversary chooses faulty processes. For colorless tasks, an important subclass of distributed problems, the general result reduces to an elegant model that effectively captures the relation between the number of processes, the number of failures, as well as the topological structure of the task's simplicial complexes."
]
} |
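Of the tasks named in this record's abstract, approximate agreement is the easiest to simulate: in a toy synchronous setting, nodes repeatedly average their value with their neighbors' until all values are within epsilon. This sketch only makes the task concrete; the record itself is about characterizing solvability, not this particular algorithm.

```python
def approximate_agreement(adj, values, epsilon):
    """Synchronous averaging on a connected graph: each round, every node
    replaces its value by the mean over its closed neighborhood. Outputs
    stay within the range of the inputs (validity), and the spread shrinks
    until all values are within epsilon of each other (agreement)."""
    rounds = 0
    while max(values.values()) - min(values.values()) > epsilon:
        values = {
            v: (values[v] + sum(values[u] for u in adj[v])) / (1 + len(adj[v]))
            for v in adj
        }
        rounds += 1
    return values, rounds

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a path network
vals, r = approximate_agreement(adj, {0: 0.0, 1: 0.0, 2: 1.0, 3: 1.0}, 0.1)
print(r, vals)  # rounds needed grow with the diameter and with 1/epsilon
```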
1907.03565 | 2966487081 | More than two decades ago, combinatorial topology was shown to be useful for analyzing distributed fault-tolerant algorithms in shared memory systems and in message passing systems. In this work, we show that combinatorial topology can also be useful for analyzing distributed algorithms in networks of arbitrary structure. To illustrate this, we analyze consensus, set-agreement, and approximate agreement in networks, and derive lower bounds for these problems under classical computational settings, such as the LOCAL model and dynamic networks. | In contrast, distributed network computing has not been impacted by combinatorial topology. This domain of distributed computing has been extremely active and productive over the last decade, analyzing a large variety of network problems in the so-called LOCAL model @cite_30 , capturing the ability to solve tasks locally in networks (the CONGEST model has also been the subject of tremendous progress, but it does not support full-information protocols, and is thus out of the scope of our paper). We refer to @cite_32 @cite_36 @cite_37 @cite_40 @cite_21 @cite_41 @cite_33 @cite_25 @cite_28 for a non-exhaustive list of achievements in this context. However, all these achievements were based on an operational approach, using sophisticated algorithmic techniques and tools solely from graph theory. Similarly, the existing lower bounds on the round-complexity of tasks in the LOCAL model @cite_5 @cite_2 @cite_37 @cite_15 @cite_10 were obtained using graph-theoretical arguments only. The question of whether adopting a higher-dimensional approach by using topology would help in the context of local computing, be it for a better conceptual understanding of the algorithms, or for providing stronger technical tools for lower bounds, is, to our knowledge, entirely open.
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_33",
"@cite_36",
"@cite_41",
"@cite_28",
"@cite_21",
"@cite_32",
"@cite_40",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_25"
],
"mid": [
"1568961751",
"2243868910",
"2552279664",
"2467514673",
"1666479227",
"2138623498",
"2607483562",
"2771757446",
"2772780650",
"2054910423",
"1957963525",
"909360139",
"2954841372",
"2963714020"
],
"abstract": [
"",
"We show that any randomised Monte Carlo distributed algorithm for the Lovasz local lemma requires Omega(log log n) communication rounds, assuming that it finds a correct assignment with high probability. Our result holds even in the special case of d = O(1), where d is the maximum degree of the dependency graph. By prior work, there are distributed algorithms for the Lovasz local lemma with a running time of O(log n) rounds in bounded-degree graphs, and the best lower bound before our work was Omega(log* n) rounds [ 2014].",
"This paper is centered on the complexity of graph problems in the well-studied LOCAL model of distributed computing, introduced by Linial [FOCS '87]. It is widely known that for many of the classic distributed graph problems (including maximal independent set (MIS) and (Δ+1)-vertex coloring), the randomized complexity is at most polylogarithmic in the size n of the network, while the best deterministic complexity is typically 2O(√logn). Understanding and potentially narrowing down this exponential gap is considered to be one of the central long-standing open questions in the area of distributed graph algorithms. We investigate the problem by introducing a complexity-theoretic framework that allows us to shed some light on the role of randomness in the LOCAL model. We define the SLOCAL model as a sequential version of the LOCAL model. Our framework allows us to prove completeness results with respect to the class of problems which can be solved efficiently in the SLOCAL model, implying that if any of the complete problems can be solved deterministically in logn rounds in the LOCAL model, we can deterministically solve all efficient SLOCAL-problems (including MIS and (Δ+1)-coloring) in logn rounds in the LOCAL model. Perhaps most surprisingly, we show that a rather rudimentary looking graph coloring problem is complete in the above sense: Color the nodes of a graph with colors red and blue such that each node of sufficiently large polylogarithmic degree has at least one neighbor of each color. The problem admits a trivial zero-round randomized solution. The result can be viewed as showing that the only obstacle to getting efficient determinstic algorithms in the LOCAL model is an efficient algorithm to approximately round fractional values into integer values. In addition, our formal framework also allows us to develop polylogarithmic-time randomized distributed algorithms in a simpler way. As a result, we provide a polylog-time distributed approximation scheme for arbitrary distributed covering and packing integer linear programs.",
"Symmetry-breaking problems are among the most well studied in the field of distributed computing and yet the most fundamental questions about their complexity remain open. In this article we work in the LOCAL model (where the input graph and underlying distributed network are identical) and study the randomized complexity of four fundamental symmetry-breaking problems on graphs: computing MISs (maximal independent sets), maximal matchings, vertex colorings, and ruling sets. A small sample of our results includes the following: —An MIS algorithm running in O(log2D Δ L 2√log n, and comes close to the Ω(flog Δ log log Δ lower bound of Kuhn, Moscibroda, and Wattenhofer. —A maximal matching algorithm running in O(log Δ + log 4log n) time. This is the first significant improvement to the 1986 algorithm of Israeli and Itai. Moreover, its dependence on Δ is nearly optimal. —A (Δ + 1)-coloring algorithm requiring O(log Δ + 2o(√log log n) time, improving on an O(log Δ + √log n)-time algorithm of Schneider and Wattenhofer. —A method for reducing symmetry-breaking problems in low arboricity degeneracy graphs to low-degree graphs. (Roughly speaking, the arboricity or degeneracy of a graph bounds the density of any subgraph.) Corollaries of this reduction include an O(√log n)-time maximal matching algorithm for graphs with arboricity up to 2√log n and an O(log 2 3n)-time MIS algorithm for graphs with arboricity up to 2(log n)1 3. Each of our algorithms is based on a simple but powerful technique for reducing a randomized symmetry-breaking task to a corresponding deterministic one on a poly(log n)-size graph.",
"The Maximal Independent Set (MIS) problem is one of the basics in the study of locality in distributed graph algorithms. This paper presents a very simple randomized algorithm for this problem providing a near-optimal local complexity, which incidentally, when combined with some known techniques, also leads to a near-optimal global complexity. Classical MIS algorithms of Luby [STOC'85] and Alon, Babai and Itai [JALG'86] provide the global complexity guarantee that, with high probability1, all nodes terminate after O(log n) rounds. In contrast, our initial focus is on the local complexity, and our main contribution is to provide a very simple algorithm guaranteeing that each particular node v terminates after O(log deg(v) + log 1 e) rounds, with probability at least 1 -- e. The degree-dependency in this bound is optimal, due to a lower bound of Kuhn, Moscibroda, and Wattenhofer [PODC'04]. Interestingly, this local complexity smoothly transitions to a global complexity: by adding techniques of Barenboim, Elkin, Pettie, and Schneider [FOCS'12; arXiv: 1202.1983v3], we2 get an MIS algorithm with a high probability global complexity of O(log Δ) + 2O([EQUATION]), where Δ denotes the maximum degree. This improves over the O(log2 Δ) + 2O([EQUATION]) result of , and gets close to the Ω(min log Δ, [EQUATION] ) lower bound of Corollaries include improved algorithms for MIS in graphs of upper-bounded arboricity, or lower-bounded girth, for Ruling Sets, for MIS in the Local Computation Algorithms (LCA) model, and a faster distributed algorithm for the Lovasz Local Lemma.",
"A local algorithm is a distributed algorithm that runs in constant time, independently of the size of the network. Being highly scalable and fault tolerant, such algorithms are ideal in the operation of large-scale distributed systems. Furthermore, even though the model of local algorithms is very limited, in recent years we have seen many positive results for nontrivial problems. This work surveys the state-of-the-art in the field, covering impossibility results, deterministic local algorithms, randomized local algorithms, and local algorithms for geometric graphs.",
"We present a deterministic distributed algorithm that computes a (2δ-1)-edge-coloring, or even list-edge-coloring, in any n-node graph with maximum degree δ, in O(log^8 δ ⋅ log n) rounds. This answers one of the long-standing open questions of distributed graph algorithms from the late 1980s, which asked for a polylogarithmic-time algorithm. See, e.g., Open Problem 4 in the Distributed Graph Coloring book of Barenboim and Elkin. The previous best round complexities were 2^ O(√ log n ) by Panconesi and Srinivasan [STOC92] and Õ(√ δ ) + O(log^* n) by Fraigniaud, Heinrich, and Kosowski [FOCS16]. A corollary of our deterministic list-edge-coloring also improves the randomized complexity of (2δ-1)-edge-coloring to poly(loglog n) rounds.The key technical ingredient is a deterministic distributed algorithm for hypergraph maximal matching, which we believe will be of interest beyond this result. In any hypergraph of rank r — where each hyperedge has at most r vertices — with n nodes and maximum degree δ, this algorithm computes a maximal matching in O(r^5 log^ 6+log r δ ⋅ log n) rounds.This hypergraph matching algorithm and its extensions also lead to a number of other results. In particular, we obtain a polylogarithmic-time deterministic distributed maximal independent set (MIS) algorithm for graphs with bounded neighborhood independence, hence answering Open Problem 5 of Barenboim and Elkins book, a ((log δ ε)^ O(log 1 ε) )-round deterministic algorithm for (1+ε)-approximation of maximum matching, and a quasi-polylogarithmic-time deterministic distributed algorithm for orienting λ-arboricity graphs with out-degree at most (1+ε)λ , for any constant ε 0, hence partially answering Open Problem 10 of Barenboim and Elkins book.",
"We consider graph coloring and related problems in the distributed message-passing model. Locally-iterative algorithms are especially important in this setting. These are algorithms in which each vertex decides about its next color only as a function of the current colors in its 1-hop neighborhood. In STOC'93 Szegedy and Vishwanathan showed that any locally-iterative (Delta+1)-coloring algorithm requires Omega(Delta log Delta + log^* n) rounds, unless there is \"a very special type of coloring that can be very efficiently reduced\" SV93 . In this paper we obtain this special type of coloring. Specifically, we devise a locally-iterative (Delta+1)-coloring algorithm with running time O(Delta + log^* n), i.e., below Szegedy-Vishwanathan barrier. This demonstrates that this barrier is not an inherent limitation for locally-iterative algorithms. As a result, we also achieve significant improvements for dynamic, self-stabilizing and bandwidth-restricted settings: - We obtain self-stabilizing distributed algorithms for (Delta+1)-vertex-coloring, (2Delta-1)-edge-coloring, maximal independent set and maximal matching with O(Delta+log^* n) time. This significantly improves previously-known results that have O(n) or larger running times GK10 . - We devise a (2Delta-1)-edge-coloring algorithm in the CONGEST model with O(Delta + log^* n) time and in the Bit-Round model with O(Delta + log n) time. Previously-known algorithms had superlinear dependency on Delta for (2Delta-1)-edge-coloring in these models. - We obtain an arbdefective coloring algorithm with running time O( Delta + log^* n). We employ it in order to compute proper colorings that improve the recent state-of-the-art bounds of Barenboim from PODC'15 B15 and from FOCS'16 FHK16 by polylogarithmic factors. - Our algorithms are applicable to the SET-LOCAL model of HKMS15 .",
"Vertex coloring is one of the classic symmetry breaking problems studied in distributed computing. In this paper we present a new algorithm for (Δ+1)-list coloring in the randomized LOCAL model running in O(log∗n + Detd(poly logn)) time, where Detd(n′) is the deterministic complexity of (deg+1)-list coloring (v’s palette has size deg(v)+1) on n′-vertex graphs. This improves upon a previous randomized algorithm of Harris, Schneider, and Su (STOC 2016). with complexity O(√logΔ + loglogn + Detd(poly logn)), and (when Δ is sufficiently large) is much faster than the best known deterministic algorithm of Fraigniaud, Heinrich, and Kosowski (FOCS 2016), with complexity O(√Δlog2.5Δ + log* n). Our algorithm appears to be optimal. It matches the Ω(log∗n) randomized lower bound, due to Naor (SIDMA 1991) and sort of matches the Ω(Det(poly logn)) randomized lower bound due to Chang, Kopelowitz, and Pettie (FOCS 2016), where Det is the deterministic complexity of (Δ+1)-list coloring. The best known upper bounds on Detd(n′) and Det(n′) are both 2O(√logn′) by Panconesi and Srinivasan (Journal of Algorithms 1996), and it is quite plausible that the complexities of both problems are the same, asymptotically.",
"This paper concerns a number of algorithmic problems on graphs and how they may be solved in a distributed fashion. The computational model is such that each node of the graph is occupied by a processor which has its own ID. Processors are restricted to collecting data from others which are at a distance at most t away from them in t time units, but are otherwise computationally unbounded. This model focuses on the issue of locality in distributed processing, namely, to what extent a global solution to a computational problem can be obtained from locally available data.Three results are proved within this model: • A 3-coloring of an n-cycle requires time @math . This bound is tight, by previous work of Cole and Vishkin. • Any algorithm for coloring the d-regular tree of radius r which runs for time at most @math requires at least @math colors. • In an n-vertex graph of largest degree @math , an @math -coloring may be found in time @math .",
"The question of what can be computed, and how efficiently, is at the core of computer science. Not surprisingly, in distributed systems and networking research, an equally fundamental question is what can be computed in a distributed fashion. More precisely, if nodes of a network must base their decision on information in their local neighborhood only, how well can they compute or approximate a global (optimization) problem? In this paper we give the first polylogarithmic lower bound on such local computation for (optimization) problems including minimum vertex cover, minimum (connected) dominating set, maximum matching, maximal independent set, and maximal matching. In addition, we present a new distributed algorithm for solving general covering and packing linear programs. For some problems this algorithm is tight with the lower bounds, whereas for others it is a distributed approximation scheme. Together, our lower and upper bounds establish the local computability and approximability of a large class of problems, characterizing how much local information is required to solve these tasks.",
"By prior work, there is a distributed graph algorithm that finds a maximal fractional matching (maximal edge packing) in @math O(Δ) rounds, independently of @math n; here @math Δ is the maximum degree of the graph and @math n is the number of nodes in the graph. We show that this is optimal: there is no distributed algorithm that finds a maximal fractional matching in @math o(Δ) rounds, independently of @math n. Our work gives the first linear-in- @math Δ lower bound for a natural graph problem in the standard @math LOCAL model of distributed computing--prior lower bounds for a wide range of graph problems have been at best logarithmic in @math Δ.",
"",
"The (∆+1)-coloring problem is a fundamental symmetry breaking problem in distributed computing. We give a new randomized coloring algorithm for (∆+1)-coloring running in O(√log ∆)+ 2^O(√log log n) rounds with probability 1-1 n^Ω(1) in a graph with n nodes and maximum degree ∆. This implies that the (∆+1)-coloring problem is easier than the maximal independent set problem and the maximal matching problem, due to their lower bounds by Kuhn, Moscibroda, and Wattenhofer [PODC'04]. Our algorithm also extends to the list-coloring problem where the palette of each node contains ∆+1 colors."
]
} |
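One classic LOCAL-model technique behind several of the coloring results quoted above is deterministic color reduction on an oriented cycle, in the style of Cole and Vishkin (mentioned in Linial's abstract). The sketch below is a plain simulation for illustration, not any cited paper's exact algorithm.

```python
def cv_reduce(colors, successor):
    """One Cole-Vishkin-style round on an oriented cycle: each node finds
    the lowest bit position i where its color differs from its successor's
    and adopts 2*i + (its own i-th bit) as its new color. Adjacent nodes
    can never pick the same (i, bit) pair, so the coloring stays proper
    while the number of colors drops roughly from K to 2*ceil(log2 K)."""
    new = {}
    for v, c in colors.items():
        d = c ^ colors[successor[v]]
        i = (d & -d).bit_length() - 1   # index of the lowest differing bit
        new[v] = 2 * i + ((c >> i) & 1)
    return new

# Oriented 5-cycle; the unique IDs serve as the initial proper coloring.
succ = {0: 1, 1: 2, 2: 3, 3: 4, 4: 0}
colors = {0: 10, 1: 27, 2: 4, 3: 181, 4: 33}
while max(colors.values()) >= 6:  # iterate down to at most six colors
    colors = cv_reduce(colors, succ)
print(colors)  # a proper coloring of the cycle with colors in {0..5}
```

Each round shrinks the palette roughly logarithmically, which is where the O(log* n) running times quoted above come from.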
1812.10779 | 2908322352 | Nowadays, pedestrian detection is one of the pivotal fields in computer vision, especially when performed over video surveillance scenarios. People detection methods are highly sensitive to occlusions among pedestrians, which dramatically degrades performance in crowded scenarios. The cutback in camera prices has allowed generalizing multi-camera set-ups, which can better confront occlusions by using different points of view to disambiguate detections. In this paper we present an approach to improve the performance of these multi-camera systems and to make them independent of the considered scenario, via an automatic understanding of the scene content. This semantic information, obtained from a semantic segmentation, is used 1) to automatically generate a common Area of Interest for all cameras, instead of the usual manual definition of this area; and 2) to improve the 2D detections of each camera via an optimization technique which maximizes coherence of every detection both in all 2D views and in the 3D world, obtaining best-fitted bounding boxes and a consensus height for every pedestrian. Experimental results on five publicly available datasets show that the proposed approach, which does not require any training stage, outperforms state-of-the-art multi-camera pedestrian detectors non specifically trained for these datasets, which demonstrates the expected semantic-based robustness to different scenarios. | There are several approaches @cite_10 @cite_18 that rely on manually annotated operational areas where evaluation is performed. An advantage of these areas is that camera calibration errors are limited and controlled. Besides, these areas are defined to maximize the overlap between the fields of view of the involved cameras. However, the manual annotation of these operational areas hinders the generalization of people detection approaches. Our previous work in this domain @cite_20 resulted in an automatic method for the cooperative extraction of operational areas in scenarios recorded with multiple moving cameras: semantic evidences from different junctures, cameras, and points-of-view are spatio-temporally aligned on a common ground plane and are used to automatically define an operational area or Area of Interest (AoI).
"cite_N": [
"@cite_18",
"@cite_10",
"@cite_20"
],
"mid": [
"",
"1994774201",
"2889337521"
],
"abstract": [
"",
"Multi-camera pedestrian detection is the challenging problem in the field of surveillance video analysis. However, existing approaches may produce \"phantoms\" (i.e., fake pedestrians) due to the heavy occlusions in real surveillance scenario, while calibration errors and the diverse heights of pedestrians may also heavily decrease the detection performance. To address these problems, this paper proposes a robust multiple cameras pedestrian detection approach with multi-view Bayesian network model (MvBN). Given the preliminary results obtained by any multi-view pedestrian detection method, which are actually comprised of both real pedestrians and phantoms, the MvBN is used to model both the occlusion relationship and the homography correspondence between them in all camera views. As such, the removal of phantoms can be formulated as an MvBN inference problem. Moreover, to reduce the influence of the calibration errors and keep robust to the diverse heights of pedestrians, a height-adaptive projection (HAP) method is proposed to further improve the detection performance by utilizing a local search process in a small neighborhood of heights and locations of the detected pedestrians. Experimental results on four public benchmarks show that our method outperforms several state-of-the-art algorithms remarkably and demonstrates high robustness in different surveillance scenes. HighlightsA multi-view Bayesian network is proposed to model pedestrian candidates and their occlusion relationships in all views.A parameter learning algorithm is developed for MvBN by using a set of auxiliary, real-valued, and continuous variables.A height-adaptive projection is proposed to make the final detection robust to synthesis noises and calibration errors.Our approach is recognized as the best performer in five PETS evaluations from 2009 to 2013.",
"Nowadays, video surveillance scenarios usually rely on manually annotated focus areas to constrain automatic video analysis tasks. Although manual annotation simplifies several stages of the analysis, its use hinders the scalability of the developed solutions and might induce operational problems in scenarios recorded with multiple moving cameras (MMCs). To tackle these problems, an automatic method for the cooperative extraction of areas of interest (AoIs) is proposed. Each captured frame is segmented into regions with semantic roles using a state-of-the-art method. Semantic evidences from different junctures, cameras, and points-of-view are, then, spatio-temporally aligned on a common ground plane. Experimental results on widely used datasets recorded with multiple but static cameras suggest that this process provides broader and more accurate AoIs than those manually defined in the datasets. Moreover, the proposed method naturally determines the projection of obstacles and functional objects in the scene, paving the road towards systems focused on the automatic analysis of human behavior. To our knowledge, this is the first study dealing with this problem, as evidenced by the lack of publicly available MMC benchmarks. To also cope with this issue, we provide a new MMC dataset with associated semantic scene annotations."
]
} |
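The ground-plane fusion behind @cite_20 in the record above can be illustrated with a short sketch: per-camera binary masks of walkable pixels (obtained from any semantic segmentation) are warped onto a common ground-plane grid via homographies, and cells supported by enough cameras form the AoI. The function name, the `min_views` threshold, and the mask source are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch: fuse per-camera semantic evidence into a common AoI.
import numpy as np
import cv2

def fuse_aoi(masks, homographies, plane_size, min_views=2):
    """Accumulate walkable evidence from every camera on the ground plane.

    masks: list of HxW binary arrays (1 = walkable pixel, per camera).
    homographies: list of 3x3 image-to-ground-plane homographies.
    plane_size: (rows, cols) of the common ground-plane grid.
    """
    acc = np.zeros(plane_size, dtype=np.float32)
    for mask, H in zip(masks, homographies):
        warped = cv2.warpPerspective(mask.astype(np.float32), H,
                                     (plane_size[1], plane_size[0]))
        acc += (warped > 0.5).astype(np.float32)
    # A ground-plane cell belongs to the AoI if enough views support it.
    return acc >= min_views
```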
1812.10779 | 2908322352 | Nowadays, pedestrian detection is one of the pivotal fields in computer vision, especially when performed over video surveillance scenarios. People detection methods are highly sensitive to occlusions among pedestrians, which dramatically degrades performance in crowded scenarios. The cutback in camera prices has allowed generalizing multi-camera set-ups, which can better confront occlusions by using different points of view to disambiguate detections. In this paper we present an approach to improve the performance of these multi-camera systems and to make them independent of the considered scenario, via an automatic understanding of the scene content. This semantic information, obtained from a semantic segmentation, is used 1) to automatically generate a common Area of Interest for all cameras, instead of the usual manual definition of this area; and 2) to improve the 2D detections of each camera via an optimization technique which maximizes coherence of every detection both in all 2D views and in the 3D world, obtaining best-fitted bounding boxes and a consensus height for every pedestrian. Experimental results on five publicly available datasets show that the proposed approach, which does not require any training stage, outperforms state-of-the-art multi-camera pedestrian detectors not specifically trained for these datasets, which demonstrates the expected semantic-based robustness to different scenarios. | Semantic segmentation is the task of assigning a unique object label to every pixel of an image. Top-performing strategies for semantic segmentation are based on CNNs. For instance, a dense up-sampling CNN can be used to generate pixel-level predictions within a hybrid dilated convolution framework @cite_12 . Performance can be boosted through the use of an ensemble of many relatively shallow networks @cite_3 . Contextual information can be implicitly used by including relationships between different labels---e.g. an airplane is likely to be on a runway or flying in the sky but not on the water--- @cite_6 . These relationships make it possible to reduce the complexity associated with large sets of object labels, generally improving performance. | {
"cite_N": [
"@cite_3",
"@cite_6",
"@cite_12"
],
"mid": [
"2952147788",
"2952596663",
"2950510876"
],
"abstract": [
"The trend towards increasingly deep neural networks has been driven by a general observation that increasing depth increases the performance of a network. Recently, however, evidence has been amassing that simply increasing depth may not be the best way to increase performance, particularly given other limitations. Investigations into deep residual networks have also suggested that they may not in fact be operating as a single deep network, but rather as an ensemble of many relatively shallow networks. We examine these issues, and in doing so arrive at a new interpretation of the unravelled view of deep residual networks which explains some of the behaviours that have been observed experimentally. As a result, we are able to derive a new, shallower, architecture of residual networks which significantly outperforms much deeper models such as ResNet-200 on the ImageNet classification dataset. We also show that this performance is transferable to other problem domains by developing a semantic segmentation approach which outperforms the state-of-the-art by a remarkable margin on datasets including PASCAL VOC, PASCAL Context, and Cityscapes. The architecture that we propose thus outperforms its comparators, including very deep ResNets, and yet is more efficient in memory use and sometimes also in training time. The code and models are available at this https URL",
"Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.",
"Recent advances in deep learning, especially deep convolutional neural networks (CNNs), have led to significant improvement over previous semantic segmentation systems. Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are of both theoretical and practical value. First, we design dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields (RF) of the network to aggregate global information; 2) alleviates what we call the \"gridding issue\" caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a state-of-art result of 80.1 mIOU in the test set at the time of submission. We also have achieved state-of-the-art overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Our source code can be found at this https URL ."
]
} |
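To make the hybrid dilated convolution idea of @cite_12 in the record above concrete, here is a minimal PyTorch sketch, assuming 3x3 kernels and the illustrative dilation pattern (1, 2, 5); channel sizes are arbitrary and this is not the paper's exact architecture. Varying the rates instead of stacking equal dilations is what avoids the gridding artifact.

```python
# Hypothetical HDC-style block: stacked 3x3 dilated convolutions.
import torch
import torch.nn as nn

class HDCBlock(nn.Module):
    def __init__(self, channels, rates=(1, 2, 5)):
        super().__init__()
        layers = []
        for r in rates:
            # padding == dilation keeps the spatial size for a 3x3 kernel.
            layers += [nn.Conv2d(channels, channels, 3, padding=r, dilation=r),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

feat = torch.randn(1, 64, 128, 128)
print(HDCBlock(64)(feat).shape)  # torch.Size([1, 64, 128, 128])
```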
1812.10779 | 2908322352 | Nowadays, pedestrian detection is one of the pivotal fields in computer vision, especially when performed over video surveillance scenarios. People detection methods are highly sensitive to occlusions among pedestrians, which dramatically degrades performance in crowded scenarios. The cutback in camera prices has allowed generalizing multi-camera set-ups, which can better confront occlusions by using different points of view to disambiguate detections. In this paper we present an approach to improve the performance of these multi-camera systems and to make them independent of the considered scenario, via an automatic understanding of the scene content. This semantic information, obtained from a semantic segmentation, is used 1) to automatically generate a common Area of Interest for all cameras, instead of the usual manual definition of this area; and 2) to improve the 2D detections of each camera via an optimization technique which maximizes coherence of every detection both in all 2D views and in the 3D world, obtaining best-fitted bounding boxes and a consensus height for every pedestrian. Experimental results on five publicly available datasets show that the proposed approach, which does not require any training stage, outperforms state-of-the-art multi-camera pedestrian detectors not specifically trained for these datasets, which demonstrates the expected semantic-based robustness to different scenarios. | An interesting example is the use of a multi-view model shaped by a Bayesian network to model the relationships between occlusions @cite_10 . Detections are here assumed to be images of either pedestrians or phantoms, the former differentiated from the latter by inference on the network. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1994774201"
],
"abstract": [
"Multi-camera pedestrian detection is the challenging problem in the field of surveillance video analysis. However, existing approaches may produce \"phantoms\" (i.e., fake pedestrians) due to the heavy occlusions in real surveillance scenario, while calibration errors and the diverse heights of pedestrians may also heavily decrease the detection performance. To address these problems, this paper proposes a robust multiple cameras pedestrian detection approach with multi-view Bayesian network model (MvBN). Given the preliminary results obtained by any multi-view pedestrian detection method, which are actually comprised of both real pedestrians and phantoms, the MvBN is used to model both the occlusion relationship and the homography correspondence between them in all camera views. As such, the removal of phantoms can be formulated as an MvBN inference problem. Moreover, to reduce the influence of the calibration errors and keep robust to the diverse heights of pedestrians, a height-adaptive projection (HAP) method is proposed to further improve the detection performance by utilizing a local search process in a small neighborhood of heights and locations of the detected pedestrians. Experimental results on four public benchmarks show that our method outperforms several state-of-the-art algorithms remarkably and demonstrates high robustness in different surveillance scenes. HighlightsA multi-view Bayesian network is proposed to model pedestrian candidates and their occlusion relationships in all views.A parameter learning algorithm is developed for MvBN by using a set of auxiliary, real-valued, and continuous variables.A height-adaptive projection is proposed to make the final detection robust to synthesis noises and calibration errors.Our approach is recognized as the best performer in five PETS evaluations from 2009 to 2013."
]
} |
1812.10779 | 2908322352 | Nowadays, pedestrian detection is one of the pivotal fields in computer vision, especially when performed over video surveillance scenarios. People detection methods are highly sensitive to occlusions among pedestrians, which dramatically degrades performance in crowded scenarios. The cutback in camera prices has allowed generalizing multi-camera set-ups, which can better confront occlusions by using different points of view to disambiguate detections. In this paper we present an approach to improve the performance of these multi-camera systems and to make them independent of the considered scenario, via an automatic understanding of the scene content. This semantic information, obtained from a semantic segmentation, is used 1) to automatically generate a common Area of Interest for all cameras, instead of the usual manual definition of this area; and 2) to improve the 2D detections of each camera via an optimization technique which maximizes coherence of every detection both in all 2D views and in the 3D world, obtaining best-fitted bounding boxes and a consensus height for every pedestrian. Experimental results on five publicly available datasets show that the proposed approach, which does not require any training stage, outperforms state-of-the-art multi-camera pedestrian detectors not specifically trained for these datasets, which demonstrates the expected semantic-based robustness to different scenarios. | Recent approaches are focused on deep learning methods. The combination of CNNs and Conditional Random Fields (CRF) can be used to explicitly model ambiguities in crowded scenes @cite_1 . High-order CRF terms are used to model potential occlusions, providing robust pedestrian detection. Alternatively, multi-view detection can be handled by an end-to-end deep learning method based on an occlusion-aware model for monocular pedestrian detection and a multi-view fusion architecture @cite_4 . | {
"cite_N": [
"@cite_1",
"@cite_4"
],
"mid": [
"2608772507",
"2951221967"
],
"abstract": [
"People detection in single 2D images has improved greatly in recent years. However, comparatively little of this progress has percolated into multi-camera multi-people tracking algorithms, whose performance still degrades severely when scenes become very crowded. In this work, we introduce a new architecture that combines Convolutional Neural Nets and Conditional Random Fields to explicitly model those ambiguities. One of its key ingredients are high-order CRF terms that model potential occlusions and give our approach its robustness even when many people are present. Our model is trained end-to-end and we show that it outperforms several state-of-art algorithms on challenging scenes.",
"This paper addresses the problem of multi-view people occupancy map estimation. Existing solutions for this problem either operate per-view, or rely on a background subtraction pre-processing. Both approaches lessen the detection performance as scenes become more crowded. The former does not exploit joint information, whereas the latter deals with ambiguous input due to the foreground blobs becoming more and more interconnected as the number of targets increases. Although deep learning algorithms have proven to excel on remarkably numerous computer vision tasks, such a method has not been applied yet to this problem. In large part this is due to the lack of large-scale multi-camera data-set. The core of our method is an architecture which makes use of monocular pedestrian data-set, available at larger scale then the multi-view ones, applies parallel processing to the multiple video streams, and jointly utilises it. Our end-to-end deep learning method outperforms existing methods by large margins on the commonly used PETS 2009 data-set. Furthermore, we make publicly available a new three-camera HD data-set. Our source code and trained models will be made available under an open-source license."
]
} |
1812.10779 | 2908322352 | Nowadays, pedestrian detection is one of the pivotal fields in computer vision, especially when performed over video surveillance scenarios. People detection methods are highly sensitive to occlusions among pedestrians, which dramatically degrades performance in crowded scenarios. The cutback in camera prices has allowed generalizing multi-camera set-ups, which can better confront occlusions by using different points of view to disambiguate detections. In this paper we present an approach to improve the performance of these multi-camera systems and to make them independent of the considered scenario, via an automatic understanding of the scene content. This semantic information, obtained from a semantic segmentation, is used 1) to automatically generate a common Area of Interest for all cameras, instead of the usual manual definition of this area; and 2) to improve the 2D detections of each camera via an optimization technique which maximizes coherence of every detection both in all 2D views and in the 3D world, obtaining best-fitted bounding boxes and a consensus height for every pedestrian. Experimental results on five publicly available datasets show that the proposed approach, which does not require any training stage, outperforms state-of-the-art multi-camera pedestrian detectors not specifically trained for these datasets, which demonstrates the expected semantic-based robustness to different scenarios. | Algorithms in all of these groups require accurate scene calibration: small calibration errors can produce inaccurate projections and back-projections which may contravene key assumptions of the methods. These errors may lead to misaligned detections, hindering their later use. To cope with this problem, one can rely on a Height-Adaptive Projection (HAP) procedure in which a gradient descent process is used to find both the optimal pedestrian height and ground-plane location by maximizing the alignment of the back-projections with foreground masks on each camera @cite_10 . | {
"cite_N": [
"@cite_10"
],
"mid": [
"1994774201"
],
"abstract": [
"Multi-camera pedestrian detection is the challenging problem in the field of surveillance video analysis. However, existing approaches may produce \"phantoms\" (i.e., fake pedestrians) due to the heavy occlusions in real surveillance scenario, while calibration errors and the diverse heights of pedestrians may also heavily decrease the detection performance. To address these problems, this paper proposes a robust multiple cameras pedestrian detection approach with multi-view Bayesian network model (MvBN). Given the preliminary results obtained by any multi-view pedestrian detection method, which are actually comprised of both real pedestrians and phantoms, the MvBN is used to model both the occlusion relationship and the homography correspondence between them in all camera views. As such, the removal of phantoms can be formulated as an MvBN inference problem. Moreover, to reduce the influence of the calibration errors and keep robust to the diverse heights of pedestrians, a height-adaptive projection (HAP) method is proposed to further improve the detection performance by utilizing a local search process in a small neighborhood of heights and locations of the detected pedestrians. Experimental results on four public benchmarks show that our method outperforms several state-of-the-art algorithms remarkably and demonstrates high robustness in different surveillance scenes. HighlightsA multi-view Bayesian network is proposed to model pedestrian candidates and their occlusion relationships in all views.A parameter learning algorithm is developed for MvBN by using a set of auxiliary, real-valued, and continuous variables.A height-adaptive projection is proposed to make the final detection robust to synthesis noises and calibration errors.Our approach is recognized as the best performer in five PETS evaluations from 2009 to 2013."
]
} |
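The HAP refinement described above can be imitated with a toy local search: perturb a detection's ground position and height and keep the candidate whose projected box best overlaps the camera's foreground mask. Everything here (the `project_box` stand-in, step sizes, the single-camera score) is an illustrative assumption; @cite_10 uses a gradient-based local search over all views rather than this grid probe.

```python
# Toy HAP-like refinement of one detection against one foreground mask.
import numpy as np

def overlap_score(mask, box):
    x0, y0, x1, y1 = [int(v) for v in box]
    region = mask[max(y0, 0):max(y1, 0), max(x0, 0):max(x1, 0)]
    return float(region.mean()) if region.size else 0.0

def refine_detection(mask, project_box, x, y, h, step=0.05, h_step=0.02):
    """project_box(x, y, h) -> (x0, y0, x1, y1) image box; a stand-in for
    the calibrated projection of a person of height h standing at (x, y)."""
    best, best_score = (x, y, h), overlap_score(mask, project_box(x, y, h))
    for dx in (-step, 0.0, step):
        for dy in (-step, 0.0, step):
            for dh in (-h_step, 0.0, h_step):
                cand = (x + dx, y + dy, h + dh)
                score = overlap_score(mask, project_box(*cand))
                if score > best_score:
                    best, best_score = cand, score
    return best, best_score
```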
1812.10766 | 2907777862 | Current state-of-the-art in 3D human pose and shape recovery relies on deep neural networks and statistical morphable body models, such as the Skinned Multi-Person Linear model (SMPL). However, regardless of the advantages of having both body pose and shape, SMPL-based solutions have shown difficulties in predicting 3D bodies accurately. This is mainly due to the unconstrained nature of SMPL, which may generate unrealistic body meshes. Because of this, regression of SMPL parameters is a difficult task, often addressed with complex regularization terms. In this paper we propose to embed SMPL within a deep model to accurately estimate 3D pose and shape from a still RGB image. We use CNN-based 3D joint predictions as an intermediate representation to regress SMPL pose and shape parameters. Later, 3D joints are reconstructed again in the SMPL output. This module can be seen as an autoencoder where the encoder is a deep neural network and the decoder is the SMPL model. We refer to this as SMPL reverse (SMPLR). By implementing SMPLR as an encoder-decoder we avoid the need for complex constraints on pose and shape. Furthermore, given that in-the-wild datasets usually lack accurate 3D annotations, it is desirable to lift 2D joints to 3D without pairing 3D annotations with RGB images. Therefore, we also propose a denoising autoencoder (DAE) module between CNN and SMPLR, able to lift 2D joints to 3D and partially recover from structured error. We evaluate our method on SURREAL and Human3.6M datasets, showing improvement over SMPL-based state-of-the-art alternatives by about 4 and 25 millimeters, respectively. | Depth regression given 2D joints has been an active research topic in 3D human pose recovery. Nevertheless, this is an ill-posed problem where several 3D poses can be projected to the same 2D joints. Chen and Ramanan @cite_6 show that copying depth from 3D mocap data can provide a fair estimation when a nearest 2D matching is given. However, Moreno @cite_26 shows that the distance between random pairs of poses is more ambiguous in Cartesian space than in the Euclidean distance matrix representation. Advances in recent works show that directly using simple @cite_13 or cascaded @cite_18 MLP networks can be more accurate. Additionally, 2D joint estimates can be noisy or wrong, making the previous solutions suboptimal. In this regard, Yang et al. @cite_5 use adversarial training and benefit from available 3D data along with 2D data to infer depth information. In our case, the proposed denoising autoencoder is used to lift the 2D pose to 3D in the absence of 3D ground truth data, providing accurate input to SMPLR. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_6",
"@cite_5",
"@cite_13"
],
"mid": [
"2891377836",
"2557698284",
"2583372902",
"2795089319",
"2612706635"
],
"abstract": [
"We present a feed-forward, multitask, end-to-end trainable system for the integrated 2d localization, as well as 3d pose and shape estimation, of multiple people in monocular images. The challenge is the formal modeling of the problem that intrinsically requires discrete and continuous computation (e.g. grouping people vs. predicting 3d pose). The model identifies human body structures (joints and limbs) in images, groups them based on 2d and 3d information fused using learned scoring functions, and optimally aggregates such responses into partial or complete 3d human skeleton hypotheses under kinematic tree constraints, but without knowing in advance the number of people in the scene and their visibility relations. We design a single multi-task deep neural network with differentiable stages where the person grouping problem is formulated as an integer program based on learned body part scores parameterized by both 2d and 3d information. This avoids suboptimality resulting from separate 2d and 3d reasoning, with grouping performed based on the combined information. The calculation can be formally described as a linear binary integer program with globally optimal solution. The final predictive stage of 3d pose and shape is based on a learned attention process where information from different human body parts is optimally fused. State-of-the-art results are obtained in large scale datasets like Human3.6M and Panoptic.",
"This paper addresses the problem of 3D human pose estimation from a single image. We follow a standard two-step pipeline by first detecting the 2D position of the N body joints, and then using these observations to infer 3D pose. For the first step, we use a recent CNN-based detector. For the second step, most existing approaches perform 2N-to-3N regression of the Cartesian joint coordinates. We show that more precise pose estimates can be obtained by representing both the 2D and 3D human poses using NxN distance matrices, and formulating the problem as a 2D-to-3D distance matrix regression. For learning such a regressor we leverage on simple Neural Network architectures, which by construction, enforce positivity and symmetry of the predicted matrices. The approach has also the advantage to naturally handle missing observations and allowing to hypothesize the position of non-observed joints. Quantitative results on Humaneva and Human3.6M datasets demonstrate consistent performance gains over state-of-the-art. Qualitative evaluation on the images in-the-wild of the LSP dataset, using the regressor learned on Human3.6M, reveals very promising generalization results.",
"We explore 3D human pose estimation from a single RGB image. While many approaches try to directly predict 3D pose from image measurements, we explore a simple architecture that reasons through intermediate 2D pose predictions. Our approach is based on two key observations (1) Deep neural nets have revolutionized 2D pose estimation, producing accurate 2D predictions even for poses with self-occlusions (2) Big-datasets of 3D mocap data are now readily available, making it tempting to lift predicted 2D poses to 3D through simple memorization (e.g., nearest neighbors). The resulting architecture is straightforward to implement with off-the-shelf 2D pose estimation systems and 3D mocap libraries. Importantly, we demonstratethatsuchmethodsoutperformalmostallstate-of-theart 3D pose estimation systems, most of which directly try to regress 3D pose from 2D measurements.",
"Recently, remarkable advances have been achieved in 3D human pose estimation from monocular images because of the powerful Deep Convolutional Neural Networks (DCNNs). Despite their success on large-scale datasets collected in the constrained lab environment, it is difficult to obtain the 3D pose annotations for in-the-wild images. Therefore, 3D human pose estimation in the wild is still a challenge. In this paper, we propose an adversarial learning framework, which distills the 3D human pose structures learned from the fully annotated dataset to in-the-wild images with only 2D pose annotations. Instead of defining hard-coded rules to constrain the pose estimation results, we design a novel multi-source discriminator to distinguish the predicted 3D poses from the ground-truth, which helps to enforce the pose estimator to generate anthropometrically valid poses even with images in the wild. We also observe that a carefully designed information source for the discriminator is essential to boost the performance. Thus, we design a geometric descriptor, which computes the pairwise relative locations and distances between body joints, as a new information source for the discriminator. The efficacy of our adversarial learning framework with the new geometric descriptor has been demonstrated through extensive experiments on widely used public benchmarks. Our approach significantly improves the performance compared with previous state-of-the-art approaches.",
"Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3- dimensional positions.,,With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30 on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation."
]
} |
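A minimal PyTorch sketch of the lifting idea above, in the spirit of the simple MLP baseline @cite_13 combined with denoising-style training (noise injected into the 2D input so the network learns to recover from detector errors). The joint count, layer widths and noise level are assumptions, not the paper's settings.

```python
# Hypothetical 2D-to-3D lifting MLP trained as a denoising regressor.
import torch
import torch.nn as nn

J = 16  # assumed number of body joints
lifter = nn.Sequential(
    nn.Linear(2 * J, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 3 * J),
)

def train_step(joints_2d, joints_3d, opt, sigma=0.01):
    """joints_2d: (B, J, 2), joints_3d: (B, J, 3), in normalized units."""
    noisy = joints_2d + sigma * torch.randn_like(joints_2d)  # denoising input
    pred = lifter(noisy.flatten(1))
    loss = nn.functional.mse_loss(pred, joints_3d.flatten(1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```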
1812.10766 | 2907777862 | Current state-of-the-art in 3D human pose and shape recovery relies on deep neural networks and statistical morphable body models, such as the Skinned Multi-Person Linear model (SMPL). However, regardless of the advantages of having both body pose and shape, SMPL-based solutions have shown difficulties in predicting 3D bodies accurately. This is mainly due to the unconstrained nature of SMPL, which may generate unrealistic body meshes. Because of this, regression of SMPL parameters is a difficult task, often addressed with complex regularization terms. In this paper we propose to embed SMPL within a deep model to accurately estimate 3D pose and shape from a still RGB image. We use CNN-based 3D joint predictions as an intermediate representation to regress SMPL pose and shape parameters. Later, 3D joints are reconstructed again in the SMPL output. This module can be seen as an autoencoder where the encoder is a deep neural network and the decoder is the SMPL model. We refer to this as SMPL reverse (SMPLR). By implementing SMPLR as an encoder-decoder we avoid the need for complex constraints on pose and shape. Furthermore, given that in-the-wild datasets usually lack accurate 3D annotations, it is desirable to lift 2D joints to 3D without pairing 3D annotations with RGB images. Therefore, we also propose a denoising autoencoder (DAE) module between CNN and SMPLR, able to lift 2D joints to 3D and partially recover from structured error. We evaluate our method on SURREAL and Human3.6M datasets, showing improvement over SMPL-based state-of-the-art alternatives by about 4 and 25 millimeters, respectively. | It refers to regressing 3D pose directly from an RGB image. Due to the nonlinear nature of the human pose, 3D pose regression without modeling the correlation of joints is not a trivial task. Brau and Jiang @cite_23 estimate 3D joints and camera parameters without direct supervision on them. Instead, they use several loss functions for projected 2D joints, bone sizes and independent Fisher priors. Sun et al. @cite_21 propose a compositional loss function based on relative joints with respect to a defined kinematic tree. They separate 2D joints and depth estimation in the loss. | {
"cite_N": [
"@cite_21",
"@cite_23"
],
"mid": [
"2953258117",
"2566741951"
],
"abstract": [
"Regression based methods are not performing as well as detection based methods for human pose estimation. A central problem is that the structural information in the pose is not well exploited in the previous regression methods. In this work, we propose a structure-aware regression approach. It adopts a reparameterized pose representation using bones instead of joints. It exploits the joint connection structure to define a compositional loss function that encodes the long range interactions in the pose. It is simple, effective, and general for both 2D and 3D pose estimation in a unified setting. Comprehensive evaluation validates the effectiveness of our approach. It significantly advances the state-of-the-art on Human3.6M and is competitive with state-of-the-art results on MPII.",
"We propose a deep convolutional neural network for 3Dhuman pose and camera estimation from monocular imagesthat learns from 2D joint annotations. The proposed networkfollows the typical architecture, but contains an additionaloutput layer which projects predicted 3D joints onto2D, and enforces constraints on body part lengths in 3D.We further enforce pose constraints using an independentlytrained network that learns a prior distribution over 3Dposes. We evaluate our approach on several benchmarkdatasets and compare against state-of-the-art approachesfor 3D human pose estimation, achieving comparable performance.Additionally, we show that our approach significantlyoutperforms other methods in cases where 3Dground truth data is unavailable, and that our network exhibitsgood generalization properties."
]
} |
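The compositional loss of @cite_21 can be sketched as a re-parameterization of joints into parent-relative bone vectors along a kinematic tree before the penalty is computed; long-range joint interactions then enter through sums of bones. The 6-joint parent array below is an illustrative chain, not the paper's skeleton, and L1 is one possible choice of penalty.

```python
# Hypothetical bone-based (compositional) loss over a toy kinematic tree.
import torch

PARENT = torch.tensor([-1, 0, 1, 2, 1, 4])  # -1 marks the root joint

def bones(joints):                  # joints: (batch, J, 3)
    child = joints[:, 1:]           # every joint except the root
    parent = joints[:, PARENT[1:]]  # its parent along the kinematic tree
    return child - parent           # (batch, J-1, 3) bone vectors

def bone_loss(pred, gt):
    return (bones(pred) - bones(gt)).abs().mean()
```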
1812.10915 | 2908415820 | Precipitation nowcasting using neural networks and ground-based radars has become one of the key components of modern weather prediction services, but it is limited to the regions covered by ground-based radars. Truly global precipitation nowcasting requires fusion of radar and satellite observations. We propose a data fusion pipeline based on computer vision techniques, including a novel inpainting algorithm with soft masking. | The main idea behind partial convolution from @cite_11 is the following. Let @math be the convolutional weights and @math the corresponding bias. @math are the pixel values for the current convolution window and @math is the corresponding binary mask. The partial convolution at every location is expressed as: | {
"cite_N": [
"@cite_11"
],
"mid": [
"2950820654"
],
"abstract": [
"Existing deep learning based image inpainting methods use a standard convolutional network over the corrupted image, using convolutional filter responses conditioned on both valid pixels as well as the substitute values in the masked holes (typically the mean value). This often leads to artifacts such as color discrepancy and blurriness. Post-processing is usually used to reduce such artifacts, but are expensive and may fail. We propose the use of partial convolutions, where the convolution is masked and renormalized to be conditioned on only valid pixels. We further include a mechanism to automatically generate an updated mask for the next layer as part of the forward pass. Our model outperforms other methods for irregular masks. We show qualitative and quantitative comparisons with other methods to validate our approach."
]
} |
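The equation the record above stops short of is, following @cite_11: x' = W^T (X ⊙ M) · sum(1)/sum(M) + b when the window contains at least one valid pixel, and x' = 0 otherwise, with the mask updated to 1 wherever the window saw any valid input. Below is a hedged PyTorch sketch of one such step; the shapes and single-kernel mask handling are simplifications of the paper's layer.

```python
# Sketch of one partial-convolution step with mask update.
import torch
import torch.nn.functional as F

def partial_conv2d(x, mask, weight, bias, padding=1):
    """x, mask: (B, Cin, H, W), mask is 1 on valid pixels;
    weight: (Cout, Cin, kH, kW); bias: (Cout,)."""
    kh, kw = weight.shape[2], weight.shape[3]
    raw = F.conv2d(x * mask, weight, bias=None, padding=padding)
    ones = torch.ones(1, mask.size(1), kh, kw, device=x.device)
    valid = F.conv2d(mask, ones, padding=padding)             # sum(M) per window
    ratio = (mask.size(1) * kh * kw) / valid.clamp(min=1e-8)  # sum(1)/sum(M)
    out = raw * ratio + bias.view(1, -1, 1, 1)
    out = torch.where(valid > 0, out, torch.zeros_like(out))  # x' = 0 on holes
    new_mask = (valid > 0).float()                            # updated mask
    return out, new_mask
```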
1812.10851 | 2908313473 | In the multi-agent path finding problem (MAPF) we are given a set of agents each with respective start and goal positions. The task is to find paths for all agents while avoiding collisions, aiming to minimize an objective function. Two such common objective functions are the sum-of-costs and the makespan. Many optimal solvers were introduced in the past decade - two prominent categories of solvers can be distinguished: search-based solvers and compilation-based solvers. Search-based solvers were developed and tested for the sum-of-costs objective while the most prominent compilation-based solvers that are built around Boolean satisfiability (SAT) were designed for the makespan objective. Very little was known on the performance and relevance of the compilation-based approach on the sum-of-costs objective. In this paper we show how to close the gap between these cost functions in the compilation-based approach. Moreover we study the applicability of various techniques developed for search-based solvers in the compilation-based approach. A part of this paper introduces a SAT-based solver that is directly aimed at solving the sum-of-costs objective function. Using both a lower bound on the sum-of-costs and an upper bound on the makespan, we are able to have a reasonable number of variables in our SAT encoding. We then further improve the encoding by borrowing ideas from ICTS, a search-based solver. Experimental evaluation on several domains shows that there are many scenarios where our new SAT-based methods outperform the best variants of previous sum-of-costs search solvers - the ICTS, CBS, and ICBS algorithms. | A simple admissible heuristic that is used within A* for MAPF is to sum the individual heuristics of the single agents, such as the Manhattan distance for 4-connected grids or the Euclidean distance for Euclidean graphs @cite_6 . A more-informed heuristic is called the sum of individual costs heuristic. For each agent @math we calculate its optimal path cost from its current state (position) @math to @math assuming that other agents do not exist. Then, we sum these costs over all agents. More-informed heuristics use forms of pattern databases @cite_23 @cite_49 . | {
"cite_N": [
"@cite_23",
"@cite_6",
"@cite_49"
],
"mid": [
"1521683430",
"2135939055",
"2155992093"
],
"abstract": [
"It is known that A* is optimal with respect to the expanded nodes (Dechter and Pearl 1985) (D&P). The exact meaning of this optimality varies depending on the class of algorithms and instances over which A* is claimed to be optimal. A* does not provide any optimality guarantees with respect to the generated nodes. However, such guarantees may be critical for optimally solving instances of domains with a large branching factor. In this paper, we introduce two new variants of the recently introduced Enhanced Partial Expansion A* algorithm (EPEA*) ( 2012). We leverage the results of D&P to show that these variants possess optimality with respect to the generated nodes in much the same sense as A* possesses optimality with respect to the expanded nodes. The results in this paper are theoretical. A study of the practical performance of the new variants is beyond the scope of this paper.",
"Multi-robot path planning is dificult due to the combinatorial explosion of the search space with every new robot added. Complete search of the combined state-space soon becomes intractable. In this paper we present a novel form of abstraction that allows us to plan much more eficiently. The key to this abstraction is the partitioning of the map into subgraphs of known structure with entry and exit restrictions which we can represent compactly. Planning then becomes a search in the much smaller space of subgraph configurations. Once an abstract plan is found, it can be quickly resolved into a correct (but possibly sub-optimal) concrete plan without the need for further search. We prove that this technique is sound and complete and demonstrate its practical effiectiveness on a real map. A contending solution, prioritised planning, is also evaluated and shown to have similar performance albeit at the cost of completeness. The two approaches are not necessarily conflicting; we demonstrate how they can be combined into a single algorithm which out-performs either approach alone.",
"When solving instances of problem domains that feature a large branching factor, A* may generate a large number of nodes whose cost is greater than the cost of the optimal solution. We designate such nodes as surplus. Generating surplus nodes and adding them to the OPEN list may dominate both time and memory of the search. A recently introduced variant of A* called Partial Expansion A* (PEA*) deals with the memory aspect of this problem. When expanding a node n, PEA* generates all of its children and puts into OPEN only the children with f = f(n). n is reinserted in the OPEN list with the f-cost of the best discarded child. This guarantees that surplus nodes are not inserted into OPEN. In this paper, we present a novel variant of A* called Enhanced Partial Expansion A* (EPEA*) that advances the idea of PEA* to address the time aspect. Given a priori domain-and heuristic-specific knowledge, EPEA* generates only the nodes with f = f(n). Although EPEA* is not always applicable or practical, we study several variants of EPEA*, which make it applicable to a large number of domains and heuristics. In particular, the ideas of EPEA* are applicable to IDA* and to the domains where pattern databases are traditionally used. Experimental studies show significant improvements in run-time and memory performance for several standard benchmark applications. We provide several theoretical studies to facilitate an understanding of the new algorithm."
]
} |
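The sum-of-individual-costs heuristic described above reduces to one BFS per agent on the underlying graph, ignoring all other agents. A minimal sketch, assuming an adjacency-list graph and unit edge costs:

```python
# SIC heuristic: per-agent shortest paths ignoring the other agents.
from collections import deque

def bfs_dist(graph, source):
    """graph: dict mapping a vertex to an iterable of neighbours."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for u in graph[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

def sic(graph, positions, goals):
    # One BFS from each goal; sum the distances to the current positions.
    return sum(bfs_dist(graph, g)[p] for p, g in zip(positions, goals))
```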
1812.10851 | 2908313473 | In the multi-agent path finding problem (MAPF) we are given a set of agents each with respective start and goal positions. The task is to find paths for all agents while avoiding collisions, aiming to minimize an objective function. Two such common objective functions are the sum-of-costs and the makespan. Many optimal solvers were introduced in the past decade - two prominent categories of solvers can be distinguished: search-based solvers and compilation-based solvers. Search-based solvers were developed and tested for the sum-of-costs objective while the most prominent compilation-based solvers that are built around Boolean satisfiability (SAT) were designed for the makespan objective. Very little was known on the performance and relevance of the compilation-based approach on the sum-of-costs objective. In this paper we show how to close the gap between these cost functions in the compilation-based approach. Moreover we study the applicability of various techniques developed for search-based solvers in the compilation-based approach. A part of this paper introduces a SAT-based solver that is directly aimed at solving the sum-of-costs objective function. Using both a lower bound on the sum-of-costs and an upper bound on the makespan, we are able to have a reasonable number of variables in our SAT encoding. We then further improve the encoding by borrowing ideas from ICTS, a search-based solver. Experimental evaluation on several domains shows that there are many scenarios where our new SAT-based methods outperform the best variants of previous sum-of-costs search solvers - the ICTS, CBS, and ICBS algorithms. | More A*-based Algorithms. Enhanced Partial Expansion A* (EPEA*) @cite_49 avoids the generation of surplus nodes (i.e. nodes @math with @math where @math is the optimal cost; we assume standard A* notation with @math ) by using a priori domain knowledge. When expanding a node @math , EPEA* generates only the children @math with @math and the smallest @math -value among those children with @math ( @math stands for @math in the context of MAPF). The other children of @math are discarded. This is done with the help of a domain-dependent operator selection function (OSF). The OSF returns the exact list of operators which will generate nodes @math with the desired @math . Node @math is then re-inserted into OPEN, setting @math to the @math -value of the next best child of @math . In this way, EPEA* avoids the generation of surplus nodes and dramatically reduces the number of generated nodes. An OSF for MAPF can be efficiently built as the effect on the @math -value of moving a single agent in a given direction can be easily computed. For more details see @cite_49 . | {
"cite_N": [
"@cite_49"
],
"mid": [
"2155992093"
],
"abstract": [
"When solving instances of problem domains that feature a large branching factor, A* may generate a large number of nodes whose cost is greater than the cost of the optimal solution. We designate such nodes as surplus. Generating surplus nodes and adding them to the OPEN list may dominate both time and memory of the search. A recently introduced variant of A* called Partial Expansion A* (PEA*) deals with the memory aspect of this problem. When expanding a node n, PEA* generates all of its children and puts into OPEN only the children with f = f(n). n is reinserted in the OPEN list with the f-cost of the best discarded child. This guarantees that surplus nodes are not inserted into OPEN. In this paper, we present a novel variant of A* called Enhanced Partial Expansion A* (EPEA*) that advances the idea of PEA* to address the time aspect. Given a priori domain-and heuristic-specific knowledge, EPEA* generates only the nodes with f = f(n). Although EPEA* is not always applicable or practical, we study several variants of EPEA*, which make it applicable to a large number of domains and heuristics. In particular, the ideas of EPEA* are applicable to IDA* and to the domains where pattern databases are traditionally used. Experimental studies show significant improvements in run-time and memory performance for several standard benchmark applications. We provide several theoretical studies to facilitate an understanding of the new algorithm."
]
} |
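A toy operator selection function for a single agent on an obstacle-free 4-connected grid with the Manhattan heuristic illustrates the OSF idea: with unit move costs, every move changes f = g + h by 0 or 2 and a costed wait changes it by 1, so the children with a requested delta-f can be generated directly instead of generating all five and discarding the surplus ones. Obstacle checks and the multi-agent combination are omitted; this is not the paper's full OSF.

```python
# Toy single-agent OSF: enumerate only moves with the requested delta-f.
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # the last one is "wait"

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def osf(pos, goal, wanted_df):
    """Yield successor cells whose f-change equals wanted_df (0, 1 or 2)."""
    h0 = manhattan(pos, goal)
    for dx, dy in MOVES:
        nxt = (pos[0] + dx, pos[1] + dy)
        df = 1 + manhattan(nxt, goal) - h0  # delta-g (= 1) plus delta-h
        if df == wanted_df:
            yield nxt
```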
1812.10851 | 2908313473 | In the multi-agent path finding problem (MAPF) we are given a set of agents each with respective start and goal positions. The task is to find paths for all agents while avoiding collisions, aiming to minimize an objective function. Two such common objective functions are the sum-of-costs and the makespan. Many optimal solvers were introduced in the past decade - two prominent categories of solvers can be distinguished: search-based solvers and compilation-based solvers. Search-based solvers were developed and tested for the sum-of-costs objective while the most prominent compilation-based solvers that are built around Boolean satisfiability (SAT) were designed for the makespan objective. Very little was known on the performance and relevance of the compilation-based approach on the sum-of-costs objective. In this paper we show how to close the gap between these cost functions in the compilation-based approach. Moreover we study the applicability of various techniques developed for search-based solvers in the compilation-based approach. A part of this paper introduces a SAT-based solver that is directly aimed at solving the sum-of-costs objective function. Using both a lower bound on the sum-of-costs and an upper bound on the makespan, we are able to have a reasonable number of variables in our SAT encoding. We then further improve the encoding by borrowing ideas from ICTS, a search-based solver. Experimental evaluation on several domains shows that there are many scenarios where our new SAT-based methods outperform the best variants of previous sum-of-costs search solvers - the ICTS, CBS, and ICBS algorithms. | M* @cite_30 @cite_39 and its enhanced recursive variant (rM*) are important A*-based algorithms related to EPEA*. M* dynamically changes the dimensionality and branching factor based on conflicts. The dimensionality is the number of agents that are not allowed to conflict. When a node is expanded, M* initially generates only one child in which each agent takes (one of) its individual optimal moves towards the goal (dimensionality 1). This continues until a conflict occurs between @math agents at node @math . At this point, the dimensionality of all the nodes on the branch leading from the root to @math is increased to @math and all these nodes are placed back in the OPEN list. When one of these nodes is re-expanded, it generates @math children where the @math conflicting agents make all possible moves and the @math non-conflicting agents make their individual optimal move. An enhanced variant of M* @cite_32 builds on top of Standley's operator decomposition (OD) rather than plain A*. | {
"cite_N": [
"@cite_30",
"@cite_32",
"@cite_39"
],
"mid": [
"2124015815",
"2039107569",
"2123030512"
],
"abstract": [
"Multirobot path planning is difficult because the full configuration space of the system grows exponentially with the number of robots. Planning in the joint configuration space of a set of robots is only necessary if they are strongly coupled, which is often not true if the robots are well separated in the workspace. Therefore, we initially plan for each robot separately, and only couple sets of robots after they have been found to interact, thus minimizing the dimensionality of the search space. We present a general strategy called subdimensional expansion, which dynamically generates low dimensional search spaces embedded in the full configuration space. We also present an implementation of subdimensional expansion for robot configuration spaces that can be represented as a graph, called M*, and show that M* is complete and finds minimal cost paths.",
"We believe the core of handling the complexity of coordinated multiagent search lies in identifying which subsets of robots can be safely decoupled, and hence planned for in a lower dimensional space. Our work, as well as those of others take that perspective. In our prior work, we introduced an approach called subdimensional expansion for constructing low-dimensional but sufficient search spaces for multirobot path planning, and an implementation for graph search called M*. Subdimensional expansion dynamically increases the dimensionality of the search space in regions featuring significant robot-robot interactions. In this paper, we integrate M* with Meta-Agent Constraint-Based Search (MA-CBS), a planning framework that seeks to couple repeatedly colliding robots allowing for other robots to be planned in low-dimensional search space. M* is also integrated with operator decomposition (OD), an A*-variant performing lazy search of the outneighbors of a given vertex. We show that the combined algorithm demonstrates state of the art performance.",
"Abstract Planning optimal paths for large numbers of robots is computationally expensive. In this paper, we introduce a new framework for multirobot path planning called subdimensional expansion, which initially plans for each robot individually, and then coordinates motion among the robots as needed. More specifically, subdimensional expansion initially creates a one-dimensional search space embedded in the joint configuration space of the multirobot system. When the search space is found to be blocked during planning by a robot–robot collision, the dimensionality of the search space is locally increased to ensure that an alternative path can be found. As a result, robots are only coordinated when necessary, which reduces the computational cost of finding a path. We present the M ⁎ algorithm, an implementation of subdimensional expansion that adapts the A ⁎ planner to perform efficient multirobot planning. M ⁎ is proven to be complete and to find minimal cost paths. Simulation results are presented that show that M ⁎ outperforms existing optimal multirobot path planning algorithms."
]
} |
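The successor rule of subdimensional expansion can be sketched in a few lines: agents in the node's current conflict set branch over all of their moves, while every other agent follows its precomputed individually-optimal policy. The `policy` and `moves` interfaces are assumptions for illustration; the bookkeeping that grows conflict sets and backpropagates them to ancestor nodes is omitted.

```python
# Hypothetical M*-style joint-successor generation.
from itertools import product

def successors(state, conflict_set, policy, moves):
    """state: tuple of per-agent vertices; policy[i][v]: agent i's optimal
    next vertex from v; moves(v): all neighbours of v plus v itself."""
    options = []
    for i, v in enumerate(state):
        if i in conflict_set:
            options.append(list(moves(v)))  # full branching for this agent
        else:
            options.append([policy[i][v]])  # single individually-optimal move
    for joint in product(*options):
        yield tuple(joint)
```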
1812.10851 | 2908313473 | In the multi-agent path finding problem (MAPF) we are given a set of agents each with respective start and goal positions. The task is to find paths for all agents while avoiding collisions, aiming to minimize an objective function. Two such common objective functions are the sum-of-costs and the makespan. Many optimal solvers were introduced in the past decade - two prominent categories of solvers can be distinguished: search-based solvers and compilation-based solvers. Search-based solvers were developed and tested for the sum-of-costs objective while the most prominent compilation-based solvers that are built around Boolean satisfiability (SAT) were designed for the makespan objective. Very little was known on the performance and relevance of the compilation-based approach on the sum-of-costs objective. In this paper we show how to close the gap between these cost functions in the compilation-based approach. Moreover we study the applicability of various techniques developed for search-based solvers in the compilation-based approach. A part of this paper introduces a SAT-based solver that is directly aimed at solving the sum-of-costs objective function. Using both a lower bound on the sum-of-costs and an upper bound on the makespan, we are able to have a reasonable number of variables in our SAT encoding. We then further improve the encoding by borrowing ideas from ICTS, a search-based solver. Experimental evaluation on several domains shows that there are many scenarios where our new SAT-based methods outperform the best variants of previous sum-of-costs search solvers - the ICTS, CBS, and ICBS algorithms. | The low level acts as a goal test for the high level. For each node @math visited by the high level, the low level is invoked. Its task is to find a non-conflicting complete solution such that the cost of the individual path of agent @math is exactly @math . For each agent @math , the low level stores all single-agent paths of cost @math in a special compact data structure called a multi-valued decision diagram (MDD) @cite_37 - the MDD will be defined precisely later. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2137108108"
],
"abstract": [
"An investigation was made of the analogous graph structure for representing and manipulating discrete variable problems. The authors define the multi-valued decision diagram (MDD), analyze its properties (in particular prove a strong canonical form) and provide algorithms for combining and manipulating MDDs. They give a method for mapping an MDD into an equivalent BDD (binary decision diagram) which allows them to provide a highly efficient implementation using the previously developed BDD packages. A direct implementation of the MDD structure has also been carried out, but this initial implementation has not yet been tuned to the same extent as the BDDs to allow a reasonable comparison to be made. The authors have used the mapping to BDDs to provide an initial understanding of the limits on the sizes of real problems that can be executed. The results are encouraging. >"
]
} |
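The MDD mentioned above admits a compact construction: vertex v appears at level t of an agent's cost-c MDD exactly when dist(start, v) <= t and dist(v, goal) <= c - t, so every kept node lies on some start-to-goal path of exactly c timesteps (waits included). A sketch reusing the `bfs_dist` helper from the SIC example above:

```python
# Build the levels of a single agent's MDD for path cost c.
def build_mdd(graph, start, goal, c):
    d_start = bfs_dist(graph, start)  # bfs_dist as defined in the SIC sketch
    d_goal = bfs_dist(graph, goal)
    levels = []
    for t in range(c + 1):
        levels.append({v for v in graph
                       if d_start.get(v, c + 1) <= t
                       and d_goal.get(v, c + 1) <= c - t})
    return levels  # levels[t] = vertices usable at time t on a cost-c path
```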
1812.10851 | 2908313473 | In the multi-agent path finding problem (MAPF) we are given a set of agents each with respective start and goal positions. The task is to find paths for all agents while avoiding collisions, aiming to minimize an objective function. Two such common objective functions are the sum-of-costs and the makespan. Many optimal solvers were introduced in the past decade - two prominent categories of solvers can be distinguished: search-based solvers and compilation-based solvers. Search-based solvers were developed and tested for the sum-of-costs objective while the most prominent compilation-based solvers that are built around Boolean satisfiability (SAT) were designed for the makespan objective. Very little was known on the performance and relevance of the compilation-based approach on the sum-of-costs objective. In this paper we show how to close the gap between these cost functions in the compilation-based approach. Moreover we study the applicability of various techniques developed for search-based solvers in the compilation-based approach. A part of this paper introduces a SAT-based solver that is directly aimed at solving the sum-of-costs objective function. Using both a lower bound on the sum-of-costs and an upper bound on the makespan, we are able to have a reasonable number of variables in our SAT encoding. We then further improve the encoding by borrowing ideas from ICTS, a search-based solver. Experimental evaluation on several domains shows that there are many scenarios where our new SAT-based methods outperform the best variants of previous sum-of-costs search solvers - the ICTS, CBS, and ICBS algorithms. | SAT encoding. The encoding @cite_4 again employs a log-space representation of variables, but it encodes the position of agent @math at time step @math , that is, @math , instead of vertex occupancy - the @math variables are represented using the log-space encoding. To ensure that conflicts among agents in vertices do not occur, the alldifferent constraint @cite_15 is introduced over the @math variables of all agents for each timestep @math . The advantage of the encoding is that various efficient encodings of the alldifferent constraint @cite_24 @cite_21 over bit vectors can be integrated. | {
"cite_N": [
"@cite_24",
"@cite_15",
"@cite_21",
"@cite_4"
],
"mid": [
"2142563857",
"2164279585",
"2398115889",
""
],
"abstract": [
"This paper shows how all different constraints (ADCs) over bit-vectors can be handled within a SAT solver. It also contains encouraging experimental results in applying this technique to encode simple path constraints in bounded model checking. Finally, we present a new compact encoding of equalities and inequalities over bit-vectors in CNF.",
"Many real-life Constraint Satisfaction Problems (CSPs) involve some constraints similar to the alldifferent constraints. These constraints are called constraints of difference. They are defined on a subset of variables by a set of tuples for which the values occuring in the same tuple are all different. In this paper, a new filtering algorithm for these constraints is presented. It achieves the generalized arc-consistency condition for these non-binary constraints. It is based on matching theory and its complexity is low. In fact, for a constraint defined on a subset of p variables having domains of cardinality at most d, its space complexity is O(pd) and its time complexity is O(p2d2). This filtering algorithm has been successfully used in the system RESYN ( 1992), to solve the subgraph isomorphism problem.",
"A novel eager encoding of the ALL DIFFERENT constraint over bit-vectors is presented in this short paper. It is based on 1-to-1 mapping of the input bit-vectors to a linearly ordered set of auxiliary bit-vectors. Experiments with four SAT solvers showed that the new encoding can be solved order of magnitudes faster than the standard encoding in a hard unsatisfiable case.",
""
]
} |
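One standard CNF gadget behind "alldifferent over bit vectors" forces every pair of log-encoded agent positions at a timestep to differ: an auxiliary variable per bit asserts that the two bits disagree, and at least one auxiliary must hold. A hedged sketch in DIMACS-style integer literals (this is one textbook realization, not necessarily the exact encoding of @cite_24 or @cite_21):

```python
# CNF forcing two equally wide bit-vectors to take different values.
def differ_clauses(bits_a, bits_b, new_var):
    """bits_a, bits_b: lists of positive variable ids; new_var(): fresh id."""
    clauses, diffs = [], []
    for a, b in zip(bits_a, bits_b):
        d = new_var()
        diffs.append(d)
        clauses.append([-d, a, b])    # d -> at least one of a, b is true
        clauses.append([-d, -a, -b])  # d -> not both true; so d -> a xor b
    clauses.append(diffs)             # at least one bit position must differ
    return clauses

# Usage over all agent pairs at one timestep t:
#   for i in range(n):
#       for j in range(i + 1, n):
#           cnf += differ_clauses(pos_bits[i][t], pos_bits[j][t], new_var)
```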
1812.10851 | 2908313473 | In the multi-agent path finding problem (MAPF) we are given a set of agents each with respective start and goal positions. The task is to find paths for all agents while avoiding collisions, aiming to minimize an objective function. Two such common objective functions are the sum-of-costs and the makespan. Many optimal solvers were introduced in the past decade - two prominent categories of solvers can be distinguished: search-based solvers and compilation-based solvers. Search-based solvers were developed and tested for the sum-of-costs objective while the most prominent compilation-based solvers that are built around Boolean satisfiability (SAT) were designed for the makespan objective. Very little was known on the performance and relevance of the compilation-based approach on the sum-of-costs objective. In this paper we show how to close the gap between these cost functions in the compilation-based approach. Moreover we study the applicability of various techniques developed for search-based solvers in the compilation-based approach. A part of this paper introduces a SAT-based solver that is directly aimed at solving the sum-of-costs objective function. Using both a lower bound on the sum-of-costs and an upper bound on the makespan, we are able to have a reasonable number of variables in our SAT encoding. We then further improve the encoding by borrowing ideas from ICTS, a search-based solver. Experimental evaluation on several domains shows that there are many scenarios where our new SAT-based methods outperform the best variants of previous sum-of-costs search solvers - the ICTS, CBS, and ICBS algorithms. | SAT encoding. The next development was a SAT encoding based on matchings in bipartite graphs that separates the conflict rules of MAPF from the transitions of agents between time steps @cite_51 . Conflict rules are expressed over anonymized agents that are encoded by direct variables @math . | {
"cite_N": [
"@cite_51"
],
"mid": [
"2044113147"
],
"abstract": [
"This paper addresses make span optimal solving of cooperative path-finding problem (CPF) by translating it to propositional satisfiability (SAT). The task is to relocate set of agents to given goal positions so that they do not collide with each other. A novel SAT encoding of CPF is suggested. The novel encoding uses the concept of matching in a bipartite graph to separate spatial constraint of CPF from consideration of individual agents. The separation allowed reducing the size of encoding significantly. The conducted experimental evaluation shown that novel encoding can be solved faster than existing encodings for CPF and also that the SAT based methods dominates over A based methods in environment densely occupied by agents."
]
} |
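A hedged sketch of the kind of direct encoding mentioned in the related_work above: a Boolean variable X(a, v, t) states that agent a occupies vertex v at time step t, and vertex conflicts become pairwise at-most-one clauses. The variable numbering and clause selection are illustrative assumptions, not the exact encoding of @cite_51.

```python
from itertools import combinations

def vertex_conflict_clauses(num_agents, num_vertices, makespan):
    """At most one agent per vertex per time step, pairwise encoding.
    X(a, v, t) is mapped to a positive DIMACS variable id."""
    X = lambda a, v, t: 1 + t * num_agents * num_vertices + a * num_vertices + v
    clauses = []
    for t in range(makespan + 1):
        for v in range(num_vertices):
            for a1, a2 in combinations(range(num_agents), 2):
                clauses.append([-X(a1, v, t), -X(a2, v, t)])
    return clauses

# 3 agents, 4 vertices, time steps 0..2 -> 3 * 4 * 3 = 36 clauses
print(len(vertex_conflict_clauses(3, 4, 2)))
```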
1812.10851 | 2908313473 | In the multi-agent path finding problem (MAPF) we are given a set of agents, each with respective start and goal positions. The task is to find paths for all agents while avoiding collisions, aiming to minimize an objective function. Two such common objective functions are the sum-of-costs and the makespan. Many optimal solvers were introduced in the past decade - two prominent categories of solvers can be distinguished: search-based solvers and compilation-based solvers. Search-based solvers were developed and tested for the sum-of-costs objective, while the most prominent compilation-based solvers, which are built around Boolean satisfiability (SAT), were designed for the makespan objective. Very little was known about the performance and relevance of the compilation-based approach on the sum-of-costs objective. In this paper we show how to close the gap between these cost functions in the compilation-based approach. Moreover, we study the applicability of various techniques developed for search-based solvers in the compilation-based approach. A part of this paper introduces a SAT solver that is directly aimed at the sum-of-costs objective function. Using both a lower bound on the sum-of-costs and an upper bound on the makespan, we are able to have a reasonable number of variables in our SAT encoding. We then further improve the encoding by borrowing ideas from ICTS, a search-based solver. Experimental evaluation on several domains shows that there are many scenarios where our new SAT-based methods outperform the best variants of previous sum-of-costs search solvers - the ICTS, CBS, and ICBS algorithms. | ASP, CSP, and ILP approach. Although a lot of work on makespan-optimal solving has been done with SAT, other compilation-based approaches to MAPF, such as ASP-based @cite_14 and CSP-based @cite_46 , exist. Both ASP and CSP offer rich formalisms to express various objective functions in MAPF. The ASP-based approach adopts a more specific definition of MAPF where bounds on the lengths of paths for individual agents are specified as a part of the input. Besides the bound on the sum-of-costs, the ASP formulation supports other constraints such as no-cycle (if the agent shall not visit the same part of the environment multiple times), no-intersection (if only one agent visits each part of the environment), or no-waiting (when minimization of idle time is desirable). The ASP program for a given variant of MAPF, consisting of a combination of various constraints, is solved by the ASP solver @cite_16 . | {
"cite_N": [
"@cite_46",
"@cite_14",
"@cite_16"
],
"mid": [
"2109802566",
"1510564526",
""
],
"abstract": [
"Planning collision-free paths for multiple robots traversing a shared space is a problem that grows combinatorially with the number of robots. The naive centralised approach soon becomes intractable for even a moderate number of robots. Decentralised approaches, such as prioritised planning, are much faster but lack completeness.",
"Pathfinding for a single agent is the problem of planning a route from an initial location to a goal location in an environment, going around obstacles. Pathfinding for multiple agents also aims to plan such routes for each agent, subject to different constraints, such as restrictions on the length of each path or on the total length of paths, no self-intersecting paths, no intersection of paths plans, no crossing meeting each other. It also has variations for finding optimal solutions, e.g., with respect to the maximum path length, or the sum of plan lengths. These problems are important for many real-life applications, such as motion planning, vehicle routing, environmental monitoring, patrolling, computer games. Motivated by such applications, we introduce a formal framework that is general enough to address all these problems: we use the expressive high-level representation formalism and efficient solvers of the declarative programming paradigm Answer Set Programming. We also introduce heuristics to improve the computational efficiency and or solution quality. We show the applicability and usefulness of our framework by experiments, with randomly generated problem instances on a grid, on a real-world road network, and on a real computer game terrain.",
""
]
} |
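The constraints named in the related_work above (no-cycle, no-intersection, no-waiting) are easy to state procedurally. Below is a minimal Python sketch that checks a candidate plan against paraphrased versions of these constraints; the exact ASP rules of @cite_14 may differ.

```python
def no_waiting(path):
    """The agent never stays at the same vertex in consecutive steps."""
    return all(u != v for u, v in zip(path, path[1:]))

def no_cycle(path):
    """The agent never revisits a vertex."""
    return len(set(path)) == len(path)

def no_intersection(plan):
    """Each vertex is visited by at most one agent overall."""
    owner = {}
    for agent, path in plan.items():
        for v in path:
            if owner.setdefault(v, agent) != agent:
                return False
    return True

plan = {"a1": [0, 1, 2], "a2": [3, 4, 5]}  # paths as vertex sequences
print(all(no_waiting(p) and no_cycle(p) for p in plan.values()),
      no_intersection(plan))
```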
1812.10818 | 2951930671 | Data labeling is currently a time-consuming task that often requires expert knowledge. In research settings, the availability of correctly labeled data is crucial to ensure that model predictions are accurate and useful. We propose relatively simple machine learning-based models that achieve high performance metrics in the binary and multiclass classification of radiology reports. We compare the performance of these algorithms to that of a data-driven approach based on NLP, and find that the logistic regression classifier outperforms all other models, in both the binary and multiclass classification tasks. We then choose the logistic regression binary classifier to predict chest X-ray (CXR) / non-chest X-ray (non-CXR) labels in reports from different datasets, unseen during any training phase of any of the models. Even in unseen report collections, the binary logistic regression classifier achieves average precision values of above 0.9. Based on the regression coefficient values, we also identify frequent tokens in CXR and non-CXR reports that are features with possibly high predictive power. | Other researchers implemented ML and NLP methods to automate more general information extraction tasks. In particular, @cite_14 developed automatic methods to extract pulmonary embolism findings from thoracic CT reports, @cite_18 identified mammography findings by implementing a rule-based NLP approach, and @cite_3 developed NLP and ML methods to automatically extract BI-RADS (Breast Imaging Reporting and Data System) categories. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_3"
],
"mid": [
"2139054399",
"2768567289",
""
],
"abstract": [
"Communication of follow-up recommendations when abnormalities are identified on imaging studies is prone to error. The absence of an automated system to identify and track radiology recommendations is an important barrier to ensuring timely follow-up of patients especially with non-acute incidental findings on imaging examinations. In this paper, we present a text processing pipeline to automatically identify clinically important recommendation sentences in radiology reports. Our extraction pipeline is based on natural language processing (NLP) and supervised text classification methods. To develop and test the pipeline, we created a corpus of 800 radiology reports double annotated for recommendation sentences by a radiologist and an internist. We ran several experiments to measure the impact of different feature types and the data imbalance between positive and negative recommendation sentences. Our fully statistical approach achieved the best f-score 0.758 in identifying the critical recommendation sentences in radiology reports.",
"A deep learning convolutional neural network model can accurately classify free-text radiology reports when compared with a state-of-the-art method on reports from two academic institutions.",
""
]
} |
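The record above reports that a logistic regression classifier separates CXR from non-CXR reports and that its coefficients reveal predictive tokens. A minimal sketch of one such pipeline follows; the TF-IDF features, toy reports, and labels are assumptions made for illustration and may differ from the paper's exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "PA and lateral chest radiograph shows clear lungs.",
    "Frontal chest x-ray: no focal consolidation.",
    "CT abdomen and pelvis with contrast, no free fluid.",
    "MRI brain without contrast demonstrates no acute infarct.",
]
labels = [1, 1, 0, 0]  # 1 = CXR, 0 = non-CXR

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(reports, labels)
print(clf.predict(["Chest x-ray two views: mild cardiomegaly."]))

# tokens with the largest coefficients push predictions toward the CXR class
vec = clf.named_steps["tfidfvectorizer"]
lr = clf.named_steps["logisticregression"]
print(sorted(zip(lr.coef_[0], vec.get_feature_names_out()))[-3:])
```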
1812.10868 | 2908511362 | Shill bidding occurs when fake bids are introduced into an auction on the seller's behalf in order to artificially inflate the final price. This is typically achieved by the seller having friends bid in her auctions, or the seller controls multiple fake bidder accounts that are used for the sole purpose of shill bidding. We previously proposed a reputation system referred to as the Shill Score that indicates how likely a bidder is to be engaging in price inflating behaviour with regard to a specific seller's auctions. A potential bidder can observe the other bidders' Shill Scores, and if they are high, the bidder can elect not to participate as there is some evidence that shill bidding occurs in the seller's auctions. However, if a seller is in collusion with other sellers, or controls multiple seller accounts, she can spread the risk between the various sellers and can reduce suspicion on the shill bidder. Collusive seller behaviour impacts one of the characteristics of shill bidding that the Shill Score examines; therefore, collusive behaviour can reduce a bidder's Shill Score. This paper extends the Shill Score to detect shill bidding where multiple sellers are working in collusion with each other. We propose an algorithm that provides evidence of whether groups of sellers are colluding. Based on how tight the association is between the sellers and the level of apparent shill bidding occurring in the auctions, each participating bidder's Shill Score is adjusted appropriately to remove any advantages from seller collusion. Performance has been tested using simulated auction data and experimental results are presented. | Xu and Cheng @cite_2 propose an approach to detect shill suspects in concurrent online auctions (where multiple auctions for identical items are simultaneously taking place). Their auction model can be formally verified using a model checker according to a set of behavioural properties specified in pattern-based linear temporal logic. @cite_8 @cite_14 extend this work by verifying shill suspects using Dempster-Shafer theory of evidence. They use eBay auction data to validate whether using Dempster-Shafer theory to combine multiple sources of evidence of shilling behaviour can reduce the number of false positive results that would be generated from a single source of evidence. Later, @cite_7 study the relationship between final prices of online auctions and shill activities in eBay auctions. They train a neural network using features extracted from item descriptions, listings and other auction properties. The likelihood of shill bidding is determined by the aforementioned Dempster-Shafer shill certification technique. @cite_22 introduce an approach for verifying shill bidders using a multi-state Bayesian network, which supports reasoning under uncertainty. They describe how to construct the multi-state Bayesian network and present formulas for calculating the probabilities of a bidder being a shill and being a normal bidder. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_2"
],
"mid": [
"",
"2124219971",
"2021119487",
"2106990685",
"109229848"
],
"abstract": [
"",
"Highlights? A Web crawling agent is used to collect real auction cases. ? The Page-Rank algorithm is used to discover the critical accounts of the groups. ? We developed a ranking method for auction fraud evaluation. ? The ANFIS neural network with real auction cases was implemented in the experiments. ? We find that the proposed ranking method is effective in identifying potential collusive fraud groups. Due to the popularity of online auction markets, auction fraud has become common. Typically, fraudsters will create many accounts and then transact among these accounts to get a higher rating score. This is easy to do because of the anonymity and low fees of the online auction platform. A literature review revealed that previous studies focused on detection of abnormal rating behaviors but did not provide a ranking method to evaluate how dangerous the fraudsters are. Therefore, we propose a process which can provide a method to detect collusive fraud groups in online auctions. First, we implement a Web crawling agent to collect real auction cases and identify potential collusive fraud groups based on a k-core clustering algorithm. Second, we define a data cleaning process to remove the unrelated data. Third, we use the Page-Rank algorithm to discover the critical accounts of the groups. Fourth, we developed a ranking method for auction fraud evaluation. This method is an extension of the standard Page-Rank algorithm and combines the concepts of Web structure and risk evaluation. Finally, we conduct experiments using the Adaptive Neuro-Fuzzy Inference System (ANFIS) neural network and verify the performance of our method by applying it to real auction cases. In summary, we find that the proposed ranking method is effective in identifying potential collusive fraud groups.",
"Shill bidding has become a serious issue for innocent bidders with the growing popularity of online auctions. In this paper, we study the relationship between final prices of online auctions and shill activities. We conduct experiments on real auction data from eBay to examine the hypotheses that state how the difference between final auction price and expected auction price implies shill bidding. In the experiments, a neural network based approach is used to learn the expected auction price. In particular, we trained the Large Memory Storage and Retrieval (LAMSTAR) Neural Network based on features extracted from item descriptions, listings and other auction properties. The likelihood of shill bidding is determined by a previously proposed shill certification technique based on Dempster-Shafer theory. By employing the chi-square test of independence and logistic regression, the experimental results indicate that a higher-than-expected final auction price might be used as direct evidence to distinguish likely shill-infected auctions from trustworthy auctions, allowing for more focused evaluation of shill-suspected auctions. As such, this work contributes to providing a feasible way to identify suspicious auctions that may contain shill biddings. It may also help to develop trustworthy auction houses with shill detection services that can protect honest bidders and benefit the auction markets in both the short-term and long term.",
"We present a shilling behavior detection and verification approach for online auction systems. Assuming a model checking technique to detect shill suspects in real-time, we focus on how to verify shill suspects using Dempster-Shafer theory of evidence. To demonstrate the feasibility of our approach, we provide a case study using real eBay auction data. The analysis results show that our approach can detect shills and that using Dempster-Shafer theory to combine multiple sources of evidence of shilling behavior can reduce the number of false positive results that would be generated from a single source of evidence.",
"Online auctions have become a quite popular and effective approach in the Internet-based eMarketplace. In concurrent auctions, where multiple auctions for identical items are running simultaneously, users’ bidding behaviors become very complicated. This situation motivates shilling behaviors, in which a seller disguises himself as normal bidders in order to drive up the bidding price and make the winning bidder pay more for an auctioned item. The goal of this paper is to propose a formal approach to verifying bidding behaviors, and especially, detecting shilling behaviors in concurrent online auctions. We develop a model template for concurrent auctions and derive auction models based on auction data from two concurrent auctions. The auction model can be formally verified using the SPIN model checker for certain behavioral properties, which are specified in pattern-based LTL (Linear Temporal Logic) formulas. To illustrate the feasibility and effectiveness of our approach, we provide a case study to show how possible shill bidders can be detected."
]
} |
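Several works cited above combine evidence of shilling behaviour with Dempster-Shafer theory. Below is a self-contained sketch of Dempster's rule of combination over the frame {shill, normal}; the two evidence sources and their mass values are made up for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

S, N = frozenset({"shill"}), frozenset({"normal"})
theta = S | N  # full ignorance
m_price = {S: 0.6, theta: 0.4}            # evidence from price patterns
m_timing = {S: 0.5, N: 0.2, theta: 0.3}   # evidence from bid timing
print(dempster_combine(m_price, m_timing))
```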
1812.10868 | 2908511362 | Shill bidding occurs when fake bids are introduced into an auction on the seller's behalf in order to artificially inflate the final price. This is typically achieved by the seller having friends bid in her auctions, or the seller controls multiple fake bidder accounts that are used for the sole purpose of shill bidding. We previously proposed a reputation system referred to as the Shill Score that indicates how likely a bidder is to be engaging in price inflating behaviour with regard to a specific seller's auctions. A potential bidder can observe the other bidders' Shill Scores, and if they are high, the bidder can elect not to participate as there is some evidence that shill bidding occurs in the seller's auctions. However, if a seller is in collusion with other sellers, or controls multiple seller accounts, she can spread the risk between the various sellers and can reduce suspicion on the shill bidder. Collusive seller behaviour impacts one of the characteristics of shill bidding that the Shill Score examines; therefore, collusive behaviour can reduce a bidder's Shill Score. This paper extends the Shill Score to detect shill bidding where multiple sellers are working in collusion with each other. We propose an algorithm that provides evidence of whether groups of sellers are colluding. Based on how tight the association is between the sellers and the level of apparent shill bidding occurring in the auctions, each participating bidder's Shill Score is adjusted appropriately to remove any advantages from seller collusion. Performance has been tested using simulated auction data and experimental results are presented. | Some approaches have been proposed to detect shill bidding in real-time (i.e., while an auction is in progress) @cite_24 @cite_15 @cite_17 @cite_5 @cite_23 . The motivation is that actions can be taken to penalise the seller or shill bidder before the auction terminates to ensure that innocent bidders do not become victims. Such actions can include suspending or cancelling an auction, economic penalties, and account suspension or cancellation. However, a problem with a purely real-time shill detection method is that there is insufficient information available from just one auction. A bidder's historical behaviour must be to some extent taken into account to provide sufficient evidence of shill bidding. The real-time proposals so far are merely demonstrations of a method, but lack any sort of testing to prove their effectiveness. Furthermore, the shill behaviours outlined in these papers are arbitrary and do not reflect a consensus amongst the academic community about what actually constitutes shill bidding. | {
"cite_N": [
"@cite_24",
"@cite_23",
"@cite_5",
"@cite_15",
"@cite_17"
],
"mid": [
"2802653551",
"58586023",
"2480260901",
"2786625053",
"827185887"
],
"abstract": [
"Keywords: Auction fraud; Bidding behaviour; Live shill score; Online auction; Post-filtering process; Shill bidding. Online auctions are a popular and convenient way to engage in ecommerce. However, the amount of auction fraud has increased with the rapid surge of users participating in online auctions. Shill bidding is the most prominent type of auction fraud where a seller submits bids to inflate the price of the item without the intention of winning. Mechanisms have been proposed to detect shill bidding once an auction has finished. However, if the shill bidder is not detected during the auction, an innocent bidder can potentially be cheated by the end of the auction. Therefore, it is essential to detect and verify shill bidding in a running auction and take necessary intervention steps accordingly. This paper proposes a run-time statistical algorithm, referred to as the Live Shill Score, for detecting shill bidding in online auctions and takes appropriate actions towards the suspected shill bidders (e.g., issue a warning message, suspend the auction, etc.). The Live Shill Score algorithm also uses a Post-Filtering Process to avoid misclassification of innocent bidders. Experimental results using both simulated and commercial auction data show that our proposed algorithm can potentially detect shill bidding attempts before an auction ends.",
"Online auctions are vulnerable to shill bidders, who intend to artificially raise bidding prices, causing winning bidders to pay more than they should pay for auctioned items. Detection of such fraudulent behaviors is very difficult, especially when an auction is in progress, or “live”. This paper focuses on a formal technique to detect shilling behaviors in live online auctions. We define a monitoring agent that can continuously watch for abnormal bidding behaviors of a monitored bidder. To make the detection process efficient, we introduce a dynamic auction model (DAM), and use real-time model checking techniques to verify shilling behaviors specified in linear temporal logic (LTL). Finally, we present an algorithm for real-time shill detection, and use a case study to demonstrate the efficiency and effectiveness of our approach.",
"Monitoring the progress of auctions for fraudulent bidding activities is crucial for detecting and stopping fraud during runtime to prevent fraudsters from succeeding. To this end, we introduce a stage-based framework to monitor multiple live auctions for In-Auction Fraud (IAF). Creating a stage fraud monitoring system is different than what has been previously proposed in the very limited studies on runtime IAF detection. More precisely, we launch the IAF monitoring operation at several time points in each running auction depending on its duration. At each auction time point, our framework first detects IAF by evaluating each bidder's stage activities based on the most reliable set of IAF patterns, and then takes appropriate actions to react to dishonest bidders. We develop the proposed framework with a dynamic agent architecture where multiple monitoring agents can be created and deleted with respect to the status of their corresponding auctions (initialized, completed or cancelled). The adoption of dynamic software architecture represents an excellent solution to the scalability and time efficiency issues of IAF monitoring systems since hundreds of live auctions are held simultaneously in commercial auction houses. Every time an auction is completed or terminated, the participants' fraud scores are updated dynamically. Our approach enables us to observe each bidder in each live auction and manage his fraud score as well. We validate the IAF monitoring service through commercial auction data. We conduct three experiments to detect and react to shill-bidding fraud by employing datasets acquired from auctions of two valuable items, Palm PDA and XBOX. We observe each auction at three-time points, verifying the shill patterns that most likely happen in the corresponding stage for each one.",
"Online auctions are highly susceptible to fraud. Shill bidding is where a seller introduces fake bids into an auction to drive up the final price. If the shill bidders are not detected in run-time, innocent bidders will have already been cheated by the time the auction ends. Therefore, it is necessary to detect shill bidders in real-time and take appropriate actions according to the fraud activities. This paper presents a real-time shill bidding detection algorithm to identify the presence of shill bidding in multiple online auctions. The algorithm provides each bidder a Live Shill Score (LSS) indicating the likelihood of their potential involvement in price inflating behavior. The LSS is calculated based on the bidding patterns over a live auction and past bidding history. We have tested our algorithm on data obtained from a series of realistic simulated auctions and also commercial online auctions. Experimental results show that the real-time detection algorithm is able to prune the search space required to detect which bidders are likely to be potential shill bidders.",
"In spite of many advantages of online auctioning, serious frauds menace the auction users' interests. Today, monitoring auctions for frauds is becoming very crucial. We propose here a generic framework that covers real-time monitoring of multiple live auctions. The monitoring is performed at different auction times depending on fraud types and auction duration. We divide the real-time monitoring functionality into threefold: detecting frauds, reacting to frauds, and updating bidders' clusters. The first task examines in run-time bidding activities in ongoing auctions by applying fraud detection mechanisms. The second one determines how to react to suspicious activities by taking appropriate run-time actions against the fraudsters and infected auctions. Finally, every time an auction ends, successfully or unsuccessfully, participants' fraud scores and their clusters are updated dynamically. Through simulated auction data, we conduct an experiment to monitor live auctions for shill bidding. The latter is considered the most severe fraud in online auctions, and the most difficult to detect. More precisely, we monitor each live auction at three time points, and for each of them, we verify the shill patterns that most likely happen."
]
} |
1812.10735 | 2908378553 | Aspect-level sentiment classification is a fine-grained sentiment analysis task, compared to sentence-level classification. A sentence usually contains one or more aspects. To detect the sentiment towards a particular aspect in a sentence, previous studies have developed various methods for generating aspect-specific sentence representations. However, these studies handle each aspect of a sentence separately. In this paper, we argue that multiple aspects of a sentence are usually orthogonal, based on the observation that different aspects concentrate on different parts of the sentence. To enforce this orthogonality among aspects, we propose constrained attention networks (CAN) for multi-aspect sentiment analysis, which handle multiple aspects of a sentence simultaneously. Experimental results on two public datasets demonstrate the effectiveness of our approach. We also extend our approach to multi-task settings, significantly outperforming the state of the art. | Aspect-level sentiment classification is a fine-grained sentiment analysis task. Earlier methods are usually based on explicit features @cite_12 @cite_7 . @cite_12 uses different linguistic features for sentiment classification. @cite_7 studies aspect-based Twitter sentiment classification by applying automatic features, which are obtained from unsupervised learning methods. With the rapid development of deep learning technologies, many end-to-end neural networks have been implemented to solve this fine-grained task. @cite_17 proposes an attention-based LSTM network for aspect-level sentiment classification. @cite_21 introduces a word-aspect fusion attention layer to learn attentive representations. @cite_6 proposes interactive attention networks to generate the representations for targets and contexts separately. @cite_0 proposes dyadic memory networks for aspect-based sentiment analysis. @cite_13 @cite_1 both propose hierarchical neural network models for aspect-level sentiment classification. @cite_9 proposes a two-step attention model for targeted aspect-based sentiment analysis. @cite_19 proposes a segmentation-attention-based LSTM model for aspect-level sentiment classification. However, all these works can be categorized as single-aspect sentiment analysis, which deals with the aspects in a sentence separately, ignoring the orthogonality among multiple aspects. | {
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2296071000",
"2788810909",
"2963494756",
"2804000041",
"2964164368",
"2767439512",
"2788610610",
"2767210791",
"18470130",
"2562607067"
],
"abstract": [
"Target-dependent sentiment analysis on Twitter has attracted increasing research attention. Most previous work relies on syntax, such as automatic parse trees, which are subject to noise for informal text such as tweets. In this paper, we show that competitive results can be achieved without the use of syntax, by extracting a rich set of automatic features. In particular, we split a tweet into a left context and a right context according to a given target, using distributed word representations and neural pooling functions to extract features. Both sentiment-driven and standard embeddings are used, and a rich set of neural pooling functions are explored. Sentiment lexicons are used as an additional source of information for feature extraction. In standard evaluation, the conceptually simple method gives a 4.8 absolute improvement over the state-of-the-art on three-way targeted sentiment classification, achieving the best reported results for this task.",
"",
"",
"",
"Aspect-level sentiment classification aims at identifying the sentiment polarity of specific target in its context. Previous approaches have realized the importance of targets in sentiment classification and developed various methods with the goal of precisely modeling their contexts via generating target-specific representations. However, these studies always ignore the separate modeling of targets. In this paper, we argue that both targets and contexts deserve special treatment and need to be learned their own representations via interactive learning. Then, we propose the interactive attention networks (IAN) to interactively learn attentions in the contexts and targets, and generate the representations for targets and contexts separately. With this design, the IAN model can well represent a target and its collocative context, which is helpful to sentiment classification. Experimental results on SemEval 2014 Datasets demonstrate the effectiveness of our model.",
"This paper proposes Dyadic Memory Networks (DyMemNN), a novel extension of end-to-end memory networks (memNN) for aspect-based sentiment analysis (ABSA). Originally designed for question answering tasks, memNN operates via a memory selection operation in which relevant memory pieces are adaptively selected based on the input query. In the problem of ABSA, this is analogous to aspects and documents in which the relationship between each word in the document is compared with the aspect vector. In the standard memory networks, simple dot products or feed forward neural networks are used to model the relationship between aspect and words which lacks representation learning capability. As such, our dyadic memory networks ameliorates this weakness by enabling rich dyadic interactions between aspect and word embeddings by integrating either parameterized neural tensor compositions or holographic compositions into the memory selection operation. To this end, we propose two variations of our dyadic memory networks, namely the Tensor DyMemNN and Holo DyMemNN. Overall, our two models are end-to-end neural architectures that enable rich dyadic interaction between aspect and document which intuitively leads to better performance. Via extensive experiments, we show that our proposed models achieve the state-of-the-art performance and outperform many neural architectures across six benchmark datasets.",
"",
"Aspect-level sentiment classification is a fine-grained sentiment analysis task, which aims to predict the sentiment of a text in different aspects. One key point of this task is to allocate the appropriate sentiment words for the given aspect.Recent work exploits attention neural networks to allocate sentiment words and achieves the state-of-the-art performance. However, the prior work only attends to the sentiment information and ignores the aspect-related information in the text, which may cause mismatching between the sentiment words and the aspects when an unrelated sentiment word is semantically meaningful for the given aspect. To solve this problem, we propose a HiErarchical ATtention (HEAT) network for aspect-level sentiment classification. The HEAT network contains a hierarchical attention module, consisting of aspect attention and sentiment attention. The aspect attention extracts the aspect-related information to guide the sentiment attention to better allocate aspect-specific sentiment words of the text. Moreover, the HEAT network supports to extract the aspect terms together with aspect-level sentiment classification by introducing the Bernoulli attention mechanism. To verify the proposed method, we conduct experiments on restaurant and laptop review data sets from SemEval at both the sentence level and the review level. The experimental results show that our model better allocates appropriate sentiment expressions for a given aspect benefiting from the guidance of aspect terms. Moreover, our method achieves better performance on aspect-level sentiment classification than state-of-the-art models.",
"In this paper we examine different linguistic features for sentimental polarity classification, and perform a comparative study on this task between blog and review data. We found that results on blog are much worse than reviews and investigated two methods to improve the performance on blogs. First we explored information retrieval based topic analysis to extract relevant sentences to the given topics for polarity classification. Second, we adopted an adaptive method where we train classifiers from review data and incorporate their hypothesis as features. Both methods yielded performance gain for polarity classification on blog data.",
""
]
} |
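The record's abstract argues that attention weights for different aspects should concentrate on different parts of the sentence. The numpy sketch below shows one orthogonality penalty of the form ||A A^T - I||_F^2 over per-aspect attention rows; this specific penalty form and the toy weights are assumptions for illustration, not necessarily the paper's exact loss.

```python
import numpy as np

def orthogonality_penalty(A):
    """A: (num_aspects, seq_len) rows of attention weights (each sums to 1).
    Penalizes overlap between different aspects' attention distributions."""
    G = A @ A.T
    return np.sum((G - np.eye(A.shape[0])) ** 2)

# two aspects over a 5-token sentence
disjoint = np.array([[0.9, 0.1, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.1, 0.8, 0.1]])
overlap = np.array([[0.8, 0.2, 0.0, 0.0, 0.0],
                    [0.7, 0.3, 0.0, 0.0, 0.0]])
print("disjoint attention:", orthogonality_penalty(disjoint))
print("overlapping attention:", orthogonality_penalty(overlap))  # much larger
```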
1812.10563 | 2906886582 | The setting of the classic prophet inequality is as follows: a gambler is shown the probability distributions of @math independent, non-negative random variables with finite expectations. In their indexed order, a value is drawn from each distribution, and after every draw the gambler may choose to accept the value and end the game, or discard the value permanently and continue the game. What is the best performance that the gambler can achieve in comparison to a prophet who can always choose the highest value? Krengel, Sucheston, and Garling solved this problem in 1978, showing that there exists a strategy for which the gambler can achieve half as much reward as the prophet in expectation. Furthermore, this result is tight. In this work, we consider a setting in which the gambler is allowed much less information. Suppose that the gambler can only take one sample from each of the distributions before playing the game, instead of knowing the full distributions. We provide a simple and intuitive algorithm that recovers the original approximation of @math . Our algorithm works against even an almighty adversary who always chooses a worst-case ordering, rather than the standard offline adversary. The result also has implications for mechanism design -- there is much interest in designing competitive auctions with a finite number of samples from value distributions rather than full distributional knowledge. | The constraint of being able to choose one item has been expanded to many combinatorial domains including multiple choices @cite_1 , matroids @cite_2 , and general downward-closed set systems @cite_12 . The connection between prophet inequalities and auction design was first noted by Hajiaghayi, Kleinberg, and Sandholm @cite_11 . Chawla et al. @cite_7 explored this in detail, showing that prophet inequalities could be used to answer questions about the performance of posted price mechanisms. Recently, Correa et al. explored the reverse direction, showing that posted price mechanisms and their guarantees could be turned into results about prophet inequalities @cite_15 . Many threads involving combinatorial prophet inequalities and posted price mechanisms were united and generalized by Dütting et al. @cite_3 . Another recent connection, between prophet inequalities and contention resolution schemes (a technique for solving combinatorial optimization problems), was noted by Lee and Singla @cite_8 . | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_15",
"@cite_12",
"@cite_11"
],
"mid": [
"2077124610",
"2810576705",
"2950397526",
"2734590272",
"2949499418",
"",
"2329590824",
""
],
"abstract": [
"We study the classic mathematical economics problem of Bayesian optimal mechanism design where a principal aims to optimize expected revenue when allocating resources to self-interested agents with preferences drawn from a known distribution. In single parameter settings (i.e., where each agent's preference is given by a single private value for being served and zero for not being served) this problem is solved [20]. Unfortunately, these single parameter optimal mechanisms are impractical and rarely employed [1], and furthermore the underlying economic theory fails to generalize to the important, relevant, and unsolved multi-dimensional setting (i.e., where each agent's preference is given by multiple values for each of the multiple services available) [25]. In contrast to the theory of optimal mechanisms we develop a theory of sequential posted price mechanisms, where agents in sequence are offered take-it-or-leave-it prices. We prove that these mechanisms are approximately optimal in single-dimensional settings. These posted-price mechanisms avoid many of the properties of optimal mechanisms that make the latter impractical. Furthermore, these mechanisms generalize naturally to multi-dimensional settings where they give the first known approximations to the elusive optimal multi-dimensional mechanism design problem. In particular, we solve multi-dimensional multi-unit auction problems and generalizations to matroid feasibility constraints. The constant approximations we obtain range from 1.5 to 8. For all but one case, our posted price sequences can be computed in polynomial time. This work can be viewed as an extension and improvement of the single-agent algorithmic pricing work of [9] to the setting of multiple agents where the designer has combinatorial feasibility constraints on which agents can simultaneously obtain each service.",
"Online contention resolution schemes (OCRSs) were proposed by Feldman, Svensson, and Zenklusen as a generic technique to round a fractional solution in the matroid polytope in an online fashion. It has found applications in several stochastic combinatorial problems where there is a commitment constraint: on seeing the value of a stochastic element, the algorithm has to immediately and irrevocably decide whether to select it while always maintaining an independent set in the matroid. Although OCRSs immediately lead to prophet inequalities, these prophet inequalities are not optimal. Can we instead use prophet inequalities to design optimal OCRSs? We design the first optimal @math -OCRS for matroids by reducing the problem to designing a matroid prophet inequality where we compare to the stronger benchmark of an ex-ante relaxation. We also introduce and design optimal @math -random order CRSs for matroids, which are similar to OCRSs but the arrival is chosen uniformly at random.",
"For Bayesian combinatorial auctions, we present a general framework for approximately reducing the mechanism design problem for multiple buyers to single buyer sub-problems. Our framework can be applied to any setting which roughly satisfies the following assumptions: (i) buyers' types must be distributed independently (not necessarily identically), (ii) objective function must be linearly separable over the buyers, and (iii) except for the supply constraints, there should be no other inter-buyer constraints. Our framework is general in the sense that it makes no explicit assumption about buyers' valuations, type distributions, and single buyer constraints (e.g., budget, incentive compatibility, etc). We present two generic multi buyer mechanisms which use single buyer mechanisms as black boxes; if an @math -approximate single buyer mechanism can be constructed for each buyer, and if no buyer requires more than @math of all units of each item, then our generic multi buyer mechanisms are @math -approximation of the optimal multi buyer mechanism, where @math is a constant which is at least @math . Observe that @math is at least 1 2 (for @math ) and approaches 1 as @math . As a byproduct of our construction, we present a generalization of prophet inequalities. Furthermore, as applications of our framework, we present multi buyer mechanisms with improved approximation factor for several settings from the literature.",
"We present a general framework for stochastic online maximization problems with combinatorial feasibility constraints. The framework establishes prophet inequalities by constructing price-based online approximation algorithms, a natural extension of threshold algorithms for settings beyond binary selection. Our analysis takes the form of an extension theorem: we derive sufficient conditions on prices when all weights are known in advance, then prove that the resulting approximation guarantees extend directly to stochastic settings. Our framework unifies and simplifies much of the existing literature on prophet inequalities and posted price mechanisms, and is used to derive new and improved results for combinatorial markets (with and without complements), multi-dimensional matroids, and sparse packing problems. Finally, we highlight a surprising connection between the smoothness framework for bounding the price of anarchy of mechanisms and our framework, and show that many smooth mechanisms can be recast as posted price mechanisms with comparable performance guarantees.",
"Consider a gambler who observes a sequence of independent, non-negative random numbers and is allowed to stop the sequence at any time, claiming a reward equal to the most recent observation. The famous prophet inequality of Krengel, Sucheston, and Garling asserts that a gambler who knows the distribution of each random variable can achieve at least half as much reward, in expectation, as a \"prophet\" who knows the sampled values of each random variable and can choose the largest one. We generalize this result to the setting in which the gambler and the prophet are allowed to make more than one selection, subject to a matroid constraint. We show that the gambler can still achieve at least half as much reward as the prophet; this result is the best possible, since it is known that the ratio cannot be improved even in the original prophet inequality, which corresponds to the special case of rank-one matroids. Generalizing the result still further, we show that under an intersection of p matroid constraints, the prophet's reward exceeds the gambler's by a factor of at most O(p), and this factor is also tight. Beyond their interest as theorems about pure online algorithms or optimal stopping rules, these results also have applications to mechanism design. Our results imply improved bounds on the ability of sequential posted-price mechanisms to approximate Bayesian optimal mechanisms in both single-parameter and multi-parameter settings. In particular, our results imply the first efficiently computable constant-factor approximations to the Bayesian optimal revenue in certain multi-parameter settings.",
"",
"We study generalizations of the Prophet Inequality'' and Secretary Problem'', where the algorithm is restricted to an arbitrary downward-closed set system. For 0,1 values, we give O(n)-competitive algorithms for both problems. This is close to the Omega(n log n) lower bound due to Babaioff, Immorlica, and Kleinberg. For general values, our results translate to O(log(n) log(r))-competitive algorithms, where r is the cardinality of the largest feasible set. This resolves (up to the O(loglog(n) log(r)) factor) an open question posed to us by Bobby Kleinberg.",
""
]
} |
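The record's abstract claims that one sample per distribution suffices for a 1/2-approximation. One natural strategy consistent with that claim, accepting the first value that exceeds the largest of the n samples, can be checked empirically; the strategy and the distributions below are illustrative assumptions, not a statement of the paper's exact algorithm.

```python
import random

def play(dists):
    """dists: zero-argument samplers. Returns (gambler, prophet) reward."""
    threshold = max(d() for d in dists)  # one sample per distribution
    values = [d() for d in dists]        # the actual run, in index order
    gambler = next((v for v in values if v > threshold), 0.0)
    return gambler, max(values)

random.seed(0)
dists = [lambda: random.expovariate(1.0),
         lambda: random.uniform(0.0, 2.0),
         lambda: random.paretovariate(3.0)]
runs = [play(dists) for _ in range(20000)]
ratio = sum(g for g, _ in runs) / sum(p for _, p in runs)
print(ratio)  # empirically around or above 0.5 for this instance
```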
1812.10687 | 2967737057 | Deep learning provides a powerful tool for robotic perception in the open world. However, real-world robotic systems, especially mobile robots, must be able to react intelligently and safely even in unexpected circumstances. This requires a system that knows what it knows, and can estimate its own uncertainty for unfamiliar, out-of-distribution observations. Approximate Bayesian approaches are commonly used to estimate uncertainty for neural network predictions, but struggle with out-of-distribution observations. Generative models can in principle detect out-of-distribution observations as those with a low estimated density, but are overly pessimistic as an uncertainty measure, since the mere presence of an out-of-distribution input does not by itself indicate an unsafe situation. Intuitively, we would like a perception system that can detect when task-salient parts of the image are unfamiliar or uncertain, while ignoring task-irrelevant features. In this paper, we present a method for uncertainty-aware robotic perception that combines generative modeling and model uncertainty. Our method estimates an uncertainty measure about the model’s prediction, taking into account an explicit generative model of the observation distribution to handle out-of-distribution inputs. We evaluate our method on an action-conditioned collision prediction task with both simulated and real data, and demonstrate that our approach improves on a variety of Bayesian neural network techniques. | Prior work has investigated improving the calibration of deep models. Bayesian neural networks based on variational inference have been widely applied to neural network training @cite_18 @cite_5 . Bootstrapping provides an effective alternative to variational Bayesian methods @cite_13 @cite_4 , and simple ensembles (without dataset resampling) typically perform just as well as full bootstrapping with deep neural networks @cite_13 @cite_4 . Indeed, in our experiments, ensembles provide the best uncertainty estimates compared to other Bayesian neural network methods, though our approach improves on all of them. However, while these methods estimate a posterior distribution over the model parameters, they do not explicitly reason about the data distribution itself. | {
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_13",
"@cite_4"
],
"mid": [
"2951266961",
"",
"2963938771",
"2963238274"
],
"abstract": [
"We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.",
"",
"Efficient exploration remains a major challenge for reinforcement learning (RL). Common dithering strategies for exploration, such as '-greedy, do not carry out temporally-extended (or deep) exploration; this can lead to exponentially larger data requirements. However, most algorithms for statistically efficient RL are not computationally tractable in complex environments. Randomized value functions offer a promising approach to efficient exploration with generalization, but existing algorithms are not compatible with nonlinearly parameterized value functions. As a first step towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially improves learning speed and cumulative performance across most games.",
"Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet."
]
} |
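The related_work above highlights simple ensembles as strong uncertainty estimators. A toy sketch of the idea: fit a few identically specified regressors that differ only in their random initialization and read their disagreement as uncertainty, which typically grows on out-of-distribution inputs. The data and model sizes below are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(200, 1))
y = np.sin(2.0 * X[:, 0]) + 0.1 * rng.normal(size=200)

# an ensemble of identically specified nets, differing only by init seed
ensemble = [MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=seed).fit(X, y) for seed in range(5)]

X_test = np.array([[0.0], [1.5], [6.0]])  # the last point is out-of-distribution
preds = np.stack([m.predict(X_test) for m in ensemble])
print("mean:", preds.mean(axis=0))
print("std :", preds.std(axis=0))  # disagreement is typically largest at x = 6.0
```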
1812.10687 | 2967737057 | Deep learning provides a powerful tool for robotic perception in the open world. However, real-world robotic systems, especially mobile robots, must be able to react intelligently and safely even in unexpected circumstances. This requires a system that knows what it knows, and can estimate its own uncertainty for unfamiliar, out-of-distribution observations. Approximate Bayesian approaches are commonly used to estimate uncertainty for neural network predictions, but struggle with out-of-distribution observations. Generative models can in principle detect out-of-distribution observations as those with a low estimated density, but are overly pessimistic as an uncertainty measure, since the mere presence of an out-of-distribution input does not by itself indicate an unsafe situation. Intuitively, we would like a perception system that can detect when task-salient parts of the image are unfamiliar or uncertain, while ignoring task-irrelevant features. In this paper, we present a method for uncertainty-aware robotic perception that combines generative modeling and model uncertainty. Our method estimates an uncertainty measure about the model’s prediction, taking into account an explicit generative model of the observation distribution to handle out-of-distribution inputs. We evaluate our method on an action-conditioned collision prediction task with both simulated and real data, and demonstrate that our approach improves on a variety of Bayesian neural network techniques. | Our approach aims to map observations into the training distribution, which is similar to the goals of domain adaptation methods that transform target domain images into the source domain, and vice versa, typically for simulation-to-real-world transfer @cite_12 @cite_1 @cite_22 . However, these prior methods do not explicitly reason about uncertainty, and typically employ generative adversarial network models that are known to be poorly calibrated. | {
"cite_N": [
"@cite_1",
"@cite_22",
"@cite_12"
],
"mid": [
"2799034341",
"2767657961",
"2786551991"
],
"abstract": [
"Developing visual perception models for active agents and sensorimotor control in the physical world are cumbersome as existing algorithms are too slow to efficiently learn in real-time and robots are fragile and costly. This has given rise to learning-in-simulation which consequently casts a question on whether the results transfer to real-world. In this paper, we investigate developing real-world perception for active agents, propose Gibson Environment for this purpose, and showcase a set of perceptual tasks learned therein. Gibson is based upon virtualizing real spaces, rather than artificially designed ones, and currently includes over 1400 floor spaces from 572 full buildings. The main characteristics of Gibson are: I. being from the real-world and reflecting its semantic complexity, II. having an internal synthesis mechanism \"Goggles\" enabling deploying the trained models in real-world without needing domain adaptation, III. embodiment of agents and making them subject to constraints of physics and space.",
"Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains.",
"This paper deals with the reality gap from a novel perspective, targeting transferring Deep Reinforcement Learning (DRL) policies learned in simulated environments to the real-world domain for visual control tasks. Instead of adopting the common solutions to the problem by increasing the visual fidelity of synthetic images output from simulators during the training phase, this paper seeks to tackle the problem by translating the real-world image streams back to the synthetic domain during the deployment phase, to make the robot feel at home. We propose this as a lightweight, flexible, and efficient solution for visual control, as 1) no extra transfer steps are required during the expensive training of DRL agents in simulation; 2) the trained DRL agents will not be constrained to being deployable in only one specific real-world environment; 3) the policy training and the transfer operations are decoupled, and can be conducted in parallel. Besides this, we propose a conceptually simple yet very effective shift loss to constrain the consistency between subsequent frames, eliminating the need for optical flow. We validate the shift loss for artistic style transfer for videos and domain adaptation, and validate our visual control approach in real-world robot experiments. A video of our results is available at: this https URL"
]
} |
1812.10352 | 2906169349 | We propose a novel regularization algorithm to train deep neural networks, in which data at training time is severely biased. Since a neural network efficiently learns the data distribution, a network is likely to learn the bias information to categorize input data. This leads to poor performance at test time if the bias is, in fact, irrelevant to the categorization. In this paper, we formulate a regularization loss based on the mutual information between the feature embedding and the bias. Based on the idea of minimizing this mutual information, we propose an iterative algorithm to unlearn the bias information. We employ an additional network to predict the bias distribution and train the network adversarially against the feature embedding network. At the end of learning, the bias prediction network is not able to predict the bias, not because it is poorly trained, but because the feature embedding network successfully unlearns the bias information. We also demonstrate quantitative and qualitative experimental results which show that our algorithm effectively removes the bias information from the feature embedding. | The existence of unknown unknowns was experimentally demonstrated by Attenberg et al. in @cite_16 . The authors separated the decisions rendered by predictive models into four conceptual categories: known knowns, known unknowns, unknown knowns, and unknown unknowns. Subsequently, the authors developed and participated in a "challenge", which challenged the participants to manually find the unknown unknowns to fool the machine. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2059362837"
],
"abstract": [
"We present techniques for gathering data that expose errors of automatic predictive models. In certain common settings, traditional methods for evaluating predictive models tend to miss rare but important errors—most importantly, cases for which the model is confident of its prediction (but wrong). In this article, we present a system that, in a game-like setting, asks humans to identify cases that will cause the predictive model-based system to fail. Such techniques are valuable in discovering problematic cases that may not reveal themselves during the normal operation of the system and may include cases that are rare but catastrophic. We describe the design of the system, including design iterations that did not quite work. In particular, the system incentivizes humans to provide examples that are difficult for the model to handle by providing a reward proportional to the magnitude of the predictive model's error. The humans are asked to “Beat the Machine” and find cases where the automatic model (“the Machine”) is wrong. Experiments show that the humans using Beat the Machine identify more errors than do traditional techniques for discovering errors in predictive models, and, indeed, they identify many more errors where the machine is (wrongly) confident it is correct. Furthermore, those cases the humans identify seem to be not simply outliers, but coherent areas missed completely by the model. Beat the Machine identifies the “unknown unknowns.” Beat the Machine has been deployed at an industrial scale by several companies. The main impact has been that firms are changing their perspective on and practice of evaluating predictive models. “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know.” --Donald Rumsfeld"
]
} |
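The ``Beat the Machine'' game in @cite_16 rewards people in proportion to the magnitude of the model's error on the cases they submit. A minimal sketch of that reward rule, assuming a probabilistic classifier; the function name and payout scale are illustrative, not taken from the paper:

```python
import numpy as np

def beat_the_machine_reward(model_probs, true_label):
    """Pay a challenger in proportion to the model's error on the submitted
    case: confidently wrong predictions earn the most."""
    predicted = int(np.argmax(model_probs))
    if predicted == true_label:
        return 0.0  # the machine was right, so no reward
    # error magnitude: the confidence the model placed on its wrong answer
    return float(model_probs[predicted])

# The model puts 0.9 on class 0, but the true class is 1 -> reward 0.9.
print(beat_the_machine_reward(np.array([0.9, 0.1]), true_label=1))
```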
1812.10352 | 2906169349 | We propose a novel regularization algorithm to train deep neural networks, in which data at training time is severely biased. Since a neural network efficiently learns data distribution, a network is likely to learn the bias information to categorize input data. It leads to poor performance at test time, if the bias is, in fact, irrelevant to the categorization. In this paper, we formulate a regularization loss based on mutual information between feature embedding and bias. Based on the idea of minimizing this mutual information, we propose an iterative algorithm to unlearn the bias information. We employ an additional network to predict the bias distribution and train the network adversarially against the feature embedding network. At the end of learning, the bias prediction network is not able to predict the bias not because it is poorly trained, but because the feature embedding network successfully unlearns the bias information. We also demonstrate quantitative and qualitative experimental results which show that our algorithm effectively removes the bias information from feature embedding. | In the context of the UDA problem, disentangled feature representation has been widely researched in the literature. The application of disentangled features has been explored in detail in @cite_22 @cite_8 . The authors constructed new face images using a disentangled feature input while preserving the original identities. Building on generative adversarial networks @cite_4 , further methods for learning disentangled representations @cite_21 @cite_17 @cite_23 have been proposed. In particular, Chen et al. proposed the InfoGAN @cite_21 method, which learns and preserves semantic context without supervision. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_21",
"@cite_23",
"@cite_17"
],
"mid": [
"2099471712",
"1955369839",
"2594202937",
"2434741482",
"2737047298",
"2798813225"
],
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Face recognition under viewpoint and illumination changes is a difficult problem, so many researchers have tried to solve this problem by producing the pose- and illumination- invariant feature. [26] changed all arbitrary pose and illumination images to the frontal view image to use for the invariant feature. In this scheme, preserving identity while rotating pose image is a crucial issue. This paper proposes a new deep architecture based on a novel type of multitask learning, which can achieve superior performance in rotating to a target-pose face image from an arbitrary pose and illumination image while preserving identity. The target pose can be controlled by the user's intention. This novel type of multi-task model significantly improves identity preservation over the single task model. By using all the synthesized controlled pose images, called Controlled Pose Image (CPI), for the pose-illumination-invariant feature and voting among the multiple face recognition results, we clearly outperform the state-of-the-art algorithms by more than 4 6 on the MultiPIE dataset.",
"Deep neural networks (DNNs) trained on large-scale datasets have recently achieved impressive improvements in face recognition. But a persistent challenge remains to develop methods capable of handling large pose variations that are relatively under-represented in training data. This paper presents a method for learning a feature representation that is invariant to pose, without requiring extensive pose coverage in training data. We first propose to use a synthesis network for generating non-frontal views from a single frontal image, in order to increase the diversity of training data while preserving accurate facial details that are critical for identity discrimination. Our next contribution is a multi-source multi-task DNN that seeks a rich embedding representing identity information, as well as information such as pose and landmark locations. Finally, we propose a Siamese network to explicitly disentangle identity and pose, by demanding alignment between the feature reconstructions through various combinations of identity and pose features obtained from two images of the same subject. Experiments on face datasets in both controlled and wild scenarios, such as MultiPIE, LFW and 300WLP, show that our method consistently outperforms the state-of-the-art, especially on images with large head pose variations.",
"This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.",
"The large pose discrepancy between two face images is one of the key challenges in face recognition. Conventional approaches for pose-invariant face recognition either perform face frontalization on, or learn a pose-invariant representation from, a non-frontal face image. We argue that it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes Disentangled Representation learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator allows DR-GAN to learn a generative and discriminative representation, in addition to image synthesis. Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as the input, and generate one unified representation along with an arbitrary number of synthetic images. Quantitative and qualitative evaluation on both controlled and in-the-wild databases demonstrate the superiority of DR-GAN over the state of the art.",
"We address the problem of image feature learning for the applications where multiple factors exist in the image generation process and only some factors are of our interest. We present a novel multi-task adversarial network based on an encoder-discriminator-generator architecture. The encoder extracts a disentangled feature representation for the factors of interest. The discriminators classify each of the factors as individual tasks. The encoder and the discriminators are trained cooperatively on factors of interest, but in an adversarial way on factors of distraction. The generator provides further regularization on the learned feature by reconstructing images with shared factors as the input image. We design a new optimization scheme to stabilize the adversarial optimization process when multiple distributions need to be aligned. The experiments on face recognition and font recognition tasks show that our method outperforms the state-of-the-art methods in terms of both recognizing the factors of interest and generalization to images with unseen variations."
]
} |
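InfoGAN @cite_21 makes a latent code recoverable from generated samples by maximizing a variational lower bound on the mutual information between the code and the generator's output. A minimal PyTorch sketch of that term; the toy MLP shapes and the categorical code are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Hypothetical modules: G maps (noise, code) -> sample; the auxiliary head Q
# predicts logits over the categorical code from the generated sample.
G = nn.Sequential(nn.Linear(64 + 10, 128), nn.ReLU(), nn.Linear(128, 784))
Q = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

def info_loss(batch_size=32, n_codes=10):
    """Variational lower bound on I(c; G(z, c)): maximizing log Q(c | G(z, c))
    is implemented as minimizing cross-entropy against the sampled code c."""
    z = torch.randn(batch_size, 64)
    c = torch.randint(0, n_codes, (batch_size,))
    c_onehot = nn.functional.one_hot(c, n_codes).float()
    x_fake = G(torch.cat([z, c_onehot], dim=1))
    return nn.functional.cross_entropy(Q(x_fake), c)

# Added (with a weight) to both G's and Q's objectives next to the GAN losses.
info_loss().backward()
```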
1812.10352 | 2906169349 | We propose a novel regularization algorithm to train deep neural networks, in which data at training time is severely biased. Since a neural network efficiently learns data distribution, a network is likely to learn the bias information to categorize input data. It leads to poor performance at test time, if the bias is, in fact, irrelevant to the categorization. In this paper, we formulate a regularization loss based on mutual information between feature embedding and bias. Based on the idea of minimizing this mutual information, we propose an iterative algorithm to unlearn the bias information. We employ an additional network to predict the bias distribution and train the network adversarially against the feature embedding network. At the end of learning, the bias prediction network is not able to predict the bias not because it is poorly trained, but because the feature embedding network successfully unlearns the bias information. We also demonstrate quantitative and qualitative experimental results which show that our algorithm effectively removes the bias information from feature embedding. | These studies highlighted the importance of feature disentanglement, which is the first step in understanding the information contained within the feature. Inspired by various applications, we have attempted to remove certain information from the feature. In contrast to InfoGAN @cite_21 , we minimize the mutual information in order to unlearn. However, the removal of information is a concept antithetical to learning and is also referred to as unlearning. Although the concept itself is the complete opposite of learning, it can help learning algorithms. Herein, we describe an algorithm for removing target information and present experimental results and analysis to support the proposed algorithm. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2434741482"
],
"abstract": [
"This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods."
]
} |
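The unlearning scheme sketched in the abstract above alternates between a bias predictor that tries to read the bias out of the embedding and a feature network trained to defeat it. A schematic PyTorch sketch under assumed shapes; pushing the bias head toward maximum entropy is one way to drive the mutual information down, and may differ from the paper's exact objective:

```python
import torch
import torch.nn as nn

feature_net = nn.Sequential(nn.Linear(784, 128), nn.ReLU())  # embedding f(x)
class_head  = nn.Linear(128, 10)                             # target task
bias_head   = nn.Linear(128, 2)                              # bias predictor

opt_main = torch.optim.Adam(
    list(feature_net.parameters()) + list(class_head.parameters()), lr=1e-3)
opt_bias = torch.optim.Adam(bias_head.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x, y, b, lam=0.1):
    # (1) train the bias predictor on the frozen embedding
    opt_bias.zero_grad()
    ce(bias_head(feature_net(x).detach()), b).backward()
    opt_bias.step()
    # (2) train the embedding for the task while unlearning the bias:
    # push the bias head's output toward a uniform (maximum-entropy) guess
    opt_main.zero_grad()
    feat = feature_net(x)
    logp = nn.functional.log_softmax(bias_head(feat), dim=1)
    entropy = -(logp.exp() * logp).sum(dim=1).mean()
    (ce(class_head(feat), y) - lam * entropy).backward()
    opt_main.step()
```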
1812.10193 | 2906169906 | Recent advances in computing have allowed for the possibility to collect large amounts of data on personal activities and private living spaces. Collecting and publishing a dataset in this environment can cause concerns over privacy of the individuals in the dataset. In this paper we examine these privacy concerns. In particular, given a target application, how can we mask sensitive attributes in the data while preserving the utility of the data in that target application. Our focus is on protecting attributes that are hidden and can be inferred from the data by machine learning algorithms. We propose a generic framework that (1) removes the knowledge useful for inferring sensitive information, but (2) preserves the knowledge relevant to a given target application. We use deep neural networks and generative adversarial networks (GAN) to create privacy-preserving perturbations. Our noise-generating network is compact and efficient for running on mobile devices. Through extensive experiments, we show that our method outperforms conventional methods in effectively hiding the sensitive attributes while guaranteeing high performance for the target application. Our results hold for new neural network architectures, not seen before during training and are suitable for training new classifiers. | A lot of work has focused on manipulating existing training and inference algorithms to protect the privacy of training data, for example, a differentially private training algorithm for deep neural networks in which noise is added to the gradient during each iteration @cite_44 @cite_3 @cite_17 , and a ``teacher-student'' model using aggregated information instead of the original signals @cite_33 . It has also been proposed to train a classifier in the cloud, with multiple users uploading their perturbed data to a central node without ever revealing their original private data @cite_29 @cite_12 . | {
"cite_N": [
"@cite_33",
"@cite_29",
"@cite_3",
"@cite_44",
"@cite_12",
"@cite_17"
],
"mid": [
"2950602864",
"2140596092",
"2520442116",
"2473418344",
"2535838896",
"2053637704"
],
"abstract": [
"Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as \"teachers\" for a \"student\" model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.",
"In this paper, we consider the design of a system in which Internet-connected mobile users contribute sensor data as training samples, and collaborate on building a model for classification tasks such as activity or context recognition. Constructing the model can naturally be performed by a service running in the cloud, but users may be more inclined to contribute training samples if the privacy of these data could be ensured. Thus, in this paper, we focus on privacy-preserving collaborative learning for the mobile setting, which addresses several competing challenges not previously considered in the literature: supporting complex classification methods like support vector machines, respecting mobile computing and communication constraints, and enabling user-determined privacy levels. Our approach, Pickle, ensures classification accuracy even in the presence of significantly perturbed training samples, is robust to methods that attempt to infer the original data or poison the model, and imposes minimal costs. We validate these claims using a user study, many real-world datasets and two different implementations of Pickle.",
"In recent years, deep learning has spread beyond both academia and industry with many exciting real-world applications. The development of deep learning has presented obvious privacy issues. However, there has been lack of scientific study about privacy preservation in deep learning. In this paper, we concentrate on the auto-encoder, a fundamental component in deep learning, and propose the deep private auto-encoder (dPA). Our main idea is to enforce e-differential privacy by perturbing the objective functions of the traditional deep auto-encoder, rather than its results. We apply the dPA to human behavior prediction in a health social network. Theoretical analysis and thorough experimental evaluations show that the dPA is highly effective and efficient, and it significantly outperforms existing solutions.",
"Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.",
"Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model while training data remains distributed over a large number of clients each with unreliable and relatively slow network connections. We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model. The typical clients in this setting are mobile phones, and communication efficiency is of the utmost importance. In this paper, we propose two ways to reduce the uplink communication costs: structured updates, where we directly learn an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, where we learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling before sending it to the server. Experiments on both convolutional and recurrent networks show that the proposed methods can reduce the communication cost by two orders of magnitude.",
"Deep learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of deep learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of deep learning techniques is directly proportional to the amount of data available for training. Massive data collection required for deep learning presents obvious privacy issues. Users' personal, highly sensitive data such as photos and voice recordings is kept indefinitely by the companies that collect it. Users can neither delete it, nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extra-judicial surveillance. Many data owners--for example, medical institutions that may want to apply deep learning methods to clinical records--are prevented by privacy and confidentiality concerns from sharing the data and thus benefitting from large-scale deep learning. In this paper, we design, implement, and evaluate a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets. We exploit the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously. Our system lets participants train independently on their own datasets and selectively share small subsets of their models' key parameters during training. This offers an attractive point in the utility privacy tradeoff space: participants preserve the privacy of their respective data while still benefitting from other participants' models and thus boosting their learning accuracy beyond what is achievable solely on their own inputs. We demonstrate the accuracy of our privacy-preserving deep learning on benchmark datasets."
]
} |
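The noisy-gradient training of @cite_44 clips each per-example gradient and adds Gaussian noise before the update. A minimal NumPy sketch of one such step on a least-squares toy model; the clipping norm and noise multiplier are illustrative:

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One differentially-private step: clip each per-example gradient to
    L2 norm <= clip, sum, add Gaussian noise, then average and descend."""
    rng = rng or np.random.default_rng(0)
    grads = []
    for xi, yi in zip(X, y):
        g = 2.0 * (xi @ w - yi) * xi                           # per-example grad
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))   # gradient clipping
        grads.append(g)
    noisy = np.sum(grads, axis=0) + rng.normal(0, sigma * clip, size=w.shape)
    return w - lr * noisy / len(X)

w = dp_sgd_step(np.zeros(3), np.ones((8, 3)), np.ones(8))
```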
1812.10193 | 2906169906 | Recent advances in computing have allowed for the possibility to collect large amounts of data on personal activities and private living spaces. Collecting and publishing a dataset in this environment can cause concerns over privacy of the individuals in the dataset. In this paper we examine these privacy concerns. In particular, given a target application, how can we mask sensitive attributes in the data while preserving the utility of the data in that target application. Our focus is on protecting attributes that are hidden and can be inferred from the data by machine learning algorithms. We propose a generic framework that (1) removes the knowledge useful for inferring sensitive information, but (2) preserves the knowledge relevant to a given target application. We use deep neural networks and generative adversarial networks (GAN) to create privacy-preserving perturbations. Our noise-generating network is compact and efficient for running on mobile devices. Through extensive experiments, we show that our method outperforms conventional methods in effectively hiding the sensitive attributes while guaranteeing high performance for the target application. Our results hold for new neural network architectures, not seen before during training and are suitable for training new classifiers. | A different approach to preserving privacy is making sure sensitive elements of the data are removed before publishing it, often called privacy-preserving data publishing (PPDP) @cite_15 . A main line of work in this field focuses on transforming numerical data into a secondary feature space, such that certain statistical properties are preserved and data mining tasks can be done with minimal performance loss @cite_43 @cite_8 @cite_9 . These methods guarantee data utility only in this secondary space. This means that a classifier trained on the perturbed data is not guaranteed to perform well on original data. This can be troublesome in a scenario where a classification model is trained on public, perturbed data and is going to be deployed on users' personal devices to perform a task locally on non-perturbed private user data. Our perturbed data can be used in conjunction with original data in specified target applications. In addition, some of the methods in this category rely on expensive computations, which render them infeasible on large datasets. | {
"cite_N": [
"@cite_43",
"@cite_15",
"@cite_9",
"@cite_8"
],
"mid": [
"2129533844",
"2168757172",
"2160553465",
""
],
"abstract": [
"Due to growing concerns about the privacy of personal information, organizations that use their customers' records in data mining activities are forced to take actions to protect the privacy of the individuals. A frequently used disclosure protection method is data perturbation. When used for data mining, it is desirable that perturbation preserves statistical relationships between attributes, while providing adequate protection for individual confidential data. To achieve this goal, we propose a kd-tree based perturbation method, which recursively partitions a data set into smaller subsets such that data records within each subset are more homogeneous after each partition. The confidential data in each final subset are then perturbed using the subset average. An experimental study is conducted to show the effectiveness of the proposed method",
"Methods for privacy protection of microdata include grouping and publication of data perturbed with random noise. The authors suggest a variant of the latter in which the noise is generated by bootstrapping from the original empirical distribution. The published data distribution then essentially consists of a convolution of a distribution with itself and the distribution can be recovered, although the individual observations remain protected. The authors explore the trade-off between privacy protection based on bootstrapping and the efficiency of estimation using the published data. For reasonable loss measures, the trade-off is hyperbolic in character. Some encouraging simulation results are reported.",
"This paper explores the possibility of using multiplicative random projection matrices for privacy preserving distributed data mining. It specifically considers the problem of computing statistical aggregates like the inner product matrix, correlation coefficient matrix, and Euclidean distance matrix from distributed privacy sensitive data possibly owned by multiple parties. This class of problems is directly related to many other data-mining problems such as clustering, principal component analysis, and classification. This paper makes primary contributions on two different grounds. First, it explores independent component analysis as a possible tool for breaching privacy in deterministic multiplicative perturbation-based models such as random orthogonal transformation and random rotation. Then, it proposes an approximate random projection-based technique to improve the level of privacy protection while still preserving certain statistical characteristics of the data. The paper presents extensive theoretical analysis and experimental results. Experiments demonstrate that the proposed technique is effective and can be successfully used for different types of privacy-preserving data mining applications.",
""
]
} |
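The multiplicative random projection of @cite_9 publishes data in a secondary feature space that approximately preserves pairwise distances and inner products while hiding the original attributes. A minimal sketch with arbitrarily chosen dimensions:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 50))        # 100 private records, 50 attributes

k = 20                                # reduced dimension, k < 50
R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(50, k))
X_pub = X @ R                         # published, perturbed data

# Pairwise Euclidean distances are approximately preserved, so
# distance-based mining (clustering, k-NN) still works on X_pub,
# but a model trained on X_pub expects inputs in the projected space.
print(np.linalg.norm(X[0] - X[1]), np.linalg.norm(X_pub[0] - X_pub[1]))
```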
1812.10193 | 2906169906 | Recent advances in computing have allowed for the possibility to collect large amounts of data on personal activities and private living spaces. Collecting and publishing a dataset in this environment can cause concerns over privacy of the individuals in the dataset. In this paper we examine these privacy concerns. In particular, given a target application, how can we mask sensitive attributes in the data while preserving the utility of the data in that target application. Our focus is on protecting attributes that are hidden and can be inferred from the data by machine learning algorithms. We propose a generic framework that (1) removes the knowledge useful for inferring sensitive information, but (2) preserves the knowledge relevant to a given target application. We use deep neural networks and generative adversarial networks (GAN) to create privacy-preserving perturbations. Our noise-generating network is compact and efficient for running on mobile devices. Through extensive experiments, we show that our method outperforms conventional methods in effectively hiding the sensitive attributes while guaranteeing high performance for the target application. Our results hold for new neural network architectures, not seen before during training and are suitable for training new classifiers. | In recent years, GANs have been successfully used to produce adversarial examples that can fool a classifier into predicting wrong classes @cite_1 . Some have formulated the problem of privacy protection as producing adversarial examples for an identity-revealing classifier @cite_45 @cite_27 . However, we demonstrate through experiments that the absence of our proposed function, @math , that maintains the utility of the data, leads to a weaker utility guarantee for the published data. | {
"cite_N": [
"@cite_27",
"@cite_45",
"@cite_1"
],
"mid": [
"2765886485",
"2805329444",
"2783555701"
],
"abstract": [
"Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals’ private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP’s performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model; and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics.",
"Adversarial attacks involve adding, small, often imperceptible, perturbations to inputs with the goal of getting a machine learning model to misclassifying them. While many different adversarial attack strategies have been proposed on image classification models, object detection pipelines have been much harder to break. In this paper, we propose a novel strategy to craft adversarial examples by solving a constrained optimization problem using an adversarial generator network. Our approach is fast and scalable, requiring only a forward pass through our trained generator network to craft an adversarial sample. Unlike in many attack strategies, we show that the same trained generator is capable of attacking new images without explicitly optimizing on them. We evaluate our attack on a trained Faster R-CNN face detector on the cropped 300-W face dataset where we manage to reduce the number of detected faces to @math of all originally detected faces. In a different experiment, also on 300-W, we demonstrate the robustness of our attack to a JPEG compression based defense typical JPEG compression level of @math reduces the effectiveness of our attack from only @math of detected faces to a modest @math .",
"Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research efforts. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, so as to potentially accelerate adversarial training as defenses. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have high attack success rate under state-of-the-art defenses compared to other attacks. Our attack has placed the first with 92.76 accuracy on a public MNIST black-box attack challenge."
]
} |
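The trade-off argued for here can be written as a generator loss with two opposing terms: keep a target classifier accurate on the perturbed data while confusing the sensitive-attribute classifier. A schematic PyTorch sketch; the networks, the bounded additive perturbation, and negating the private classifier's loss are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn as nn

noise_gen = nn.Sequential(nn.Linear(784, 784), nn.Tanh())  # perturbation network
f_target  = nn.Linear(784, 10)   # stands in for the target-application classifier
f_private = nn.Linear(784, 2)    # stands in for the sensitive-attribute classifier
ce = nn.CrossEntropyLoss()

def generator_loss(x, y_target, y_private, eps=0.1, lam=1.0):
    x_pub = x + eps * noise_gen(x)                # bounded additive perturbation
    utility = ce(f_target(x_pub), y_target)       # keep the target task working
    privacy = -ce(f_private(x_pub), y_private)    # confuse the private classifier
    return utility + lam * privacy
```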
1812.10193 | 2906169906 | Recent advances in computing have allowed for the possibility to collect large amounts of data on personal activities and private living spaces. Collecting and publishing a dataset in this environment can cause concerns over privacy of the individuals in the dataset. In this paper we examine these privacy concerns. In particular, given a target application, how can we mask sensitive attributes in the data while preserving the utility of the data in that target application. Our focus is on protecting attributes that are hidden and can be inferred from the data by machine learning algorithms. We propose a generic framework that (1) removes the knowledge useful for inferring sensitive information, but (2) preserves the knowledge relevant to a given target application. We use deep neural networks and generative adversarial networks (GAN) to create privacy-preserving perturbations. Our noise-generating network is compact and efficient for running on mobile devices. Through extensive experiments, we show that our method outperforms conventional methods in effectively hiding the sensitive attributes while guaranteeing high performance for the target application. Our results hold for new neural network architectures, not seen before during training and are suitable for training new classifiers. | Similar efforts have been made in the fairness literature to make sure that certain attributes (e.g., gender or race) in a dataset do not create unwanted bias that affects decision-making systems @cite_41 @cite_32 . There is a key distinction between our work and that of @cite_32 in how we train our model. To train their model on a face dataset, the authors give two sets of data to the network, where in the second set, the last name of the subjects is artificially placed on each image. By providing two sets of annotated and unannotated samples, they are letting the model learn what a safe-to-publish image looks like prior to training. In our method, the model relies only on the two classifiers, @math and @math , to learn how the published results should look. It is also unclear whether @math and @math in their work will reach an optimal state given that they are trained from scratch together with the generative model. | {
"cite_N": [
"@cite_41",
"@cite_32"
],
"mid": [
"2725155646",
"2247194987"
],
"abstract": [
"How can we learn a classifier that is \"fair\" for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive input attribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., the language or topic. Thus, it is not feasible to use a different classifier calibrated based on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training effects the resulting fairness properties. We find two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary's notion of fairness.",
"In practice, there are often explicit constraints on what representations or decisions are acceptable in an application of machine learning. For example it may be a legal requirement that a decision must not favour a particular group. Alternatively it can be that that representation of data must not have identifying information. We address these two related issues by learning flexible representations that minimize the capability of an adversarial critic. This adversary is trying to predict the relevant sensitive variable from the representation, and so minimizing the performance of the adversary ensures there is little or no information in the representation about the sensitive variable. We demonstrate this adversarial approach on two problems: making decisions free from discrimination and removing private information from images. We formulate the adversarial model as a minimax problem, and optimize that minimax objective using a stochastic gradient alternate min-max optimizer. We demonstrate the ability to provide discriminant free representations for standard test problems, and compare with previous state of the art methods for fairness, showing statistically significant improvement across most cases. The flexibility of this method is shown via a novel problem: removing annotations from images, from unaligned training examples of annotated and unannotated images, and with no a priori knowledge of the form of annotation provided to the model."
]
} |
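Both @cite_41 and @cite_32 instantiate the same minimax template: an encoder and task classifier trained against an adversary that predicts the sensitive attribute from the representation. In notation of our own, not taken from either paper:

```latex
\min_{E,\,C}\;\max_{A}\;\;
\mathbb{E}_{(x,\,y,\,s)}\Big[\mathcal{L}_{\mathrm{task}}\big(C(E(x)),\,y\big)
\;-\;\lambda\,\mathcal{L}_{\mathrm{adv}}\big(A(E(x)),\,s\big)\Big]
```

Here E is the encoder, C the task classifier, A the adversary, s the sensitive attribute, and λ trades task accuracy off against attribute leakage.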
1812.10411 | 2952984976 | Cross-lingual speech emotion recognition is an important task for practical applications. The performance of automatic speech emotion recognition systems degrades in cross-corpus scenarios, particularly in scenarios involving multiple languages or a previously unseen language such as Urdu for which limited or no data is available. In this study, we investigate the problem of cross-lingual emotion recognition for Urdu language and contribute URDU---the first ever spontaneous Urdu-language speech emotion database. Evaluations are performed using three different Western languages against Urdu and experimental results on different possible scenarios suggest various interesting aspects for designing more adaptive emotion recognition system for such limited languages. In results, selecting training instances of multiple languages can deliver comparable results to baseline and augmentation a fraction of testing language data while training can help to boost accuracy for speech emotion recognition. URDU data is publicly available for further research. | To study the universality of emotional cues among languages, @cite_6 studied speech emotion recognition for Mandarin vs. Western languages (i.e., German and Danish). They evaluated gender-specific speech emotion classification and achieved accuracy above the chance level. In @cite_9 , the authors used six emotional databases and evaluated different scenarios for cross-corpus speech emotion recognition. They were able to capture the limitations of current systems, as evidenced by the very poor performance on spontaneous or natural emotional corpora. In another interesting work, @cite_15 developed an emotion-profile-based ensemble SVM for emotion recognition in different unseen languages. The authors used the RML emotion database that covers six languages. However, this data is recorded under the same conditions and contains a very small number of utterances, five sentences, for each emotion. | {
"cite_N": [
"@cite_9",
"@cite_6",
"@cite_15"
],
"mid": [
"",
"2638999229",
"2344608732"
],
"abstract": [
"",
"An investigation on classification of emotional speech cross different language families is proposed in this paper. Datasets on three languages, CDESD in Mandarin, Emo-DB in German, and DES in Danish are analyzed. With 2-D classifications on arousal-appraisal space, better recognition performances are observed in arousal dimension than in appraisal dimension. The classification rates in cross language family test between CDESD and Emo-DB or DES are far higher than chance level, shows that there exist universal mechanisms in human voice emotion independent on languages. Results in test within the same language family between Emo-DB and DES are even better than in cross language family test with CDESD in Mandarin, shows the language and culture also influence the way of expression in speech. The best classification rate in the cross language family test is achieved on male speech samples as 71.62 , when CDESD dataset is used as training set and Emo-DB as testing set.",
"Over the last years, researchers have addressed emotional state identification because it is an important issue to achieve more natural speech interactive systems. There are several theories that explain emotional expressiveness as a result of natural evolution, as a social construction, or a combination of both. In this work, we propose a novel system to model each language independently, preserving the cultural properties. In a second stage, we use the concept of universality of emotions to map and predict emotions in never-seen languages. Features and classifiers widely tested for similar tasks were used to set the baselines. We developed a novel ensemble classifier to deal with multiple languages and tested it on never-seen languages. Furthermore, this ensemble uses the Emotion Profiles technique in order to map features from diverse languages in a more tractable space. The experiments were performed in a language-independent scheme. Results show that the proposed model improves the baseline accuracy, whereas its modular design allows the incorporation of a new language without having to train the whole system."
]
} |
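Cross-corpus evaluation of the kind performed in @cite_9 reduces to a leave-one-language-out loop. A minimal scikit-learn sketch over precomputed acoustic features; feature extraction (e.g., with a toolkit such as openSMILE) is assumed and omitted, and the RBF-SVM choice is illustrative:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def cross_corpus_eval(corpora):
    """corpora: dict mapping language -> (feature matrix, emotion labels).
    Train on all languages but one and test on the held-out language."""
    scores = {}
    for held_out, (X_te, y_te) in corpora.items():
        X_tr = np.vstack([X for l, (X, _) in corpora.items() if l != held_out])
        y_tr = np.concatenate([y for l, (_, y) in corpora.items() if l != held_out])
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        scores[held_out] = clf.fit(X_tr, y_tr).score(X_te, y_te)
    return scores
```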
1812.10328 | 2906046032 | In this work, we present a framework based on multi-stream convolutional neural networks (CNNs) for group activity recognition. Streams of CNNs are separately trained on different modalities and their predictions are fused at the end. Each stream has two branches to predict the group activity based on person and scene level representations. A new modality based on the human pose estimation is presented to add extra information to the model. We evaluate our method on the Volleyball and Collective Activity datasets. Experimental results show that the proposed framework is able to achieve state-of-the-art results when multiple or single frames are given as input to the model with 90.50 and 86.61 accuracy on Volleyball dataset, respectively, and 87.01 accuracy of multiple frames group activity on Collective Activity dataset. | There have been efforts to use probabilistic graphical models to tackle the group activity recognition problem. @cite_26 propose a graphical model with person-person and group-person factors and employ a two-stage inference mechanism to find the optimal graph structure and the best possible labels for the individual actions and collective activity. In @cite_15 , the authors explore the idea of tracking persons and predicting their activity as a group in a joint probabilistic framework. @cite_17 use a latent graph model for multi-target tracking, activity group localization and group activity recognition. | {
"cite_N": [
"@cite_15",
"@cite_26",
"@cite_17"
],
"mid": [
"100367037",
"2047499569",
"2320007652"
],
"abstract": [
"We present a coherent, discriminative framework for simultaneously tracking multiple people and estimating their collective activities. Instead of treating the two problems separately, our model is grounded in the intuition that a strong correlation exists between a person's motion, their activity, and the motion and activities of other nearby people. Instead of directly linking the solutions to these two problems, we introduce a hierarchy of activity types that creates a natural progression that leads from a specific person's motion to the activity of the group as a whole. Our model is capable of jointly tracking multiple people, recognizing individual activities (atomic activities), the interactions between pairs of people (interaction activities), and finally the behavior of groups of people (collective activities). We also propose an algorithm for solving this otherwise intractable joint inference problem by combining belief propagation with a version of the branch and bound algorithm equipped with integer programming. Experimental results on challenging video datasets demonstrate our theoretical claims and indicate that our model achieves the best collective activity classification results to date.",
"In this paper, we go beyond recognizing the actions of individuals and focus on group activities. This is motivated from the observation that human actions are rarely performed in isolation; the contextual information of what other people in the scene are doing provides a useful cue for understanding high-level activities. We propose a novel framework for recognizing group activities which jointly captures the group activity, the individual person actions, and the interactions among them. Two types of contextual information, group-person interaction and person-person interaction, are explored in a latent variable framework. In particular, we propose three different approaches to model the person-person interaction. One approach is to explore the structures of person-person interaction. Differently from most of the previous latent structured models, which assume a predefined structure for the hidden layer, e.g., a tree structure, we treat the structure of the hidden layer as a latent variable and implicitly infer it during learning and inference. The second approach explores person-person interaction in the feature level. We introduce a new feature representation called the action context (AC) descriptor. The AC descriptor encodes information about not only the action of an individual person in the video, but also the behavior of other people nearby. The third approach combines the above two. Our experimental results demonstrate the benefit of using contextual information for disambiguating group activities.",
"A latent graphical model integrating multi-target tracking, group discovery, and activity recognition is proposed.Performance of activity recognition improves when multi-target tracking and group clustering are incorporated.Group activities are better recognized based on the structured relations within the group and group-group compatibilities.Increasing the connectivity of different groups improves the overall performance.Incorporating activity information leads to robust group localization in the video. Beyond recognizing actions of individuals, activity group localization in videos aims to localize groups of persons in spatiotemporal spaces and recognize what activity the group performs. In this paper, we propose a latent graph model to simultaneously address the problem of multi-target tracking, group discovery and activity recognition. Our key insight is to exploit the contextual relations among people. We present them as a latent relational graph, which hierarchically encodes the association potentials between tracklets, intra-group interactions, correlations, and inter-group compatibilities. Our model is capable of propagating multiple evidences among different layers of the latent graph. Particularly, associated tracklets assist accurate group discovery, activity recognition can benefit from knowing the whole structured groups, and the group and activity information in turn provides strong cues for establishing coherent associations between tracklets. Experiments on five datasets demonstrate that our model achieves both significant improvements in activity group localization and competitive performance on activity recognition."
]
} |
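The graphical models above share a factored scoring function over individual actions, the group activity, and a (possibly latent) interaction structure; in @cite_26 the structure itself is inferred. A hedged summary in our own notation:

```latex
S(g,\,\mathbf{a},\,\mathbf{h}\mid\mathbf{x}) \;=\;
\sum_{i}\phi\big(x_i,\,a_i\big)
\;+\;\sum_{i}\psi\big(a_i,\,g\big)
\;+\;\sum_{(i,j)\in\mathbf{h}}\varphi\big(a_i,\,a_j\big)
```

Here a_i are individual action labels, g the group activity, and h the latent person-person interaction graph; two-stage inference alternates between the structure h and the labels (a, g).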
1812.10328 | 2906046032 | In this work, we present a framework based on multi-stream convolutional neural networks (CNNs) for group activity recognition. Streams of CNNs are separately trained on different modalities and their predictions are fused at the end. Each stream has two branches to predict the group activity based on person and scene level representations. A new modality based on the human pose estimation is presented to add extra information to the model. We evaluate our method on the Volleyball and Collective Activity datasets. Experimental results show that the proposed framework is able to achieve state-of-the-art results when multiple or single frames are given as input to the model with 90.50 and 86.61 accuracy on Volleyball dataset, respectively, and 87.01 accuracy of multiple frames group activity on Collective Activity dataset. | Recently, a series of works have studied group activity recognition using RNNs to model temporal information @cite_18 @cite_19 @cite_20 @cite_23 @cite_1 @cite_2 @cite_10 . @cite_18 employ a hierarchy of Long Short-Term Memory (LSTM) networks to predict individual actions and collective activity. In @cite_19 , individual actors and the collective event are modeled with bidirectional and unidirectional LSTMs, respectively, and higher importance is given to key actors in an event by means of attention pooling over persons. @cite_20 introduces person-centered features as the input of a hierarchical LSTM to predict futsal activity. In a different approach, @cite_23 proposes a three-level model consisting of person, group, and scene representations. In the person level, each person is modeled in the temporal domain with an LSTM. The final representations of these LSTMs are fed into other LSTMs at the group level, in which persons are divided into spatio-temporally consistent groups. The outputs of the group-level LSTMs are used to generate the final scene-level prediction. | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_10",
"@cite_20"
],
"mid": [
"2949887038",
"",
"2206427987",
"2736442062",
"2558630670",
"2778252923",
"2737024909"
],
"abstract": [
"In group activity recognition, the temporal dynamics of the whole activity can be inferred based on the dynamics of the individual people representing the activity. We build a deep model to capture these dynamics based on LSTM (long-short term memory) models. To make use of these ob- servations, we present a 2-stage deep temporal model for the group activity recognition problem. In our model, a LSTM model is designed to represent action dynamics of in- dividual people in a sequence and another LSTM model is designed to aggregate human-level information for whole activity understanding. We evaluate our model over two datasets: the collective activity dataset and a new volley- ball dataset. Experimental results demonstrate that our proposed model improves group activity recognition perfor- mance with compared to baseline methods.",
"",
"Multi-person event recognition is a challenging task, often with many people active in the scene but only a small subset contributing to an actual event. In this paper, we propose a model which learns to detect events in such videos while automatically \"attending\" to the people responsible for the event. Our model does not use explicit annotations regarding who or where those people are during training and testing. In particular, we track people in videos and use a recurrent neural network (RNN) to represent the track features. We learn time-varying attention weights to combine these features at each time-instant. The attended features are then processed using another RNN for event detection classification. Since most video datasets with multiple people are restricted to a small number of videos, we also collected a new basketball dataset comprising 257 basketball games with 14K event annotations corresponding to 11 event classes. Our model outperforms state-of-the-art methods for both event classification and detection on this new dataset. Additionally, we show that the attention mechanism is able to consistently localize the relevant players.",
"Modeling of high order interactional context, e.g., group interaction, lies in the central of collective group activity recognition. However, most of the previous activity recognition methods do not offer a flexible and scalable scheme to handle the high order context modeling problem. To explicitly address this fundamental bottleneck, we propose a recurrent interactional context modeling scheme based on LSTM network. By utilizing the information propagation aggregation capability of LSTM, the proposed scheme unifies the interactional feature modeling process for single person dynamics, intra-group (e.g., persons within a group) and inter-group(e.g., group to group)interactions. The proposed high order context modeling scheme produces more discriminative descriptive interactional features. It is very flexible to handle a varying number of input instances (e.g., different number of persons in a group or different number of groups) and linearly scalable to high order context modeling problem. Extensive experiments on two benchmark collective group activity datasets demonstrate the effectiveness of the proposed method.",
"We present a unified framework for understanding human social behaviors in raw image sequences. Our model jointly detects multiple individuals, infers their social actions, and estimates the collective actions with a single feed-forward pass through a neural network. We propose a single architecture that does not rely on external detection algorithms but rather is trained end-to-end to generate dense proposal maps that are refined via a novel inference scheme. The temporal consistency is handled via a person-level matching Recurrent Neural Network. The complete model takes as input a sequence of frames and outputs detections along with the estimates of individual actions and collective activities. We demonstrate state-of-the-art performance of our algorithm on multiple publicly available benchmarks.",
"Activity recognition has become an important function in many emerging computer vision applications e.g. automatic video surveillance system, human-computer interaction application, and video recommendation system, etc. In this paper, we propose a novel semantics based group activity recognition scheme, namely SBGAR, which achieves higher accuracy and efficiency than existing group activity recognition methods. SBGAR consists of two stages: in stage I, we use a LSTM model to generate a caption for each video frame; in stage II, another LSTM model is trained to predict the final activity categories based on these generated captions. We evaluate SBGAR using two well-known datasets: the Collective Activity Dataset and the Volleyball Dataset. Our experimental results show that SBGAR improves the group activity recognition accuracy with shorter computation time compared to the state-of-the-art methods.",
"We present a hierarchical recurrent network for understanding team sports activity in image and location sequences. In the hierarchical model, we integrate proposed multiple person-centered features over a temporal sequence based on LSTM's outputs. To achieve this scheme, we introduce the Keeping state in LSTM as one of externally controllable states, and extend the Hierarchical LSTMs to include mechanism for the integration. Experimental results demonstrate effectiveness of the proposed framework involving hierarchical LSTM and person-centered feature. In this study, we demonstrate improvement over the reference model. Specifically, by incorporating the person-centered feature with meta-information (e.g., location data) in our proposed late fusion framework, we also demonstrate increased discriminability of action categories and enhanced robustness against fluctuation in the number of observed players."
]
} |
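The two-stage design of @cite_18 (person-level LSTMs whose states are pooled into an event-level LSTM) can be sketched in a few lines of PyTorch; the feature dimension, max pooling, and classification head are assumptions:

```python
import torch
import torch.nn as nn

class HierarchicalLSTM(nn.Module):
    """Person LSTMs -> pooling over persons -> event LSTM -> group activity."""
    def __init__(self, feat_dim=512, hid=256, n_activities=8):
        super().__init__()
        self.person_lstm = nn.LSTM(feat_dim, hid, batch_first=True)
        self.event_lstm = nn.LSTM(hid, hid, batch_first=True)
        self.classifier = nn.Linear(hid, n_activities)

    def forward(self, x):                 # x: (batch, persons, time, feat_dim)
        b, p, t, d = x.shape
        h, _ = self.person_lstm(x.reshape(b * p, t, d))  # per-person dynamics
        h = h.reshape(b, p, t, -1).max(dim=1).values     # pool over persons
        h, _ = self.event_lstm(h)                        # scene-level dynamics
        return self.classifier(h[:, -1])                 # group-activity logits

logits = HierarchicalLSTM()(torch.randn(2, 12, 10, 512))  # -> shape (2, 8)
```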
1812.10328 | 2906046032 | In this work, we present a framework based on multi-stream convolutional neural networks (CNNs) for group activity recognition. Streams of CNNs are separately trained on different modalities and their predictions are fused at the end. Each stream has two branches to predict the group activity based on person and scene level representations. A new modality based on the human pose estimation is presented to add extra information to the model. We evaluate our method on the Volleyball and Collective Activity datasets. Experimental results show that the proposed framework is able to achieve state-of-the-art results when multiple or single frames are given as input to the model with 90.50 and 86.61 accuracy on Volleyball dataset, respectively, and 87.01 accuracy of multiple frames group activity on Collective Activity dataset. | @cite_1 propose a confidence-energy recurrent network in which a novel energy layer is used instead of the common softmax layer, and uncertain predictions are avoided by computing p-values at the same layer. @cite_2 detect persons and classify group activity in an end-to-end framework. To do so, a fully convolutional network provides the initial bounding boxes for persons, which are then collectively refined in a Markov Random Field. Similar to previous works, an RNN predicts the group activity. Most recently, the authors in @cite_10 decide upon the group activity based on captions automatically generated for the input video as semantic information. | {
"cite_N": [
"@cite_10",
"@cite_1",
"@cite_2"
],
"mid": [
"2778252923",
"",
"2558630670"
],
"abstract": [
"Activity recognition has become an important function in many emerging computer vision applications e.g. automatic video surveillance system, human-computer interaction application, and video recommendation system, etc. In this paper, we propose a novel semantics based group activity recognition scheme, namely SBGAR, which achieves higher accuracy and efficiency than existing group activity recognition methods. SBGAR consists of two stages: in stage I, we use a LSTM model to generate a caption for each video frame; in stage II, another LSTM model is trained to predict the final activity categories based on these generated captions. We evaluate SBGAR using two well-known datasets: the Collective Activity Dataset and the Volleyball Dataset. Our experimental results show that SBGAR improves the group activity recognition accuracy with shorter computation time compared to the state-of-the-art methods.",
"",
"We present a unified framework for understanding human social behaviors in raw image sequences. Our model jointly detects multiple individuals, infers their social actions, and estimates the collective actions with a single feed-forward pass through a neural network. We propose a single architecture that does not rely on external detection algorithms but rather is trained end-to-end to generate dense proposal maps that are refined via a novel inference scheme. The temporal consistency is handled via a person-level matching Recurrent Neural Network. The complete model takes as input a sequence of frames and outputs detections along with the estimates of individual actions and collective activities. We demonstrate state-of-the-art performance of our algorithm on multiple publicly available benchmarks."
]
} |
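A minimal PyTorch sketch of the two-stage, caption-based idea in @cite_10 (SBGAR): stage I decodes a caption per frame, stage II classifies the group activity from the generated captions. All dimensions, the greedy decoding, and the module layout are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class CaptionStage(nn.Module):
    """Stage I (sketch): map a per-frame CNN feature to caption tokens (greedy decoding)."""
    def __init__(self, feat_dim=512, vocab=1000, hid=256, max_len=8):
        super().__init__()
        self.init = nn.Linear(feat_dim, hid)
        self.emb = nn.Embedding(vocab, hid)
        self.cell = nn.LSTMCell(hid, hid)
        self.out = nn.Linear(hid, vocab)
        self.max_len = max_len

    def forward(self, frame_feat):                       # (B, feat_dim)
        h = torch.tanh(self.init(frame_feat))
        c = torch.zeros_like(h)
        tok = torch.zeros(frame_feat.size(0), dtype=torch.long)  # assumed <bos> id = 0
        words = []
        for _ in range(self.max_len):                    # greedy, word by word
            h, c = self.cell(self.emb(tok), (h, c))
            tok = self.out(h).argmax(dim=-1)
            words.append(tok)
        return torch.stack(words, dim=1)                 # (B, max_len) caption token ids

class ActivityStage(nn.Module):
    """Stage II (sketch): predict the group activity from the generated captions."""
    def __init__(self, vocab=1000, hid=256, n_classes=8):
        super().__init__()
        self.emb = nn.Embedding(vocab, hid)
        self.lstm = nn.LSTM(hid, hid, batch_first=True)
        self.cls = nn.Linear(hid, n_classes)

    def forward(self, captions):                         # (B, tokens over all frames)
        _, (h, _) = self.lstm(self.emb(captions))
        return self.cls(h[-1])                           # group-activity logits

# Toy run: 2 videos x 3 frames, 512-d frame features.
feats = torch.randn(2 * 3, 512)
caps = CaptionStage()(feats).view(2, -1)
print(ActivityStage()(caps).shape)  # torch.Size([2, 8])
```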
1812.10328 | 2906046032 | In this work, we present a framework based on multi-stream convolutional neural networks (CNNs) for group activity recognition. Streams of CNNs are separately trained on different modalities and their predictions are fused at the end. Each stream has two branches to predict the group activity based on person and scene level representations. A new modality based on the human pose estimation is presented to add extra information to the model. We evaluate our method on the Volleyball and Collective Activity datasets. Experimental results show that the proposed framework is able to achieve state-of-the-art results when multiple or single frames are given as input to the model with 90.50 and 86.61 accuracy on Volleyball dataset, respectively, and 87.01 accuracy of multiple frames group activity on Collective Activity dataset. | In @cite_5 , the Temporal Segment Network is proposed to model long-range temporal information. Each video is divided into segments, from each of which a snippet of frames is sampled. A differentiable segmental consensus over the CNNs applied to the different segments is performed, which makes end-to-end joint training of the segment CNNs possible (a minimal sketch of this consensus follows this record). They train their model with RGB, optical flow, and warped optical flow modalities separately and combine the class scores of the multiple streams at the end. The study in @cite_21 of the effect of pretraining different architectures on large-scale video classification datasets before training on smaller datasets demonstrates the positive impact of this step on the action recognition task. Finally, it is worth mentioning that a common aspect of all the above state-of-the-art action recognition methods is the use of multiple input streams. | {
"cite_N": [
"@cite_5",
"@cite_21"
],
"mid": [
"2507009361",
"2619082050"
],
"abstract": [
"Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition. which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-the-of-art performance on the datasets of HMDB51 ( ( 69.4 , )) and UCF101 ( ( 94.2 , )). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices (Models and code at https: github.com yjxiong temporal-segment-networks).",
"The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9 on HMDB-51 and 98.0 on UCF-101."
]
} |
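A minimal PyTorch sketch of the segmental-consensus idea in @cite_5 (TSN), assuming average consensus over class scores, one sampled frame per segment, and a toy backbone standing in for a deep 2D CNN; the two-stream fusion at the end mirrors the per-modality training described above:

```python
import torch
import torch.nn as nn

class TSNSketch(nn.Module):
    """Toy segmental consensus: K segments, one sampled frame each, shared backbone."""
    def __init__(self, n_segments=3, n_classes=101):
        super().__init__()
        self.n_segments = n_segments
        # Small stand-in for the deep 2D CNN applied to every snippet.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, n_classes),
        )

    def forward(self, video):                 # (B, T, 3, H, W), one modality
        B, T = video.shape[:2]
        seg_len = T // self.n_segments
        # Sample one frame uniformly at random from each segment.
        idx = torch.stack([torch.randint(k * seg_len, (k + 1) * seg_len, (B,))
                           for k in range(self.n_segments)], dim=1)
        snippets = video[torch.arange(B)[:, None], idx]   # (B, K, 3, H, W)
        scores = self.backbone(snippets.flatten(0, 1))    # shared weights across snippets
        scores = scores.view(B, self.n_segments, -1)
        # Average consensus is differentiable, enabling end-to-end joint training.
        return scores.mean(dim=1)

rgb = TSNSketch()(torch.randn(2, 12, 3, 64, 64))
flow = TSNSketch()(torch.randn(2, 12, 3, 64, 64))  # a second stream, e.g. optical flow
print((0.5 * rgb + 0.5 * flow).shape)              # late score fusion: torch.Size([2, 101])
```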
1812.10437 | 2906221980 | A central machine is interested in estimating the underlying structure of a sparse Gaussian Graphical Model (GGM) from datasets distributed across multiple local machines. The local machines can communicate with the central machine through a wireless multiple access channel. In this paper, we are interested in designing effective strategies where reliable learning is feasible under power and bandwidth limitations. Two approaches are proposed: Signs and Uncoded methods. In Signs method, the local machines quantize their data into binary vectors and an optimal channel coding scheme is used to reliably send the vectors to the central machine where the structure is learned from the received data. In Uncoded method, data symbols are scaled and transmitted through the channel. The central machine uses the received noisy symbols to recover the structure. Theoretical results show that both methods can recover the structure with high probability for large enough sample size. Experimental results indicate the superiority of Signs method over Uncoded method under several circumstances. | The Chow-Liu algorithm obtains the maximum likelihood estimate of the structure if the underlying graph is a tree @cite_1 . Although this algorithm is formulated for discrete random variables, it can be used for tree-structured GGMs in a similar manner @cite_14 . @cite_13 proposed a distributed version of the Chow-Liu algorithm and proved that it can recover the underlying tree structure with high probability. The authors of @cite_16 and @cite_9 provided an analysis of the error exponent of the Chow-Liu algorithm on tree-structured GGMs (a small worked sketch of the Chow-Liu idea follows this record). | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_1",
"@cite_16",
"@cite_13"
],
"mid": [
"",
"2124988974",
"2163166770",
"2117245428",
"2891030779"
],
"abstract": [
"",
"The problem of maximum-likelihood (ML) estimation of discrete tree-structured distributions is considered. Chow and Liu established that ML-estimation reduces to the construction of a maximum-weight spanning tree using the empirical mutual information quantities as the edge weights. Using the theory of large-deviations, we analyze the exponent associated with the error probability of the event that the ML-estimate of the Markov tree structure differs from the true tree structure, given a set of independently drawn samples. By exploiting the fact that the output of ML-estimation is a tree, we establish that the error exponent is equal to the exponential rate of decay of a single dominant crossover event. We prove that in this dominant crossover event, a non-neighbor node pair replaces a true edge of the distribution that is along the path of edges in the true tree graph connecting the nodes in the non-neighbor pair. Using ideas from Euclidean information theory, we then analyze the scenario of ML-estimation in the very noisy learning regime and show that the error exponent can be approximated as a ratio, which is interpreted as the signal-to-noise ratio (SNR) for learning tree distributions. We show via numerical experiments that in this regime, our SNR approximation is accurate.",
"A method is presented to approximate optimally an n -dimensional discrete probability distribution by a product of second-order distributions, or the distribution of the first-order tree dependence. The problem is to find an optimum set of n - 1 first order dependence relationship among the n variables. It is shown that the procedure derived in this paper yields an approximation of a minimum difference in information. It is further shown that when this procedure is applied to empirical observations from an unknown distribution of tree dependence, the procedure is the maximum-likelihood estimate of the distribution.",
"The problem of learning forest-structured discrete graphical models from i.i.d. samples is considered. An algorithm based on pruning of the Chow-Liu tree through adaptive thresholding is proposed. It is shown that this algorithm is both structurally consistent and risk consistent and the error probability of structure learning decays faster than any polynomial in the number of samples under fixed model size. For the high-dimensional scenario where the size of the model d and the number of edges k scale with the number of samples n, sufficient conditions on (n,d,k) are given for the algorithm to satisfy structural and risk consistencies. In addition, the extremal structures for learning are identified; we prove that the independent (resp., tree) model is the hardest (resp., easiest) to learn using the proposed algorithm in terms of error rates for structure learning.",
"In this paper, learning of tree-structured Gaussian graphical models from distributed data is addressed. In our model, samples are stored in a set of distributed machines where each machine has access to only a subset of features. A central machine is then responsible for learning the structure based on received messages from the other nodes. We present a set of communication-efficient strategies, which are theoretically proved to convey sufficient information for reliable learning of the structure. In particular, our analyses show that even if each machine sends only the signs of its local data samples to the central node, the tree structure can still be recovered with high accuracy. Our simulation results on both synthetic and real-world datasets show that our strategies achieve a desired accuracy in inferring the underlying structure while spending a small budget on communication."
]
} |
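A small, self-contained sketch of the Chow-Liu idea for the Gaussian case: the mutual information between two jointly Gaussian variables with correlation rho is -0.5 * log(1 - rho^2), so the maximum-likelihood tree is the maximum-weight spanning tree under empirical pairwise MI. The Prim-style construction and the toy chain data below are illustrative simplifications:

```python
import numpy as np

def chow_liu_tree(X):
    """Edges of the max-weight spanning tree under empirical pairwise MI.

    X: (n_samples, d) data matrix, assumed jointly Gaussian so that the
    pairwise mutual information is -0.5 * log(1 - rho_ij**2).
    """
    d = X.shape[1]
    rho = np.corrcoef(X, rowvar=False)
    mi = -0.5 * np.log(np.clip(1.0 - rho ** 2, 1e-12, None))
    np.fill_diagonal(mi, -np.inf)          # no self-edges

    in_tree = {0}                          # Prim's algorithm for a maximum spanning tree
    edges = []
    while len(in_tree) < d:
        best = max(((i, j) for i in in_tree for j in range(d) if j not in in_tree),
                   key=lambda e: mi[e])
        edges.append(best)
        in_tree.add(best[1])
    return edges

# Toy chain X0 -> X1 -> X2: the recovered tree should be (0, 1), (1, 2).
rng = np.random.default_rng(0)
x0 = rng.normal(size=5000)
x1 = 0.9 * x0 + 0.4 * rng.normal(size=5000)
x2 = 0.9 * x1 + 0.4 * rng.normal(size=5000)
print(chow_liu_tree(np.column_stack([x0, x1, x2])))
```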
1812.10479 | 2907386696 | Stock market volatility forecasting is a task relevant to assessing market risk. We investigate the interaction between news and prices for the one-day-ahead volatility prediction using state-of-the-art deep learning approaches. The proposed models are trained either end-to-end or using sentence encoders transferred from other tasks. We evaluate a broad range of stock market sectors, namely Consumer Staples, Energy, Utilities, Healthcare, and Financials. Our experimental results show that adding news improves the volatility forecasting as compared to the mainstream models that rely only on price data. In particular, our model outperforms the widely-recognized GARCH(1,1) model for all sectors in terms of coefficient of determination @math , @math and @math , achieving the best performance when training from both news and price data. | The works described above ( @cite_7 @cite_42 @cite_32 @cite_30 ) target long-horizon volatility predictions (one year, or quarterly in @cite_30 ). In particular, @cite_30 and @cite_32 use market data (price) features along with the textual representation of the 10-K reports. These existing works that employ multi-modal learning @cite_45 are based on a late fusion approach: text and price features are trained independently, and a meta model is used in a later stage to decide how to weight the contribution of each mode. For example, @cite_30 stacks ensembles to take into account the price and text predictions (a minimal stacking sketch follows this record). In contrast, our end-to-end trained model can learn the joint distribution of both price and text. | {
"cite_N": [
"@cite_30",
"@cite_7",
"@cite_42",
"@cite_32",
"@cite_45"
],
"mid": [
"2592713985",
"2251998780",
"2251697431",
"2251162459",
"2951127645"
],
"abstract": [
"Volatility prediction--an essential concept in financial markets--has recently been addressed using sentiment analysis methods. We investigate the sentiment of annual disclosures of companies in stock markets to forecast volatility. We specifically explore the use of recent Information Retrieval (IR) term weighting models that are effectively extended by related terms using word embeddings. In parallel to textual information, factual market data have been widely used as the mainstream approach to forecast market risk. We therefore study different fusion methods to combine text and market data resources. Our word embedding-based approach significantly outperforms state-of-the-art methods. In addition, we investigate the characteristics of the reports of the companies in different financial sectors.",
"This paper attempts to identify the importance of sentiment words in financial reports on financial risk. By using a financespecific sentiment lexicon, we apply regression and ranking techniques to analyze the relations between sentiment words and financial risk. The experimental results show that, based on the bag-of-words model, models trained on sentiment words only result in comparable performance to those on origin texts, which confirms the importance of financial sentiment words on risk prediction. Furthermore, the learned models suggest strong correlations between financial sentiment words and risk of companies. As a result, these findings are of great value for providing us more insight and understanding into the impact of financial sentiment words in financial reports.",
"This paper proposes to apply the continuous vector representations of words for discovering keywords from a financial sentiment lexicon. In order to capture more keywords, we also incorporate syntactic information into the Continuous Bag-ofWords (CBOW) model. Experimental results on a task of financial risk prediction using the discovered keywords demonstrate that the proposed approach is good at predicting financial risk.",
"In November 2014, the European Central Bank (ECB) started to directly supervise the largest banks in the Eurozone via the Single Supervisory Mechanism (SSM). While supervisory risk assessments are usually based on quantitative data and surveys, this work explores whether sentiment analysis is capable of measuring a bank’s attitude and opinions towards risk by analyzing text data. For realizing this study, a collection consisting of more than 500 CEO letters and outlook sections extracted from bank annual reports is built up. Based on these data, two distinct experiments are conducted. The evaluations find promising opportunities, but also limitations for risk sentiment analysis in banking supervision. At the level of individual banks, predictions are relatively inaccurate. In contrast, the analysis of aggregated figures revealed strong and significant correlations between uncertainty or negativity in textual disclosures and the quantitative risk indicator’s future evolution. Risk sentiment analysis should therefore rather be used for macroprudential analyses than for assessments of individual banks.",
"Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research."
]
} |
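A minimal sketch of the late-fusion / stacking setup that the contrast above refers to: two independently trained base models (one on text features, one on price features) and a meta model fitted on their held-out predictions. The feature shapes, synthetic target, and use of ridge regression are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 400
text_feats = rng.normal(size=(n, 50))    # e.g., averaged news embeddings (assumed)
price_feats = rng.normal(size=(n, 10))   # e.g., lagged returns / realized vol (assumed)
y = (0.3 * text_feats[:, 0] + 0.7 * price_feats[:, 0]
     + 0.1 * rng.normal(size=n))         # synthetic next-day volatility target

tr, ho = slice(0, 200), slice(200, 300)  # base-model split and meta (held-out) split
text_model = Ridge().fit(text_feats[tr], y[tr])    # mode 1, trained independently
price_model = Ridge().fit(price_feats[tr], y[tr])  # mode 2, trained independently

# The meta model learns how to weight the two modes from held-out predictions.
stack = np.column_stack([text_model.predict(text_feats[ho]),
                         price_model.predict(price_feats[ho])])
meta = Ridge().fit(stack, y[ho])

test = slice(300, n)
stack_test = np.column_stack([text_model.predict(text_feats[test]),
                              price_model.predict(price_feats[test])])
print("meta-model weights per mode:", meta.coef_)
print("fused prediction shape:", meta.predict(stack_test).shape)
```

An end-to-end model, by contrast, would backpropagate one loss through both feature extractors jointly instead of freezing them before fusion.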
1812.10156 | 2905889633 | We prove that the binary classifiers of bit strings generated by random wide deep neural networks are biased towards simple functions. The simplicity is captured by the following two properties. For any given input bit string, the average Hamming distance of the closest input bit string with a different classification is at least @math , where @math is the length of the string. Moreover, if the bits of the initial string are flipped randomly, the average number of flips required to change the classification grows linearly with @math . On the contrary, for a uniformly random binary classifier, the average Hamming distance of the closest input bit string with a different classification is one, and the average number of random flips required to change the classification is two. These results are confirmed by numerical experiments on deep neural networks with two hidden layers, and settle the conjecture stating that random deep neural networks are biased towards simple functions. The conjecture that random deep neural networks are biased towards simple functions was proposed and numerically explored in [Valle-Perez et al., arXiv:1805.08522] to explain the unreasonably good generalization properties of deep learning algorithms. By providing a precise characterization of the form of this bias towards simplicity, our results open the way to a rigorous proof of the generalization properties of deep learning algorithms in real-world scenarios. | The properties of deep neural networks with randomly initialized weights have been the subject of intensive studies @cite_19 @cite_60 @cite_7 @cite_2 @cite_23 @cite_25 @cite_4 @cite_16 (a small numerical probe of these properties follows this record). | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_60",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_16",
"@cite_25"
],
"mid": [
"2556364298",
"2766678531",
"",
"2433379750",
"2885059312",
"2785626633",
"2789210533",
"2423689290"
],
"abstract": [
"We study the behavior of untrained neural networks whose weights and biases are randomly distributed using mean field theory. We show the existence of depth scales that naturally limit the maximum depth of signal propagation through these random networks. Our main practical result is to show that random networks may be trained precisely when information can travel through them. Thus, the depth scales that we identify provide bounds on how deep a network may be trained for a specific choice of hyperparameters. As a corollary to this, we argue that in networks at the edge of chaos, one of these depth scales diverges. Thus arbitrarily deep networks may be trained only sufficiently close to criticality. We show that the presence of dropout destroys the order-to-chaos critical point and therefore strongly limits the maximum trainable depth for random networks. Finally, we develop a mean field theory for backpropagation and we show that the ordered and chaotic phases correspond to regions of vanishing and exploding gradient respectively.",
"A deep fully-connected neural network with an i.i.d. prior over its parameters is equivalent to a Gaussian process (GP) in the limit of infinite network width. This correspondence enables exact Bayesian inference for neural networks on regression tasks by means of straightforward matrix computations. For single hidden-layer networks, the covariance function of this GP has long been known. Recently, kernel functions for multi-layer random neural networks have been developed, but only outside of a Bayesian framework. As such, previous work has not identified the correspondence between using these kernels as the covariance function for a GP and performing fully Bayesian prediction with a deep neural network. In this work, we derive this correspondence and develop a computationally efficient pipeline to compute the covariance functions. We then use the resulting GP to perform Bayesian inference for deep neural networks on MNIST and CIFAR-10. We find that the GP-based predictions are competitive and can outperform neural networks trained with stochastic gradient descent. We observe that the trained neural network accuracy approaches that of the corresponding GP-based computation with increasing layer width, and that the GP uncertainty is strongly correlated with prediction error. We connect our observations to the recent development of signal propagation in random neural networks.",
"",
"We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute. Our approach is based on an interrelated set of measures of expressivity, unified by the novel notion of trajectory length, which measures how the output of a network changes as the input sweeps along a one-dimensional path. Our findings can be summarized as follows: (1) The complexity of the computed function grows exponentially with depth. (2) All weights are not equal: trained networks are more sensitive to their lower (initial) layer weights. (3) Regularizing on trajectory length (trajectory regularization) is a simpler alternative to batch normalization, with the same performance.",
"We show that the output of a (residual) convolutional neural network (CNN) with an appropriate prior over the weights and biases is a Gaussian process (GP) in the limit of infinitely many convolutional filters, extending similar results for dense networks. For a CNN, the equivalent kernel can be computed exactly and, unlike \"deep kernels\", has very few parameters: only the hyperparameters of the original CNN. Further, we show that this kernel has two properties that allow it to be computed efficiently; the cost of evaluating the kernel for a pair of images is similar to a single forward pass through the original CNN with only one filter per layer. The kernel equivalent to a 32-layer ResNet obtains 0.84 classification error on MNIST, a new record for GPs with a comparable number of parameters.",
"Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties. In this paper, we study the relationship between random, wide, fully connected, feedforward networks with more than one hidden layer and Gaussian processes with a recursive kernel definition. We show that, under broad conditions, as we make the architecture increasingly wide, the implied random function converges in distribution to a Gaussian process, formalising and extending existing results by Neal (1996) to deep networks. To evaluate convergence rates empirically, we use maximum mean discrepancy. We then compare finite Bayesian deep networks from the literature to Gaussian processes in terms of the key predictive quantities of interest, finding that in some cases the agreement can be very close. We discuss the desirability of Gaussian process behaviour and review non-Gaussian alternative models from the literature.",
"Recent work has shown that tight concentration of the entire spectrum of singular values of a deep network's input-output Jacobian around one at initialization can speed up learning by orders of magnitude. Therefore, to guide important design choices, it is important to build a full theoretical understanding of the spectra of Jacobians at initialization. To this end, we leverage powerful tools from free probability theory to provide a detailed analytic understanding of how a deep network's Jacobian spectrum depends on various hyperparameters including the nonlinearity, the weight and bias distributions, and the depth. For a variety of nonlinearities, our work reveals the emergence of new universal limiting spectral distributions that remain concentrated around one even as the depth goes to infinity.",
"We combine Riemannian geometry with the mean field theory of high dimensional chaos to study the nature of signal propagation in generic, deep neural networks with random weights. Our results reveal an order-to-chaos expressivity phase transition, with networks in the chaotic phase computing nonlinear functions whose global curvature grows exponentially with depth but not width. We prove this generic class of deep random functions cannot be efficiently computed by any shallow network, going beyond prior work restricted to the analysis of single functions. Moreover, we formalize and quantitatively demonstrate the long conjectured idea that deep networks can disentangle highly curved manifolds in input space into flat manifolds in hidden space. Our theoretical analysis of the expressive power of deep networks broadly applies to arbitrary nonlinearities, and provides a quantitative underpinning for previously abstract notions about the geometry of deep functions."
]
} |
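The claims in the abstract above can be probed numerically in a few lines: build a random wide network with two hidden layers, classify bit strings by the sign of the output, and count how many random bit flips are needed to change the classification. The widths, Gaussian weight scalings, and ReLU nonlinearity below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, width = 20, 512                    # input length and hidden width (assumed)

# Random 2-hidden-layer ReLU network with i.i.d. Gaussian weights (He-style scaling assumed).
W1 = rng.normal(0, np.sqrt(2 / n), (width, n))
W2 = rng.normal(0, np.sqrt(2 / width), (width, width))
w3 = rng.normal(0, np.sqrt(2 / width), width)

def classify(x):                      # x in {-1, +1}^n; binary label = sign of the output
    h = np.maximum(W1 @ x, 0)
    h = np.maximum(W2 @ h, 0)
    return np.sign(w3 @ h)

def flips_to_change(x):
    """Flip uniformly random bits (without replacement) until the label changes."""
    y0, order = classify(x), rng.permutation(n)
    for k, i in enumerate(order, start=1):
        x = x.copy(); x[i] *= -1      # cumulative flips
        if classify(x) != y0:
            return k
    return n                          # label never changed within n flips

samples = [flips_to_change(rng.choice([-1.0, 1.0], size=n)) for _ in range(200)]
print("average number of random flips to change the class:", np.mean(samples))
# Per the abstract: a uniformly random classifier would average about 2 flips,
# while random deep networks are predicted to need a number growing linearly with n.
```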
1812.10156 | 2905889633 | We prove that the binary classifiers of bit strings generated by random wide deep neural networks are biased towards simple functions. The simplicity is captured by the following two properties. For any given input bit string, the average Hamming distance of the closest input bit string with a different classification is at least @math , where @math is the length of the string. Moreover, if the bits of the initial string are flipped randomly, the average number of flips required to change the classification grows linearly with @math . On the contrary, for a uniformly random binary classifier, the average Hamming distance of the closest input bit string with a different classification is one, and the average number of random flips required to change the classification is two. These results are confirmed by numerical experiments on deep neural networks with two hidden layers, and settle the conjecture stating that random deep neural networks are biased towards simple functions. The conjecture that random deep neural networks are biased towards simple functions was proposed and numerically explored in [Valle-Perez et al., arXiv:1805.08522] to explain the unreasonably good generalization properties of deep learning algorithms. By providing a precise characterization of the form of this bias towards simplicity, our results open the way to a rigorous proof of the generalization properties of deep learning algorithms in real-world scenarios. | The relation between generalization and simplicity was first conjectured in 2006 in @cite_10 , where the authors define a complexity measure for Boolean functions, called generalization complexity, and provide numerical evidence that this measure is correlated with the generalization error (a toy version of such a measure follows this record). @cite_48 explore the generalization properties of deep neural networks trained on partially random data, and find that the generalization error correlates with the amount of randomness in the data. Based on this result, @cite_3 and @cite_46 proposed that the stochastic gradient descent employed to train the network is more likely to find the simpler functions that match the training set rather than the more complex ones. However, further studies @cite_15 suggested that stochastic gradient descent is not sufficient to justify the observed generalization. | {
"cite_N": [
"@cite_15",
"@cite_48",
"@cite_3",
"@cite_46",
"@cite_10"
],
"mid": [
"2731468224",
"2950220847",
"2682189153",
"2766965791",
"2122613489"
],
"abstract": [
"It is widely observed that deep learning models with learned parameters generalize well, even with much more model parameters than the number of training samples. We systematically investigate the underlying reasons why deep neural networks often generalize well, and reveal the difference between the minima (with the same training error) that generalize well and those they don't. We show that it is the characteristics the landscape of the loss function that explains the good generalization capability. For the landscape of loss function for deep networks, the volume of basin of attraction of good minima dominates over that of poor minima, which guarantees optimization methods with random initialization to converge to good minima. We theoretically justify our findings through analyzing 2-layer neural networks; and show that the low-complexity solutions have a small norm of Hessian matrix with respect to model parameters. For deeper networks, extensive numerical evidence helps to support our arguments.",
"Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.",
"We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness. While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first. In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise vs. real data. We also demonstrate that for appropriately tuned explicit regularization (e.g., dropout) we can degrade DNN training performance on noise datasets without compromising generalization on real data. Our analysis suggests that the notions of effective capacity which are dataset independent are unlikely to explain the generalization performance of deep networks when trained with gradient based methods because training data itself plays an important role in determining the degree of memorization.",
"We examine gradient descent on unregularized logistic regression problems, with homogeneous linear predictors on linearly separable datasets. We show the predictor converges to the direction of the max-margin (hard margin SVM) solution. The result also generalizes to other monotone decreasing loss functions with an infimum at infinity, to multi-class problems, and to training a weight layer in a deep network in a certain restricted setting. Furthermore, we show this convergence is very slow, and only logarithmic in the convergence of the loss itself. This can help explain the benefit of continuing to optimize the logistic or cross-entropy loss even after the training error is zero and the training loss is extremely small, and, as we show, even if the validation loss increases. Our methodology can also aid in understanding implicit regularization n more complex models and with other optimization methods.",
"We introduce a measure for the complexity of Boolean functions that is highly correlated with the generalization ability that could be obtained when the functions are implemented in feedforward neural networks. The measure, based on the calculation of the number of neighbour examples that differ in their output value, can be simply computed from the definition of the functions, independently of their implementation. Numerical simulations performed on different architectures show a good agreement between the estimated complexity and the generalization ability and training times obtained. The proposed measure could help as a useful tool for carrying a systematic study of the computational capabilities of network architectures by classifying in an easy and reliable way the Boolean functions. Also, based on the fact that the average generalization ability computed over the whole set of Boolean functions is 0.5, a very complex set of functions was found for which the generalization ability is lower than for random functions. r 2006 Elsevier B.V. All rights reserved."
]
} |
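A simplified sketch of the kind of complexity measure described in @cite_10: for a Boolean function given as a truth table, count the fraction of Hamming-neighbor input pairs whose outputs differ. The actual generalization-complexity measure is defined more carefully in the cited paper; this first-order variant is an assumption for illustration:

```python
import itertools
import numpy as np

def boundary_fraction(f, n):
    """Fraction of Hamming-neighbor pairs (x, x with bit i flipped) where f differs."""
    diff = total = 0
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        for i in range(n):
            y = x.copy(); y[i] ^= 1
            total += 1
            diff += f(x) != f(y)
    return diff / total

n = 8
parity = lambda x: int(x.sum() % 2)          # maximally sensitive: every flip changes the output
majority = lambda x: int(x.sum() > n // 2)   # a "simple" function: most flips change nothing
rng = np.random.default_rng(0)
table = rng.integers(0, 2, size=2 ** n)      # a uniformly random Boolean function
rand_f = lambda x: int(table[int("".join(map(str, x)), 2)])

for name, f in [("parity", parity), ("majority", majority), ("random", rand_f)]:
    print(f"{name:8s} boundary fraction: {boundary_fraction(f, n):.3f}")
# Random functions sit near 0.5; functions favored by random deep networks
# are expected to have a much smaller boundary fraction.
```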
1812.10156 | 2905889633 | We prove that the binary classifiers of bit strings generated by random wide deep neural networks are biased towards simple functions. The simplicity is captured by the following two properties. For any given input bit string, the average Hamming distance of the closest input bit string with a different classification is at least @math , where @math is the length of the string. Moreover, if the bits of the initial string are flipped randomly, the average number of flips required to change the classification grows linearly with @math . On the contrary, for a uniformly random binary classifier, the average Hamming distance of the closest input bit string with a different classification is one, and the average number of random flips required to change the classification is two. These results are confirmed by numerical experiments on deep neural networks with two hidden layers, and settle the conjecture stating that random deep neural networks are biased towards simple functions. The conjecture that random deep neural networks are biased towards simple functions was proposed and numerically explored in [Valle-Perez et al., arXiv:1805.08522] to explain the unreasonably good generalization properties of deep learning algorithms. By providing a precise characterization of the form of this bias towards simplicity, our results open the way to a rigorous proof of the generalization properties of deep learning algorithms in real-world scenarios. | The idea of a bias towards simple patterns has been applied to learning theory through the concepts of minimum description length @cite_39 , Blumer algorithms @cite_41 @cite_29 and universal induction @cite_43 . @cite_21 proved that the generalization error grows with the Kolmogorov complexity of the target function if the learning algorithm returns the function that has the lowest Kolmogorov complexity among all the functions compatible with the training set. The relation between generalization and complexity has been further investigated in @cite_61 @cite_53 . The complexity of the functions generated by deep neural networks has also been studied from the perspective of the number of linear regions @cite_55 @cite_62 @cite_30 (a toy region-counting sketch follows this record) and of the curvature of the classification boundaries @cite_25 . We note that the results proved here — viz., that the functions generated by random deep networks are insensitive to large changes in their inputs — imply that such functions should be simple with respect to all the measures of complexity above, but the converse is not true: not all simple functions are likely to be generated by random deep networks. | {
"cite_N": [
"@cite_61",
"@cite_30",
"@cite_62",
"@cite_41",
"@cite_29",
"@cite_21",
"@cite_53",
"@cite_55",
"@cite_39",
"@cite_43",
"@cite_25"
],
"mid": [
"2084323233",
"2806891045",
"2951603627",
"",
"",
"1897981530",
"2794316594",
"1981530182",
"2054658115",
"",
"2423689290"
],
"abstract": [
"Abstract Many neural net learning algorithms aim at finding “simple” nets to explain training data. The expectation is that the “simpler” the networks, the better the generalization on test data (→ Occam's razor). Previous implementations, however, use measures for “simplicity” that lack the power, universality and elegance of those based on Kolmogorov complexity and Solomonoff's algorithmic probability. Likewise, most previous approaches (especially those of the “Bayesian” kind) suffer from the problem of choosing appropriate priors. This paper addresses both issues. It first reviews some basic concepts of algorithmic complexity theory relevant to machine learing, and how the Solomonoff-Levin distribution (or universal prior) deals with the prior problem. The universal prior leads to a probabilistic method for finding “algorithmically simple” problem solutions with high generalization capability. The method is based on Levin complexity (a time-bounded generalization of Kolmogorov complexity) and inspired by Levin's optimal universal search algorithm. For a given problem, solution candidates are computed by efficient “self-sizing” programs that influence their own runtime and storage size. The probabilistic search algorithm finds the “good” programs (the ones quickly computing algorithmically probable solutions fitting the training data). Simulations focus on the task of discovering “algorithmically simple” neural networks with low Kolmogorov complexity and high generalization capability. It is demonstrated that the method, at least with certain toy problems where it is computationally feasible, can lead to generalization results unmatchable by previous neural network algorithms. Much remains to be done, however, to make large scale applications and “incremental learning” feasible. © 1997 Elsevier Science Ltd.",
"In this work we present a new framework to derive upper bounds on the number regions of feed-forward neural nets with ReLU activation functions. We derive all existing such bounds as special cases, however in a different representation in terms of matrices. This provides new insight and allows a more detailed analysis of the corresponding bounds. In particular, we provide a Jordan-like decomposition for the involved matrices and present new tighter results for an asymptotic setting. Moreover, new even stronger bounds may be obtained from our framework.",
"We study the complexity of functions computable by deep feedforward neural networks with piecewise linear activations in terms of the symmetries and the number of linear regions that they have. Deep networks are able to sequentially map portions of each layer's input-space to the same output. In this way, deep models compute functions that react equally to complicated patterns of different inputs. The compositional structure of these functions enables them to re-use pieces of computation exponentially often in terms of the network's depth. This paper investigates the complexity of such compositional maps and contributes new theoretical results regarding the advantage of depth for neural networks with piecewise linear activation functions. In particular, our analysis is not specific to a single family of models, and as an example, we employ it for rectifier and maxout networks. We improve complexity bounds from pre-existing work and investigate the behavior of units in higher layers.",
"",
"",
"The No Free Lunch theorems are often used to argue that domain specific knowledge is required to design successful algorithms. We use algorithmic information theory to argue the case for a universal bias allowing an algorithm to succeed in all interesting problem domains. Additionally, we give a new algorithm for off-line classification, inspired by Solomonoff induction, with good performance on all structured problems under reasonable assumptions. This includes a proof of the efficacy of the well-known heuristic of randomly selecting training data in the hope of reducing misclassification rates.",
"Many systems in nature can be described using discrete input–output maps. Without knowing details about a map, there may seem to be no a priori reason to expect that a randomly chosen input would be more likely to generate one output over another. Here, by extending fundamental results from algorithmic information theory, we show instead that for many real-world maps, the a priori probability P(x) that randomly sampled inputs generate a particular output x decays exponentially with the approximate Kolmogorov complexity @math K ( x ) of that output. These input–output maps are biased towards simplicity. We derive an upper bound P(x) ≲ @math 2 - a K ( x ) - b , which is tight for most inputs. The constants a and b, as well as many properties of P(x), can be predicted with minimal knowledge of the map. We explore this strong bias towards simple outputs in systems ranging from the folding of RNA secondary structures to systems of coupled ordinary differential equations to a stochastic financial trading model.",
"This paper explores the complexity of deep feedforward networks with linear pre-synaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piecewise linear functions based on computational geometry. We look at a deep rectifier multi-layer perceptron (MLP) with linear outputs units and compare it with a single layer version of the model. In the asymptotic regime, when the number of inputs stays constant, if the shallow model has @math hidden units and @math inputs, then the number of linear regions is @math . For a @math layer model with @math hidden units on each layer it is @math . The number @math grows faster than @math when @math tends to infinity or when @math tends to infinity and @math . Additionally, even when @math is small, if we restrict @math to be @math , we can show that a deep model has considerably more linear regions that a shallow one. We consider this as a first step towards understanding the complexity of these models and specifically towards providing suitable mathematical tools for future analysis.",
"The number of digits it takes to write down an observed sequence x\"1, ..., x\"N of a time series depends on the model with its parameters that one assumes to have generated the observed data. Accordingly, by finding the model which minimizes the description length one obtains estimates of both the integer-valued structure parameters and the real-valued system parameters.",
"",
"We combine Riemannian geometry with the mean field theory of high dimensional chaos to study the nature of signal propagation in generic, deep neural networks with random weights. Our results reveal an order-to-chaos expressivity phase transition, with networks in the chaotic phase computing nonlinear functions whose global curvature grows exponentially with depth but not width. We prove this generic class of deep random functions cannot be efficiently computed by any shallow network, going beyond prior work restricted to the analysis of single functions. Moreover, we formalize and quantitatively demonstrate the long conjectured idea that deep networks can disentangle highly curved manifolds in input space into flat manifolds in hidden space. Our theoretical analysis of the expressive power of deep networks broadly applies to arbitrary nonlinearities, and provides a quantitative underpinning for previously abstract notions about the geometry of deep functions."
]
} |
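A toy illustration of the linear-region complexity measure mentioned above: count how many distinct linear pieces a small random ReLU network realizes along a 1-D line through input space, by tracking changes in the hidden-unit activation pattern. The network sizes, line, and sampling density are illustrative assumptions, and dense sampling only approximates the exact count:

```python
import numpy as np

rng = np.random.default_rng(1)
d, width = 2, 32                                   # input dim and hidden width (assumed)
W1, b1 = rng.normal(size=(width, d)), rng.normal(size=width)
W2, b2 = rng.normal(size=(width, width)), rng.normal(size=width)

def activation_pattern(x):
    """Binary on/off pattern of all ReLUs; each pattern indexes one linear region."""
    h1 = W1 @ x + b1
    h2 = W2 @ np.maximum(h1, 0) + b2
    return np.concatenate([h1 > 0, h2 > 0])

# Walk along a line x(t) = x0 + t * v and count activation-pattern changes.
x0, v = rng.normal(size=d), rng.normal(size=d)
ts = np.linspace(-5, 5, 20000)
patterns = [tuple(activation_pattern(x0 + t * v)) for t in ts]
regions = 1 + sum(p != q for p, q in zip(patterns, patterns[1:]))
print("linear regions crossed along the line:", regions)
# Deeper / wider networks cross many more regions, which is one way the cited
# works quantify the complexity of the functions a network can compute.
```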
1812.10119 | 2906208572 | Using sequence to sequence algorithms for query expansion has not yet been explored in the Information Retrieval literature nor in that of Question-Answering. We tried to fill this gap in the literature with a custom Query Expansion engine trained and tested on open datasets. Starting from open datasets, we built a Query Expansion training set using sentence-embeddings-based Keyword Extraction. We then assessed the ability of Sequence to Sequence neural networks to capture expansion relations in the word embeddings' space. | Paraphrase generation is a closely related field. Generating new utterances that carry the same meaning expands the initial query and greatly increases the robustness of a search-based chatbot @cite_5 @cite_3 . | {
"cite_N": [
"@cite_5",
"@cite_3"
],
"mid": [
"2395170394",
"1980519283"
],
"abstract": [
"This paper presents an approach to creating intelligent conversational agents that are capable of returning appropriate responses to natural language input. Our approach consists of using a supervised learning algorithm in combination with different NLP algorithms in training the system to identify paraphrases of the user’s question stored in a database. When tested on a data set consisting of questions and answers for a current conversational agent project, our approach returned an accuracy score of 79.15 , a precision score of 77.58 and a recall score of 78.01 .",
"Paraphrases, which stem from the variety of lexical and grammatical means of expressing meaning available in a language, pose challenges for a sentence generation system. In this paper, we discuss the generation of paraphrases from predicate argument structure using a simple, uniform generation methodology. Central to our approach are lexico-grammatical resources which pair elementary semantic structures with their syntactic realization and a simple but powerful mechanism for combining resources."
]
} |
1812.09809 | 2906129565 | Recently, the hybrid convolutional neural network hidden Markov model (CNN-HMM) has been introduced for offline handwritten Chinese text recognition (HCTR) and has achieved state-of-the-art performance. In a CNN-HMM system, a handwritten text line is modeled by a series of cascading HMMs, each representing one character, and the posterior distributions of HMM states are calculated by CNN. However, modeling each of the large vocabulary of Chinese characters with a uniform and fixed number of hidden states requires high memory and computational costs and makes the tens of thousands of HMM state classes confusing. Another key issue of CNN-HMM for HCTR is the diversified writing style, which leads to model strain and a significant performance decline for specific writers. To address these issues, we propose a writer-aware CNN based on parsimonious HMM (WCNN-PHMM). Validated on the ICDAR 2013 competition of CASIA-HWDB database, the more compact WCNN-PHMM of a 7360-class vocabulary can achieve a relative character error rate (CER) reduction of 16.6 over the conventional CNN-HMM without considering language modeling. Moreover, the state-tying results of PHMM explicitly show the information sharing among similar characters and the confusion reduction of tied state classes. Finally, we visualize the learned writer codes and demonstrate the strong relationship with the writing styles of different writers. To the best of our knowledge, WCNN-PHMM yields the best results on the ICDAR 2013 competition set, demonstrating its power when enlarging the size of the character vocabulary. | Offline HCTR can be formulated as the Bayesian decision problem of maximizing the posterior probability @math , where @math is the feature sequence of a given text line image and @math is the underlying @math -character sequence (a reconstruction of the standard equations follows this record). In oversegmentation-based approaches @cite_24 , the posterior probability @math can be computed by searching for the optimal segmentation path and the corresponding posterior probability of the character sequence, combining the character classifier, the segmentation model and the geometric language model. Regarding segmentation-free approaches, the CTC-based and HMM-based approaches are the two mainstream frameworks. In the CTC-based approach @cite_39 , a special blank character class and a defined many-to-one mapping function are introduced to directly compute @math with the forward-backward algorithm @cite_9 . In the HMM-based approach @cite_14 , the handwritten text line is modeled by a series of cascading HMMs, each representing one character class. Accordingly, @math can be reformulated as a probability over HMM states using the Bayesian formula. More details will be provided in . | {
"cite_N": [
"@cite_24",
"@cite_9",
"@cite_14",
"@cite_39"
],
"mid": [
"2033404582",
"2127141656",
"2808523546",
"2785979806"
],
"abstract": [
"This paper presents an effective approach for the offline recognition of unconstrained handwritten Chinese texts. Under the general integrated segmentation-and-recognition framework with character oversegmentation, we investigate three important issues: candidate path evaluation, path search, and parameter estimation. For path evaluation, we combine multiple contexts (character recognition scores, geometric and linguistic contexts) from the Bayesian decision view, and convert the classifier outputs to posterior probabilities via confidence transformation. In path search, we use a refined beam search algorithm to improve the search efficiency and, meanwhile, use a candidate character augmentation strategy to improve the recognition accuracy. The combining weights of the path evaluation function are optimized by supervised learning using a Maximum Character Accuracy criterion. We evaluated the recognition performance on a Chinese handwriting database CASIA-HWDB, which contains nearly four million character samples of 7,356 classes and 5,091 pages of unconstrained handwritten texts. The experimental results show that confidence transformation and combining multiple contexts improve the text line recognition performance significantly. On a test set of 1,015 handwritten pages, the proposed approach achieved character-level accurate rate of 90.75 percent and correct rate of 91.39 percent, which are superior by far to the best results reported in the literature.",
"Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems. An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.",
"This paper proposes an effective segmentation-free approach using a hybrid neural network hidden Markov model (NN-HMM) for offline handwritten Chinese text recognition (HCTR). In the general Bayesian framework, the handwritten Chinese text line is sequentially modeled by HMMs with each representing one character class, while the NN-based classifier is adopted to calculate the posterior probability of all HMM states. The key issues in feature extraction, character modeling, and language modeling are comprehensively investigated to show the effectiveness of NN-HMM framework for offline HCTR. First, a conventional deep neural network (DNN) architecture is studied with a well-designed feature extractor. As for the training procedure, the label refinement using forced alignment and the sequence training can yield significant gains on top of the frame-level cross-entropy criterion. Second, a deep convolutional neural network (DCNN) with automatically learned discriminative features demonstrates its superiority to DNN in the HMM framework. Moreover, to solve the challenging problem of distinguishing quite confusing classes due to the large vocabulary of Chinese characters, NN-based classifier should output 19900 HMM states as the classification units via a high-resolution modeling within each character. On the ICDAR 2013 competition task of CASIA-HWDB database, DNN-HMM yields a promising character error rate (CER) of 5.24 by making a good trade-off between the computational complexity and recognition accuracy. To the best of our knowledge, DCNN-HMM can achieve a best published CER of 3.53 .",
"The Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) has been demonstrated successful in handwritten text recognition of Western and Arabic scripts. It is totally segmentation free and can be trained directly from text line images. However, the application of LSTM-RNNs (including Multi-Dimensional LSTM-RNN (MDLSTM-RNN)) to Chinese text recognition has shown limited success, even when training them with large datasets and using pre-training on datasets of other languages. In this paper, we propose a handwritten Chinese text recognition method by using Separable MDLSTMRNN (SMDLSTM-RNN) modules, which extract contextual information in various directions, and consume much less computation efforts and resources compared with the traditional MDLSTMRNN. Experimental results on the ICDAR-2013 competition dataset show that the proposed method performs significantly better than the previous LSTM-based methods, and can compete with the state-of-the-art systems."
]
} |
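The @math placeholders in this corpus hide the actual formulas. A standard hybrid NN-HMM reconstruction of the decision rule described above reads as follows, where the notation (X for the feature sequence, C for the character sequence, s_t for the HMM state at frame t) is an assumption rather than taken verbatim from the paper:

```latex
% Decision rule for offline HCTR (P(C) is supplied by the language model):
\begin{align*}
\hat{C} &= \arg\max_{C} P(C \mid X)
         = \arg\max_{C} \, p(X \mid C)\, P(C)
  && \text{(Bayes rule; } p(X) \text{ does not depend on } C\text{)} \\
p(X \mid C) &\approx \max_{s_1,\dots,s_T} \; \prod_{t=1}^{T}
         p(x_t \mid s_t)\, P(s_t \mid s_{t-1})
  && \text{(cascaded character HMMs, Viterbi approximation)} \\
p(x_t \mid s_t) &\propto \frac{P(s_t \mid x_t)}{P(s_t)}
  && \text{(NN state posteriors rescaled into likelihoods)}
\end{align*}
```

The CTC alternative replaces the HMM factorization with a sum over all blank-augmented label alignments, computed with the same forward-backward machinery.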
1812.09809 | 2906129565 | Recently, the hybrid convolutional neural network hidden Markov model (CNN-HMM) has been introduced for offline handwritten Chinese text recognition (HCTR) and has achieved state-of-the-art performance. In a CNN-HMM system, a handwritten text line is modeled by a series of cascading HMMs, each representing one character, and the posterior distributions of HMM states are calculated by CNN. However, modeling each of the large vocabulary of Chinese characters with a uniform and fixed number of hidden states requires high memory and computational costs and makes the tens of thousands of HMM state classes confusing. Another key issue of CNN-HMM for HCTR is the diversified writing style, which leads to model strain and a significant performance decline for specific writers. To address these issues, we propose a writer-aware CNN based on parsimonious HMM (WCNN-PHMM). Validated on the ICDAR 2013 competition of CASIA-HWDB database, the more compact WCNN-PHMM of a 7360-class vocabulary can achieve a relative character error rate (CER) reduction of 16.6 over the conventional CNN-HMM without considering language modeling. Moreover, the state-tying results of PHMM explicitly show the information sharing among similar characters and the confusion reduction of tied state classes. Finally, we visualize the learned writer codes and demonstrate the strong relationship with the writing styles of different writers. To the best of our knowledge, WCNN-PHMM yields the best results on the ICDAR 2013 competition set, demonstrating its power when enlarging the size of the character vocabulary. | This study is a comprehensive extension of our previous conference papers @cite_46 @cite_40 , with the following new contributions: 1) the proposed PHMM is introduced with more technical details and verified for a more promising CNN-HMM, rather than the DNN-HMM in @cite_46 ; 2) we present a novel unsupervised adaptation strategy with writer codes and adaptation layers to guide the convolutional layers in CNN-HMM, rather than using the fully connected layers in DNN-HMM @cite_40 ; 3) WCNN-PHMM perfectly combines the two techniques to yield a compact and high-performance model; and 4) all experiments are redesigned to verify the effectiveness of WCNN-PHMM, and detailed analyses are described to give the readers a deep understanding of our approach. | {
"cite_N": [
"@cite_40",
"@cite_46"
],
"mid": [
"2573871018",
"2887688727"
],
"abstract": [
"Recently, we propose deep neural network based hidden Markov models (DNN-HMMs) for offline handwritten Chinese text recognition. In this study, we design a novel writer code based adaptation on top of the DNN-HMM to further improve the accuracy via a customized recognizer. The writer adaptation is implemented by incorporating the new layers with the original input or hidden layers of the writer-independent DNN. These new layers are driven by the so-called writer code, which guides and adapts the DNN-based recognizer with the writer information. In the training stage, the writer-aware layers are jointly learned with the conventional DNN layers in an alternative manner. In the recognition stage, with the initial recognition results from the first-pass decoding with the writer-independent DNN, an unsupervised adaptation is performed to generate the writer code via the cross-entropy criterion for the subsequent second-pass decoding. The experiments on the most challenging task of ICDAR 2013 Chinese handwriting competition show that our proposed adaptation approach can achieve consistent and significant improvements of recognition accuracy over a highperformance writer-independent DNN-HMM based recognizer across all 60 writers, yielding a relative character error rate reduction of 23.62 in average.",
"Recently, hidden Markov models (HMMs) have achieved promising results for offline handwritten Chinese text recognition. However, due to the large vocabulary of Chinese characters with each modeled by a uniform and fixed number of hidden states, a high demand of memory and computation is required. In this study, to address this issue, we present parsimonious HMMs via the state tying which can fully utilize the similarities among different Chinese characters. Two-step algorithm with the data-driven question-set is adopted to generate the tied-state pool using the likelihood measure. The proposed parsimonious HMMs with both Gaussian mixture models (GMMs) and deep neural networks (DNNs) as the emission distributions not only lead to a compact model but also improve the recognition accuracy via the data sharing for the tied states and the confusion decreasing among state classes. Tested on ICDAR-2013 competition database, in the best configured case, the new parsimonious DNN-HMM can yield a relative character error rate (CER) reduction of 6.2 , 25 reduction of model size and 60 reduction of decoding time over the conventional DNN-HMM. In the compact setting case of average 1-state HMM, our parsimonious DNN-HMM significantly outperforms the conventional DNN-HMM with a relative CER reduction of 35.5 ."
]
} |
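The writer-code adaptation summarized above (a learned per-writer vector fed through adaptation layers that steer an otherwise writer-independent network) can be pictured with a small sketch. The PyTorch snippet below is a minimal illustration under assumed dimensions, not the authors' WCNN-PHMM implementation: the class name, the 32-dimensional code, and the additive per-channel bias injection are all illustrative choices.

```python
import torch
import torch.nn as nn

class WriterAdaptedCNN(nn.Module):
    """Toy writer-aware CNN: a per-writer code is mapped to a per-channel
    bias that adapts the writer-independent conv stack. All sizes are
    illustrative, not those of WCNN-PHMM."""
    def __init__(self, n_writers, code_dim=32, n_state_classes=100):
        super().__init__()
        self.codes = nn.Embedding(n_writers, code_dim)  # one learned code per writer
        self.adapt = nn.Linear(code_dim, 16)            # code -> per-channel bias
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.cls = nn.Linear(16, n_state_classes)       # HMM-state posteriors

    def forward(self, x, writer_id):
        bias = self.adapt(self.codes(writer_id))        # (B, 16)
        h = self.conv[0](x) + bias[:, :, None, None]    # inject writer info early
        for layer in list(self.conv)[1:]:
            h = layer(h)
        return self.cls(h.flatten(1))

model = WriterAdaptedCNN(n_writers=60)
out = model(torch.randn(2, 1, 32, 32), torch.tensor([0, 7]))
print(out.shape)  # torch.Size([2, 100])
```

In the unsupervised setting described in the paper, a new writer's code would not be looked up but estimated from first-pass decoding results with the network frozen; here the codes are simply trained jointly for brevity.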
1812.10027 | 2949628230 | Recent years have witnessed a rapid growth of deep-network based services and applications. A practical and critical problem thus has emerged: how to effectively deploy the deep neural network models such that they can be executed efficiently. Conventional cloud-based approaches usually run the deep models in data center servers, causing large latency because a significant amount of data has to be transferred from the edge of network to the data center. In this paper, we propose JALAD, a joint accuracy- and latency-aware execution framework, which decouples a deep neural network so that a part of it will run at edge devices and the other part inside the conventional cloud, while only a minimum amount of data has to be transferred between them. Though the idea seems straightforward, we are facing challenges including i) how to find the best partition of a deep structure; ii) how to deploy the component at an edge device that only has limited computation power; and iii) how to minimize the overall execution latency. Our answers to these questions are a set of strategies in JALAD, including 1) A normalization based in-layer data compression strategy by jointly considering compression rate and model accuracy; 2) A latency-aware deep decoupling strategy to minimize the overall execution latency; and 3) An edge-cloud structure adaptation strategy that dynamically changes the decoupling for different network conditions. Experiments demonstrate that our solution can significantly reduce the execution latency: it speeds up the overall inference execution with a guaranteed model accuracy loss. | Deep neural networks have become the default structure for today's practical machine learning models @cite_37 , thanks to their simplicity and effectiveness. In order to deploy the many pre-trained deep neural networks and run them on different devices, researchers have proposed the following deployment solutions. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2145287260"
],
"abstract": [
"In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance."
]
} |
1812.10027 | 2949628230 | Recent years have witnessed a rapid growth of deep-network based services and applications. A practical and critical problem thus has emerged: how to effectively deploy the deep neural network models such that they can be executed efficiently. Conventional cloud-based approaches usually run the deep models in data center servers, causing large latency because a significant amount of data has to be transferred from the edge of network to the data center. In this paper, we propose JALAD, a joint accuracy- and latency-aware execution framework, which decouples a deep neural network so that a part of it will run at edge devices and the other part inside the conventional cloud, while only a minimum amount of data has to be transferred between them. Though the idea seems straightforward, we are facing challenges including i) how to find the best partition of a deep structure; ii) how to deploy the component at an edge device that only has limited computation power; and iii) how to minimize the overall execution latency. Our answers to these questions are a set of strategies in JALAD, including 1) A normalization based in-layer data compression strategy by jointly considering compression rate and model accuracy; 2) A latency-aware deep decoupling strategy to minimize the overall execution latency; and 3) An edge-cloud structure adaptation strategy that dynamically changes the decoupling for different network conditions. Experiments demonstrate that our solution can significantly reduce the execution latency: it speeds up the overall inference execution with a guaranteed model accuracy loss. | Attracted by the elasticity in computing power and flexible collaboration, hierarchically distributed computing structures (e.g., cloud computing, fog computing, edge computing) have naturally become the choice for supporting deep-structure-based services and applications @cite_43 @cite_35 @cite_21 @cite_5 @cite_17 . Considering the deployment location of the model, state-of-the-art approaches can be divided into three classes. | {
"cite_N": [
"@cite_35",
"@cite_21",
"@cite_43",
"@cite_5",
"@cite_17"
],
"mid": [
"2416799949",
"",
"2045371716",
"2208484250",
"2777638777"
],
"abstract": [
"The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction.",
"",
"Despite the increasing usage of cloud computing, there are still issues unsolved due to inherent problems of cloud computing such as unreliable latency, lack of mobility support and location-awareness. Fog computing can address those problems by providing elastic resources and services to end users at the edge of network, while cloud computing are more about providing resources distributed in the core network. This survey discusses the definition of fog computing and similar concepts, introduces representative application scenarios, and identifies various aspects of issues we may encounter when designing and implementing fog computing systems. It also highlights some opportunities and challenges, as direction of potential future work, in related techniques that need to be considered in the context of fog computing.",
"The paper considers the conceptual approach for organization of the vertical hierarchical links between the scalable distributed computing paradigms: Cloud Computing, Fog Computing and Dew Computing. In this paper, the Dew Computing is described and recognized as a new structural layer in the existing distributed computing hierarchy. In the existing computing hierarchy, the Dew computing is positioned as the ground level for the Cloud and Fog computing paradigms. Vertical, complementary, hierarchical division from Cloud to Dew Computing satisfies the needs of high- and low-end computing demands in everyday life and work. These new computing paradigms lower the cost and improve the performance, particularly for concepts and applications such as the Internet of Things (IoT) and the Internet of Everything (IoE). In addition, the Dew computing paradigm will require new programming models that will efficiently reduce the complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity computing.",
"With the increasing commoditization of computer vision, speech recognition and machine translation systems and the widespread deployment of learning-based back-end technologies such as digital advertising and intelligent infrastructures, AI (Artificial Intelligence) has moved from research labs to production. These changes have been made possible by unprecedented levels of data and computation, by methodological advances in machine learning, by innovations in systems software and architectures, and by the broad accessibility of these technologies. The next generation of AI systems promises to accelerate these developments and increasingly impact our lives via frequent interactions and making (often mission-critical) decisions on our behalf, often in highly personalized contexts. Realizing this promise, however, raises daunting challenges. In particular, we need AI systems that make timely and safe decisions in unpredictable environments, that are robust against sophisticated adversaries, and that can process ever increasing amounts of data across organizations and individuals without compromising confidentiality. These challenges will be exacerbated by the end of the Moore's Law, which will constrain the amount of data these technologies can store and process. In this paper, we propose several open research directions in systems, architectures, and security that can address these challenges and help unlock AI's potential to improve lives and society."
]
} |
1812.10027 | 2949628230 | Recent years have witnessed a rapid growth of deep-network based services and applications. A practical and critical problem thus has emerged: how to effectively deploy the deep neural network models such that they can be executed efficiently. Conventional cloud-based approaches usually run the deep models in data center servers, causing large latency because a significant amount of data has to be transferred from the edge of network to the data center. In this paper, we propose JALAD, a joint accuracy- and latency-aware execution framework, which decouples a deep neural network so that a part of it will run at edge devices and the other part inside the conventional cloud, while only a minimum amount of data has to be transferred between them. Though the idea seems straightforward, we are facing challenges including i) how to find the best partition of a deep structure; ii) how to deploy the component at an edge device that only has limited computation power; and iii) how to minimize the overall execution latency. Our answers to these questions are a set of strategies in JALAD, including 1) A normalization based in-layer data compression strategy by jointly considering compression rate and model accuracy; 2) A latency-aware deep decoupling strategy to minimize the overall execution latency; and 3) An edge-cloud structure adaptation strategy that dynamically changes the decoupling for different network conditions. Experiments demonstrate that our solution can significantly reduce the execution latency: it speeds up the overall inference execution with a guaranteed model accuracy loss. | Conventionally, most of today's deep neural networks are deployed on dedicated servers in the data center @cite_5 . Users usually have to upload a large amount of original data (e.g., images) to the servers, causing high latency. To reduce this latency, @cite_34 proposed a bandwidth-efficient object tracking system that drops video frames before sending the raw video to the cloud; @cite_32 proposed to make use of blurred frames so as to reduce the upload load. The limitation of conventional cloud-based studies is that, to some extent, they still have to upload the original data, which causes large latency. | {
"cite_N": [
"@cite_5",
"@cite_34",
"@cite_32"
],
"mid": [
"2208484250",
"2029016069",
""
],
"abstract": [
"The paper considers the conceptual approach for organization of the vertical hierarchical links between the scalable distributed computing paradigms: Cloud Computing, Fog Computing and Dew Computing. In this paper, the Dew Computing is described and recognized as a new structural layer in the existing distributed computing hierarchy. In the existing computing hierarchy, the Dew computing is positioned as the ground level for the Cloud and Fog computing paradigms. Vertical, complementary, hierarchical division from Cloud to Dew Computing satisfies the needs of high- and low-end computing demands in everyday life and work. These new computing paradigms lower the cost and improve the performance, particularly for concepts and applications such as the Internet of Things (IoT) and the Internet of Everything (IoE). In addition, the Dew computing paradigm will require new programming models that will efficiently reduce the complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity computing.",
"Glimpse is a continuous, real-time object recognition system for camera-equipped mobile devices. Glimpse captures full-motion video, locates objects of interest, recognizes and labels them, and tracks them from frame to frame for the user. Because the algorithms for object recognition entail significant computation, Glimpse runs them on server machines. When the latency between the server and mobile device is higher than a frame-time, this approach lowers object recognition accuracy. To regain accuracy, Glimpse uses an active cache of video frames on the mobile device. A subset of the frames in the active cache are used to track objects on the mobile, using (stale) hints about objects that arrive from the server from time to time. To reduce network bandwidth usage, Glimpse computes trigger frames to send to the server for recognizing and labeling. Experiments with Android smartphones and Google Glass over Verizon, ATT without Glimpse, continuous detection is non-functional (0.2 -1.9 precision).",
""
]
} |
1812.10027 | 2949628230 | Recent years have witnessed a rapid growth of deep-network based services and applications. A practical and critical problem thus has emerged: how to effectively deploy the deep neural network models such that they can be executed efficiently. Conventional cloud-based approaches usually run the deep models in data center servers, causing large latency because a significant amount of data has to be transferred from the edge of network to the data center. In this paper, we propose JALAD, a joint accuracy- and latency-aware execution framework, which decouples a deep neural network so that a part of it will run at edge devices and the other part inside the conventional cloud, while only a minimum amount of data has to be transferred between them. Though the idea seems straightforward, we are facing challenges including i) how to find the best partition of a deep structure; ii) how to deploy the component at an edge device that only has limited computation power; and iii) how to minimize the overall execution latency. Our answers to these questions are a set of strategies in JALAD, including 1) A normalization based in-layer data compression strategy by jointly considering compression rate and model accuracy; 2) A latency-aware deep decoupling strategy to minimize the overall execution latency; and 3) An edge-cloud structure adaptation strategy that dynamically changes the decoupling for different network conditions. Experiments demonstrate that our solution can significantly reduce the execution latency: it speeds up the overall inference execution with a guaranteed model accuracy loss. | However, these earlier proposals took into account only latency measurements and the raw data quantity between layers, ignoring the sparsity of the feature maps; as a result, their partition point frequently falls on the first or the last layer of a DNN structure in their experiments, which makes the divided model degenerate into a cloud-only or client-only scheme and is thus less practical. Another work @cite_23 proposed to adaptively upload compressed data or deploy a compressed model locally for joint execution, but it did not examine or modify the deep model itself. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2792220137"
],
"abstract": [
"Deep learning shows great promise in providing more intelligence to augmented reality (AR) devices, but few AR apps use deep learning due to lack of infrastructure support. Deep learning algorithms are computationally intensive, and front-end devices cannot deliver sufficient compute power for real-time processing. In this work, we design a framework that ties together front-end devices with more powerful backend “helpers” (e.g., home servers) to allow deep learning to be executed locally or remotely in the cloud edge. We consider the complex interaction between model accuracy, video quality, battery constraints, network data usage, and network conditions to determine an optimal offloading strategy. Our contributions are: (1) extensive measurements to understand the tradeoffs between video quality, network conditions, battery consumption, processing delay, and model accuracy; (2) a measurement-driven mathematical framework that efficiently solves the resulting combinatorial optimization problem; (3) an Android application that performs real-time object detection for AR applications, with experimental results that demonstrate the superiority of our approach."
]
} |
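Concretely, the partition decision debated in this paragraph can be reduced to one objective: minimize edge compute plus transfer of the (possibly compressed) intermediate feature map plus cloud compute. The brute-force sketch below uses invented per-layer timings and sizes and a single assumed compression ratio; JALAD's actual strategy additionally ties the compression rate to an accuracy-loss bound, which this toy version omits.

```python
# Hypothetical per-layer profile: (name, edge_ms, cloud_ms, output_kb).
layers = [
    ("conv1", 30.0, 3.0, 800.0),
    ("pool1",  5.0, 0.5, 200.0),
    ("conv2", 60.0, 6.0, 400.0),
    ("fc1",   20.0, 2.0,  16.0),
    ("fc2",   10.0, 1.0,   4.0),
]
INPUT_KB = 600.0     # raw input size if everything is uploaded
SPARSE_RATIO = 0.25  # assumed in-layer compression of the sparse feature maps

def total_latency(cut, bandwidth_kbps):
    """Latency if layers[:cut] run on the edge and layers[cut:] in the cloud."""
    edge_ms = sum(l[1] for l in layers[:cut])
    cloud_ms = sum(l[2] for l in layers[cut:])
    sent_kb = INPUT_KB if cut == 0 else layers[cut - 1][3] * SPARSE_RATIO
    return edge_ms + cloud_ms + 1000.0 * sent_kb / bandwidth_kbps

for bw in (500.0, 5000.0):  # slow vs. fast uplink, in kB/s
    best = min(range(len(layers) + 1), key=lambda c: total_latency(c, bw))
    print(f"bandwidth={bw:.0f} kB/s -> cut after {best} layers, "
          f"{total_latency(best, bw):.1f} ms")
```

Note how accounting for feature-map sparsity (SPARSE_RATIO) is exactly what keeps the optimum from collapsing onto the first or last layer: transferring a compressed mid-network feature map can beat both uploading the raw input and running the whole model on the edge.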
1812.10061 | 2906211136 | Neural models enjoy widespread use across a variety of tasks and have grown to become crucial components of many industrial systems. Despite their effectiveness and extensive popularity, they are not without their exploitable flaws. Initially applied to computer vision systems, the generation of adversarial examples is a process in which seemingly imperceptible perturbations are made to an image, with the purpose of inducing a deep learning based classifier to misclassify the image. Due to recent trends in speech processing, this has become a noticeable issue in speech recognition models. In late 2017, an attack was shown to be quite effective against the Speech Commands classification model. Limited-vocabulary speech classifiers, such as the Speech Commands model, are used quite frequently in a variety of applications, particularly in managing automated attendants in telephony contexts. As such, adversarial examples produced by this attack could have real-world consequences. While previous work in defending against these adversarial examples has investigated using audio preprocessing to reduce or distort adversarial noise, this work explores the idea of flooding particular frequency bands of an audio signal with random noise in order to detect adversarial examples. This technique of flooding, which does not require retraining or modifying the model, is inspired by work done in computer vision and builds on the idea that speech classifiers are relatively robust to natural noise. A combined defense incorporating 5 different frequency bands for flooding the signal with noise outperformed other existing defenses in the audio space, detecting adversarial examples with 91.8% precision and 93.5% recall. | The attack against Speech Commands described by @cite_4 is particularly relevant within the realm of telephony, as it could be adapted to fool limited-vocabulary speech classifiers used for automated attendants. This attack produces adversarial examples using a gradient-free genetic algorithm, allowing the attack to penetrate the non-differentiable layers of preprocessing typically used in automatic speech recognition. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2782403400"
],
"abstract": [
"Speech is a common and effective way of communication between humans, and modern consumer devices such as smartphones and home hubs are equipped with deep learning based accurate automatic speech recognition to enable natural interaction between humans and machines. Recently, researchers have demonstrated powerful attacks against machine learning models that can fool them to produceincorrect results. However, nearly all previous research in adversarial attacks has focused on image recognition and object detection models. In this short paper, we present a first of its kind demonstration of adversarial attacks against speech classification model. Our algorithm performs targeted attacks with 87 success by adding small background noise without having to know the underlying model parameter and architecture. Our attack only changes the least significant bits of a subset of audio clip samples, and the noise does not change 89 the human listener's perception of the audio clip as evaluated in our human study."
]
} |
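The gradient-free genetic attack this abstract describes can be sketched as an evolutionary loop that only queries the model's output scores, never its gradients. Everything below is illustrative rather than Alzantot et al.'s actual configuration: the stand-in target_score "model", the population size, crossover by averaging, and the mutation scale are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_score(audio):
    """Stand-in for the black-box model: 'probability' of the target label.
    A fixed projection plus a sigmoid, just so the example runs end to end."""
    w = np.sin(np.arange(audio.size))
    return 1.0 / (1.0 + np.exp(-audio @ w / audio.size))

def genetic_attack(clean, pop=20, gens=50, eps=0.005):
    """Evolve small perturbations of `clean` that raise the target score."""
    pop_x = clean + rng.normal(0.0, eps, (pop, clean.size))
    for _ in range(gens):
        fitness = np.array([target_score(x) for x in pop_x])
        elite = pop_x[np.argsort(fitness)[::-1][: pop // 2]]   # fittest half
        pairs = elite[rng.integers(0, len(elite), (pop - len(elite), 2))]
        children = pairs.mean(axis=1)                          # crossover
        children += rng.normal(0.0, eps, children.shape)       # mutation
        pop_x = np.vstack([elite, children])
    return pop_x[np.argmax([target_score(x) for x in pop_x])]

clean = rng.normal(0.0, 0.1, 16000)  # one second of fake 16 kHz audio
adv = genetic_attack(clean)
print(target_score(clean), target_score(adv))
```

Because the loop never differentiates through the model, any non-differentiable front end (MFCC extraction, quantization) is simply absorbed into the black-box scoring call, which is what makes this family of attacks awkward to defend against with gradient masking.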
1812.10061 | 2906211136 | Neural models enjoy widespread use across a variety of tasks and have grown to become crucial components of many industrial systems. Despite their effectiveness and extensive popularity, they are not without their exploitable flaws. Initially applied to computer vision systems, the generation of adversarial examples is a process in which seemingly imperceptible perturbations are made to an image, with the purpose of inducing a deep learning based classifier to misclassify the image. Due to recent trends in speech processing, this has become a noticeable issue in speech recognition models. In late 2017, an attack was shown to be quite effective against the Speech Commands classification model. Limited-vocabulary speech classifiers, such as the Speech Commands model, are used quite frequently in a variety of applications, particularly in managing automated attendants in telephony contexts. As such, adversarial examples produced by this attack could have real-world consequences. While previous work in defending against these adversarial examples has investigated using audio preprocessing to reduce or distort adversarial noise, this work explores the idea of flooding particular frequency bands of an audio signal with random noise in order to detect adversarial examples. This technique of flooding, which does not require retraining or modifying the model, is inspired by work done in computer vision and builds on the idea that speech classifiers are relatively robust to natural noise. A combined defense incorporating 5 different frequency bands for flooding the signal with noise outperformed other existing defenses in the audio space, detecting adversarial examples with 91.8% precision and 93.5% recall. | Recent work in computer vision has shown that some preprocessing, such as JPEG and JPEG2000 image compression @cite_3 or cropping and resizing @cite_9 , can be employed with a certain degree of success in defending against adversarial attacks. In a similar vein, preprocessing defenses have also been used for defending against adversarial attacks on speech recognition. @cite_10 were able to achieve some success using local smoothing, down-sampling, and quantization for disrupting adversarial examples produced by the aforementioned attack. While quantizing with @math , they were able to achieve their best result, correctly recovering the original label 63.8% of the time. | {
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_3"
],
"mid": [
"2949162339",
"2916239775",
"2794785996"
],
"abstract": [
"Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in hand-written digits on a scanned check being incorrectly classified but looking normal when humans see them. This research assesses the extent to which adversarial examples pose a security threat, when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur in acquiring the image in a real world application, such as using a scanner to acquire digits for a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus just acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that for text driven classification, adversarial examples are an academic curiosity, not a security threat.",
"",
"Adversarial examples are known to have a negative effect on the performance of classifiers which have otherwise good performance on undisturbed images. These examples are generated by adding non-random noise to the testing samples in order to make classifier misclassify the given data. Adversarial attacks use these intentionally generated examples and they pose a security risk to the machine learning based systems. To be immune to such attacks, it is desirable to have a pre-processing mechanism which removes these effects causing misclassification while keeping the content of the image. JPEG and JPEG2000 are well-known image compression techniques which suppress the high-frequency content taking the human visual system into account. JPEG has been also shown to be an effective method for reducing adversarial noise. In this paper, we propose applying JPEG2000 compression as an alternative and systematically compare the classification performance of adversarial images compressed using JPEG and JPEG2000 at different target PSNR values and maximum compression levels. Our experiments show that JPEG2000 is more effective in reducing adversarial noise as it allows higher compression rates with less distortion and it does not introduce blocking artifacts."
]
} |
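The preprocessing defenses listed here (local smoothing, down-sampling, quantization) are each a few lines of signal processing applied before the classifier sees the waveform. A minimal NumPy sketch follows; the window size, resampling factor, and quantization level q=256 are assumptions of mine, not the cited work's parameters (the value behind @math above is not given here).

```python
import numpy as np

def local_smoothing(x, k=3):
    """Median-filter each sample over a window of k neighbors."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.median(np.lib.stride_tricks.sliding_window_view(xp, k), axis=1)

def down_up_sample(x, factor=2):
    """Keep every `factor`-th sample, then linearly interpolate back."""
    idx = np.arange(0, x.size, factor)
    return np.interp(np.arange(x.size), idx, x[idx])

def quantize(x, q=256):
    """Round amplitudes in [-1, 1] to q discrete levels."""
    return np.round((x + 1.0) / 2.0 * (q - 1)) / (q - 1) * 2.0 - 1.0

x = np.sin(np.linspace(0.0, 20.0 * np.pi, 16000))  # toy 'speech' signal
for defense in (local_smoothing, down_up_sample, quantize):
    print(defense.__name__, np.abs(defense(x) - x).max())
```

The shared intuition is that a genuine utterance survives these distortions while a finely tuned adversarial perturbation often does not; the weakness, as the modest recovery rate quoted above suggests, is that an adaptive attacker can fold the transform into the attack.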
1812.10061 | 2906211136 | Neural models enjoy widespread use across a variety of tasks and have grown to become crucial components of many industrial systems. Despite their effectiveness and extensive popularity, they are not without their exploitable flaws. Initially applied to computer vision systems, the generation of adversarial examples is a process in which seemingly imperceptible perturbations are made to an image, with the purpose of inducing a deep learning based classifier to misclassify the image. Due to recent trends in speech processing, this has become a noticeable issue in speech recognition models. In late 2017, an attack was shown to be quite effective against the Speech Commands classification model. Limited-vocabulary speech classifiers, such as the Speech Commands model, are used quite frequently in a variety of applications, particularly in managing automated attendants in telephony contexts. As such, adversarial examples produced by this attack could have real-world consequences. While previous work in defending against these adversarial examples has investigated using audio preprocessing to reduce or distort adversarial noise, this work explores the idea of flooding particular frequency bands of an audio signal with random noise in order to detect adversarial examples. This technique of flooding, which does not require retraining or modifying the model, is inspired by work done in computer vision and builds on the idea that speech classifiers are relatively robust to natural noise. A combined defense incorporating 5 different frequency bands for flooding the signal with noise outperformed other existing defenses in the audio space, detecting adversarial examples with 91.8% precision and 93.5% recall. | While the aforementioned defenses focus on removing or distorting adversarial noise, one could also defend against an adversarial example by adding noise to the signal. Artificial neural network (ANN) classifiers are relatively robust to natural noise, whereas adversarial examples are less so. @cite_13 used this observation and proposed a procedure for defending against adversarial images that involves corrupting localized regions of the image through the redistribution of pixel values. This procedure, which they refer to as "pixel deflection," was shown to be very effective for retrieving the true class of an adversarial attack. The defense strategy proposed in @cite_13 is more sophisticated than merely corrupting images by indiscriminately redistributing pixels; they target specific pixels of the image to deflect and also perform a subsequent wavelet-based denoising procedure for softening the corruption's impact on benign inputs. Regardless of the many aspects of the pixel deflection defense that seem to only be directly applicable to defenses within computer vision, the fundamental motivating idea behind this strategy---that ANN classifiers are robust to natural noise on benign inputs relative to adversarial inputs---is an observation that should also hold true for audio classification. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2962933288"
],
"abstract": [
"CNNs are poised to become integral parts of many critical systems. Despite their robustness to natural variations, image pixel values can be manipulated, via small, carefully crafted, imperceptible perturbations, to cause a model to misclassify images. We present an algorithm to process an image so that classification accuracy is significantly preserved in the presence of such adversarial manipulations. Image classifiers tend to be robust to natural noise, and adversarial attacks tend to be agnostic to object location. These observations motivate our strategy, which leverages model robustness to defend against adversarial perturbations by forcing the image to match natural image statistics. Our algorithm locally corrupts the image by redistributing pixel values via a process we term pixel deflection. A subsequent wavelet-based denoising operation softens this corruption, as well as some of the adversarial changes. We demonstrate experimentally that the combination of these techniques enables the effective recovery of the true class, against a variety of robust attacks. Our results compare favorably with current state-of-the-art defenses, without requiring retraining or modifying the CNN."
]
} |
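The flooding defense itself reduces to a simple detection rule: add random noise confined to chosen frequency bands, re-classify, and flag the input as adversarial if the prediction flips. The sketch below implements band-limited flooding via FFT masking against a dummy classifier; the band edges, noise scale, and flip-on-any-band rule are simplifications of the paper's combined 5-band defense, not its exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
SR = 16000  # sample rate in Hz

def flood_band(x, lo_hz, hi_hz, scale=0.05):
    """Add white noise confined to [lo_hz, hi_hz] by masking its spectrum."""
    spec = np.fft.rfft(rng.normal(0.0, scale, x.size))
    freqs = np.fft.rfftfreq(x.size, 1.0 / SR)
    spec[(freqs < lo_hz) | (freqs > hi_hz)] = 0.0
    return x + np.fft.irfft(spec, n=x.size)

def classify(x):
    """Stand-in classifier: sign of a fixed projection of the waveform."""
    return int(x @ np.cos(np.arange(x.size)) > 0.0)

def is_adversarial(x, bands=((0, 1000), (1000, 2000), (2000, 4000))):
    """Flag the input if flooding any band changes the predicted label."""
    base = classify(x)
    return any(classify(flood_band(x, lo, hi)) != base for lo, hi in bands)

x = rng.normal(0.0, 0.1, SR)  # one second of toy audio
print(is_adversarial(x))
```

The appeal of this design is operational: like pixel deflection, it needs no retraining and treats the model as a black box, so it can wrap an already-deployed classifier.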
1812.09961 | 2947357873 | Appropriate test data is a crucial factor for success in dynamic software testing, e.g., fuzzing. Most real-world applications, however, accept complex structured inputs containing data surrounded by meta-data, which is processed in several stages comprising parsing and rendering (execution). This makes automatically generating efficient test data a non-trivial and laborious activity. The success of deep learning in solving complex tasks, especially generative tasks, has motivated us to exploit it in the context of complex test data generation. To do so, a neural language model (NLM) based on deep recurrent neural networks (RNNs) is used to learn the structure of complex inputs. Our approach generates new test data while distinguishing between data and meta-data, which makes it possible to target both the parsing and rendering parts of the software under test (SUT). Such test data can improve input fuzzing. To assess the proposed approach, we developed a modular file format fuzzer, IUST-DeepFuzz. Our experiments on MuPDF, a lightweight and popular portable document format (PDF) reader, reveal that IUST-DeepFuzz reaches high coverage of the SUT in comparison with state-of-the-art tools such as learn&fuzz, AFL, Augmented-AFL and random fuzzing. We also observed that the simpler the deep learning model, the higher the code coverage. | In this section, we discuss related work on fuzzing and explain its existing problems concerning test data generation. According to their test data generation method, fuzzers are categorized as mutation-based or generation-based @cite_29 @cite_28 @cite_34 . Various techniques have been applied to both methods to improve them; most of these techniques focus on artificial intelligence algorithms. | {
"cite_N": [
"@cite_28",
"@cite_29",
"@cite_34"
],
"mid": [
"2798388185",
"348312514",
""
],
"abstract": [
"Abstract Fuzzing is an effective and widely used technique for finding security bugs and vulnerabilities in software. It inputs irregular test data into a target program to try to trigger a vulnerable condition in the program execution. Since the first random fuzzing system was constructed, fuzzing efficiency has been greatly improved by combination with several useful techniques, including dynamic symbolic execution, coverage guide, grammar representation, scheduling algorithms, dynamic taint analysis, static analysis and machine learning. In this paper, we will systematically review these techniques and their corresponding representative fuzzing systems. By introducing the principles, advantages and disadvantages of these techniques, we hope to provide researchers with a systematic and deeper understanding of fuzzing techniques and provide some references for this field.",
"Abstract : Fuzzing is an approach to software testing where the system being tested is bombarded with test cases generated by another program. The system is then monitored for any flaws exposed by the processing of this input. While the fundamental principles of fuzzing have not changed since the term was first coined, the complexity of the mechanisms used to drive the fuzzing process have undergone significant evolutionary advances. This paper is a survey of the history of fuzzing, which attempts to identify significant features of fuzzers and recent advances in their development, in order to discern the current state of the art in fuzzing technologies, and to extrapolate them into the future.",
""
]
} |
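At its core, the approach trains a character-level language model on raw file content and then samples it to emit new candidate test inputs. Below is a deliberately tiny PyTorch char-GRU showing only the train-and-sample loop on a toy PDF-like corpus; the paper's actual model, corpus, and data/meta-data separation are richer than this sketch, and the corpus string here is invented.

```python
import torch
import torch.nn as nn

corpus = "1 0 obj << /Type /Page >> endobj\n" * 50  # toy stand-in for PDF objects
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}

class CharLM(nn.Module):
    """Minimal character-level GRU language model."""
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)
    def forward(self, x, state=None):
        h, state = self.rnn(self.emb(x), state)
        return self.out(h), state

data = torch.tensor([stoi[c] for c in corpus])
model = CharLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
for _ in range(200):  # a handful of quick steps, just to show the loop
    i = torch.randint(0, len(data) - 65, (1,)).item()
    x, y = data[i:i + 64][None], data[i + 1:i + 65][None]
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.squeeze(0), y.squeeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample a new candidate test input, character by character.
idx, state, out = torch.tensor([[stoi["1"]]]), None, "1"
for _ in range(80):
    logits, state = model(idx, state)
    idx = torch.multinomial(logits[0, -1].softmax(-1), 1)[None]
    out += chars[idx.item()]
print(out)
```

Sampling with a temperature, or injecting deliberate noise at data (rather than meta-data) positions, is where a fuzzer like the one described would depart from this plain generation loop.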
1812.10037 | 2906637185 | Semantic parsing is the task of converting natural language utterances into machine interpretable meaning representations which can be executed against a real-world environment such as a database. Scaling semantic parsing to arbitrary domains faces two interrelated challenges: obtaining broad coverage training data effectively and cheaply; and developing a model that generalizes to compositional utterances and complex intentions. We address these challenges with a framework which allows us to elicit training data from a domain ontology and bootstrap a neural parser which recursively builds derivations of logical forms. In our framework meaning representations are described by sequences of natural language templates, where each template corresponds to a decomposed fragment of the underlying meaning representation. Although artificial, templates can be understood and paraphrased by humans to create natural utterances, resulting in parallel triples of utterances, meaning representations, and their decompositions. These allow us to train a neural semantic parser which learns to compose rules in deriving meaning representations. We crowdsource training data on six domains, covering both single-turn utterances which exhibit rich compositionality, and sequential utterances where a complex task is procedurally performed in steps. We then develop neural semantic parsers which perform such compositional tasks. In general, our approach allows us to deploy neural semantic parsers quickly and cheaply from a given domain ontology. | In reaction to these problems with the rule-based methods of the 1970s, the focus of semantic parsing research shifted to empirical or statistical methods, in which data and machine learning play an important role. Statistical semantic parsers typically consist of three key components: a grammar, a trainable model, and a parsing algorithm. The grammar defines the space of derivations from utterances to meaning representations, and the model and the parsing algorithm together find the most likely derivation. An example of an early statistical semantic parser is the CHILL system @cite_31 , which is based on inductive logic programming (ILP). The system uses ILP to learn control rules for a shift-reduce parser. To train and evaluate their system, zelle1996learning created the GeoQuery dataset, which contains 880 queries to a US geography database. These queries are paired with annotated meaning representations in Prolog. | {
"cite_N": [
"@cite_31"
],
"mid": [
"2163274265"
],
"abstract": [
"This paper presents recent work using the CHILL parser acquisition system to automate the construction of a natural-language interface for database queries. CHILL treats parser acquisition as the learning of search-control rules within a logic program representing a shift-reduce parser and uses techniques from Inductive Logic Programming to learn relational control knowledge. Starting with a general framework for constructing a suitable logical form, CHILL is able to train on a corpus comprising sentences paired with database queries and induce parsers that map subsequent sentences directly into executable queries. Experimental results with a complete database-query application for U.S. geography show that CHILL is able to learn parsers that outperform a preexisting, hand-crafted counterpart. These results demonstrate the ability of a corpus-based system to produce more than purely syntactic representations. They also provide direct evidence of the utility of an empirical approach at the level of a complete natural language application."
]
} |
1812.10037 | 2906637185 | Semantic parsing is the task of converting natural language utterances into machine interpretable meaning representations which can be executed against a real-world environment such as a database. Scaling semantic parsing to arbitrary domains faces two interrelated challenges: obtaining broad coverage training data effectively and cheaply; and developing a model that generalizes to compositional utterances and complex intentions. We address these challenges with a framework which allows us to elicit training data from a domain ontology and bootstrap a neural parser which recursively builds derivations of logical forms. In our framework meaning representations are described by sequences of natural language templates, where each template corresponds to a decomposed fragment of the underlying meaning representation. Although artificial, templates can be understood and paraphrased by humans to create natural utterances, resulting in parallel triples of utterances, meaning representations, and their decompositions. These allow us to train a neural semantic parser which learns to compose rules in deriving meaning representations. We crowdsource training data on six domains, covering both single-turn utterances which exhibit rich compositionality, and sequential utterances where a complex task is procedurally performed in steps. We then develop neural semantic parsers which perform such compositional tasks. In general, our approach allows us to deploy neural semantic parsers quickly and cheaply from a given domain ontology. | Until the early 2000s, semantic parsing research mainly focused on restricted domains. Besides GeoQuery, commonly used datasets are CLang for coaching advice to soccer agents @cite_41 and ATIS for air travel information service @cite_46 . At that time, statistical approaches for parsing with domain-specific context-free grammars were explored extensively. For example, kate2006using propose KRISP, which induces context-free grammar rules that generate meaning representations and uses kernel SVMs to score derivations. ge2005statistical propose SCISSOR, which employs an integrated statistical parser to produce a semantically augmented parse tree. Each non-terminal node in the tree has both a syntactic and a semantic label, from which the final meaning representation can be derived. The system proposed by wong2007learning learns synchronous context-free grammars that generate utterances and meaning representations. Parsing is achieved by finding the most probable derivation that leads to the utterance and recovering the meaning representation with synchronous rules. lu2008generative proposes a generative model of utterances and meaning representations. Similar to ge2005statistical , they define hybrid trees whose nodes include both words and meaning representation tokens. Training is performed with the EM algorithm. The model, especially the generative process, was extended by kim2010generative to learn from ambiguous supervision. | {
"cite_N": [
"@cite_41",
"@cite_46"
],
"mid": [
"2167932310",
"2091671846"
],
"abstract": [
"RoboCup Challenge offers a set of challenges for intelligent agent researchers using a friendly competition in a dynamic, real-time, multiagent domain. While RoboCup in general envisions longer range challenges over the next few decades, RoboCup Challenge presents three specific challenges for the next two years: (i) learning of individual agents and teams; (ii) multi-agent team planning and plan-execution in service of teamwork; and (iii) opponent modeling. RoboCup Challenge provides a novel opportunity for machine learning, planning, and multi-agent researchers it not only supplies a concrete domain to evaluate their techniques, but also challenges researchers to evolve these techniques to face key constraints fundamental to this domain: real-time, uncertainty, and teamwork.",
"Progress can be measured and encouraged via standards for comparison and evaluation. Though qualitative assessments can be useful in initial stages, quantifiable measures of systems under the same conditions are essential for comparing results and assessing claims. This paper will address the emerging standards for evaluation of spoken language systems."
]
} |