Dataset schema (one pipe-delimited record per paper; ⌀ marks a nullable column):

| column | dtype | observed range |
|---|---|---|
| paper_url | string | lengths 35–81 |
| arxiv_id | string ⌀ | lengths 6–35 |
| nips_id | float64 | |
| openreview_id | string ⌀ | lengths 9–93 |
| title | string ⌀ | lengths 1–1.02k |
| abstract | string ⌀ | lengths 0–56.5k |
| short_abstract | string ⌀ | lengths 0–1.95k |
| url_abs | string | lengths 16–996 |
| url_pdf | string ⌀ | lengths 16–996 |
| proceeding | string ⌀ | lengths 7–1.03k |
| authors | list | lengths 0–3.31k |
| tasks | list | lengths 0–147 |
| date | timestamp[ns] ⌀ | 1951-09-01 to 2222-12-22 |
| conference_url_abs | string ⌀ | lengths 16–199 |
| conference_url_pdf | string ⌀ | lengths 21–200 |
| conference | string ⌀ | lengths 2–47 |
| reproduces_paper | string | 22 classes |
| methods | list | lengths 0–7.5k |
https://paperswithcode.com/paper/netgan-generating-graphs-via-random-walks
|
1803.00816
| null | null |
NetGAN: Generating Graphs via Random Walks
|
We propose NetGAN - the first implicit generative model for graphs able to
mimic real-world networks. We pose the problem of graph generation as learning
the distribution of biased random walks over the input graph. The proposed
model is based on a stochastic neural network that generates discrete output
samples and is trained using the Wasserstein GAN objective. NetGAN is able to
produce graphs that exhibit well-known network patterns without explicitly
specifying them in the model definition. At the same time, our model exhibits
strong generalization properties, as highlighted by its competitive link
prediction performance, despite not being trained specifically for this task.
Being the first approach to combine both of these desirable properties, NetGAN
opens exciting avenues for further research.
|
NetGAN is able to produce graphs that exhibit well-known network patterns without explicitly specifying them in the model definition.
|
http://arxiv.org/abs/1803.00816v2
|
http://arxiv.org/pdf/1803.00816v2.pdf
|
ICML 2018 7
|
[
"Aleksandar Bojchevski",
"Oleksandr Shchur",
"Daniel Zügner",
"Stephan Günnemann"
] |
[
"Graph Generation",
"Link Prediction"
] | 2018-03-02T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2193
|
http://proceedings.mlr.press/v80/bojchevski18a/bojchevski18a.pdf
|
netgan-generating-graphs-via-random-walks-1
| null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
  {
    "code_snippet_url": "",
    "description": "A **Generative Adversarial Network**, or **GAN**, trains two networks in opposition: a generator that maps random noise to synthetic samples, and a discriminator that learns to distinguish real samples from generated ones. The two are optimized as a minimax game until the generated samples become difficult to tell apart from the real data.",
    "full_name": "Generative Adversarial Network",
    "introduced_year": 2014,
    "main_collection": {
      "area": "Computer Vision",
      "description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
      "name": "Generative Models",
      "parent": null
    },
    "name": "GAN",
    "source_title": "Generative Adversarial Networks",
    "source_url": "https://arxiv.org/abs/1406.2661v1"
  }
] |
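The abstract above frames graph generation as learning a distribution over biased random walks. As a rough illustration of the data side of that setup, here is a minimal sketch of sampling (unbiased, for simplicity) random walks from an adjacency matrix; `sample_walks` and its parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def sample_walks(adj: np.ndarray, num_walks: int, walk_len: int, rng=None):
    """Sample simple random walks from a graph given its adjacency matrix.

    In NetGAN-style training, walks like these (here unbiased, for brevity)
    would serve as the 'real' samples seen by the discriminator.
    """
    rng = np.random.default_rng(rng)
    n = adj.shape[0]
    walks = np.empty((num_walks, walk_len), dtype=np.int64)
    for w in range(num_walks):
        node = rng.integers(n)
        for t in range(walk_len):
            walks[w, t] = node
            neighbors = np.flatnonzero(adj[node])
            if neighbors.size == 0:      # dangling node: restart the walk
                node = rng.integers(n)
            else:
                node = rng.choice(neighbors)
    return walks

# Tiny example: a 4-node ring graph
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
print(sample_walks(adj, num_walks=3, walk_len=5, rng=0))
```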
https://paperswithcode.com/paper/learning-convex-bounds-for-linear-quadratic
|
1806.00319
| null | null |
Learning convex bounds for linear quadratic control policy synthesis
|
Learning to make decisions from observed data in dynamic environments remains
a problem of fundamental importance in a number of fields, from artificial
intelligence and robotics, to medicine and finance. This paper concerns the
problem of learning control policies for unknown linear dynamical systems so as
to maximize a quadratic reward function. We present a method to optimize the
expected value of the reward over the posterior distribution of the unknown
system parameters, given data. The algorithm involves sequential convex
programing, and enjoys reliable local convergence and robust stability
guarantees. Numerical simulations and stabilization of a real-world inverted
pendulum are used to demonstrate the approach, with strong performance and
robustness properties observed in both.
| null |
http://arxiv.org/abs/1806.00319v1
|
http://arxiv.org/pdf/1806.00319v1.pdf
|
NeurIPS 2018 12
|
[
"Jack Umenberger",
"Thomas B. Schön"
] |
[] | 2018-06-01T00:00:00 |
http://papers.nips.cc/paper/8165-learning-convex-bounds-for-linear-quadratic-control-policy-synthesis
|
http://papers.nips.cc/paper/8165-learning-convex-bounds-for-linear-quadratic-control-policy-synthesis.pdf
|
learning-convex-bounds-for-linear-quadratic-1
| null |
[] |
https://paperswithcode.com/paper/multi-layer-kernel-ridge-regression-for-one
|
1805.07808
| null | null |
Multi-layer Kernel Ridge Regression for One-class Classification
|
In this paper, a multi-layer architecture (in a hierarchical fashion) built by
stacking various Kernel Ridge Regression (KRR) based Auto-Encoders for one-class
classification is proposed and is referred to as MKOC. MKOC has many layers of
Auto-Encoders to project the input features into a new feature space, and the last
layer is a regression-based one-class classifier. The Auto-Encoders use an
unsupervised approach to learning and the final layer uses a semi-supervised
(trained with only positive samples) approach. The proposed MKOC is
experimentally evaluated on 15 publicly available benchmark datasets.
Experimental results verify the effectiveness of the proposed approach over 11
existing state-of-the-art kernel-based one-class classifiers. The Friedman test is
also performed to verify the statistical significance of the claim of the
superiority of the proposed one-class classifiers over the existing
state-of-the-art methods.
| null |
http://arxiv.org/abs/1805.07808v2
|
http://arxiv.org/pdf/1805.07808v2.pdf
| null |
[
"Chandan Gautam",
"Aruna Tiwari",
"Sundaram Suresh",
"Alexandros Iosifidis"
] |
[
"Classification",
"General Classification",
"One-Class Classification",
"One-class classifier",
"regression"
] | 2018-05-20T00:00:00 | null | null | null | null |
[] |
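MKOC stacks Kernel Ridge Regression auto-encoders, and KRR itself has a closed-form solution. Below is a minimal sketch of one KRR layer with an RBF kernel, used auto-encoder style by regressing the inputs onto themselves; this is a generic illustration under that reading, not the authors' MKOC code.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between the row vectors of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def krr_fit(X, Y, lam=1e-2, gamma=1.0):
    """Kernel ridge regression: alpha = (K + lam * I)^{-1} Y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), Y)

def krr_predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# A KRR 'auto-encoder' layer regresses the inputs onto themselves
X = np.random.default_rng(0).normal(size=(20, 5))
alpha = krr_fit(X, X)                 # targets = inputs
H = krr_predict(X, alpha, X)          # reconstruction / new representation
print(np.abs(H - X).max())            # small reconstruction error
```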
https://paperswithcode.com/paper/implicit-reparameterization-gradients
|
1805.08498
| null | null |
Implicit Reparameterization Gradients
|
By providing a simple and efficient way of computing low-variance gradients
of continuous random variables, the reparameterization trick has become the
technique of choice for training a variety of latent variable models. However,
it is not applicable to a number of important continuous distributions. We
introduce an alternative approach to computing reparameterization gradients
based on implicit differentiation and demonstrate its broader applicability by
applying it to Gamma, Beta, Dirichlet, and von Mises distributions, which
cannot be used with the classic reparameterization trick. Our experiments show
that the proposed approach is faster and more accurate than the existing
gradient estimators for these distributions.
|
By providing a simple and efficient way of computing low-variance gradients of continuous random variables, the reparameterization trick has become the technique of choice for training a variety of latent variable models.
|
http://arxiv.org/abs/1805.08498v4
|
http://arxiv.org/pdf/1805.08498v4.pdf
|
NeurIPS 2018 12
|
[
"Michael Figurnov",
"Shakir Mohamed",
"andriy mnih"
] |
[] | 2018-05-22T00:00:00 |
http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients
|
http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf
|
implicit-reparameterization-gradients-1
| null |
[] |
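The implicit-differentiation idea in the abstract rests on the identity $dz/d\theta = -(\partial F/\partial \theta)\,/\,q(z; \theta)$ for a sample $z$ with CDF $F$ and density $q$. Here is a numeric sanity check on the Gaussian, where the answer is known to be $dz/d\mu = 1$; this checks the identity only and is not the paper's estimator.

```python
import numpy as np
from scipy.stats import norm

def implicit_grad_mu(z, mu, sigma):
    """Implicit reparameterization gradient dz/dmu for z ~ N(mu, sigma^2).

    Uses dz/dtheta = -(dF/dtheta) / q(z; theta), where F is the CDF and q
    the density. For the Gaussian, dF/dmu = -pdf(z), so dz/dmu = 1, matching
    the explicit reparameterization z = mu + sigma * eps.
    """
    dF_dmu = -norm.pdf(z, loc=mu, scale=sigma)   # d/dmu of the CDF
    q = norm.pdf(z, loc=mu, scale=sigma)          # density q(z; theta)
    return -dF_dmu / q

print(implicit_grad_mu(z=0.3, mu=0.0, sigma=1.0))  # -> 1.0
```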
https://paperswithcode.com/paper/natural-language-generation-for-electronic
|
1806.01353
| null | null |
Natural Language Generation for Electronic Health Records
|
A variety of methods exist for generating synthetic electronic health
records (EHRs), but they are not capable of generating unstructured text, such as
emergency department (ED) chief complaints, history of present illness or
progress notes. Here, we use the encoder-decoder model, a deep learning
algorithm that features in many contemporary machine translation systems, to
generate synthetic chief complaints from discrete variables in EHRs, like age
group, gender, and discharge diagnosis. After being trained end-to-end on
authentic records, the model can generate realistic chief complaint text that
preserves much of the epidemiological information in the original data. As a
side effect of the model's optimization goal, these synthetic chief complaints
are also free of relatively uncommon abbreviations and misspellings, and they
include none of the personally-identifiable information (PII) that was in the
training data, suggesting it may be used to support the de-identification of
text in EHRs. When combined with algorithms like generative adversarial
networks (GANs), our model could be used to generate fully-synthetic EHRs,
facilitating data sharing between healthcare providers and researchers and
improving our ability to develop machine learning methods tailored to the
information in healthcare data.
|
A variety of methods exist for generating synthetic electronic health records (EHRs), but they are not capable of generating unstructured text, such as emergency department (ED) chief complaints, history of present illness or progress notes.
|
http://arxiv.org/abs/1806.01353v1
|
http://arxiv.org/pdf/1806.01353v1.pdf
| null |
[
"Scott Lee"
] |
[
"Decoder",
"De-identification",
"Machine Translation",
"Text Generation",
"Translation"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/artificial-immune-systems-can-find
|
1806.00300
| null | null |
Artificial Immune Systems Can Find Arbitrarily Good Approximations for the NP-Hard Number Partitioning Problem
|
Typical artificial immune system (AIS) operators such as hypermutations with
mutation potential and ageing make it possible to efficiently overcome local optima from
which evolutionary algorithms (EAs) struggle to escape. Such behaviour has been
shown for artificial example functions constructed especially to show
difficulties that EAs may encounter during the optimisation process.
However, no evidence is available indicating that these two
operators behave similarly in more realistic problems. In this
paper we perform an analysis for the standard NP-hard Partition problem from
combinatorial optimisation and rigorously show that hypermutations and ageing
allow AISs to efficiently escape from local optima where standard EAs require
exponential time. As a result we prove that while EAs and random local search
(RLS) may get trapped on 4/3 approximations, AISs find arbitrarily good
approximate solutions of ratio $(1+\epsilon)$ within
$n(\epsilon^{-(2/\epsilon)-1})(1-\epsilon)^{-2} e^{3} 2^{2/\epsilon} + 2n^3 2^{2/\epsilon} + 2n^3$
function evaluations in expectation. This expectation is polynomial in
the problem size and exponential only in $1/\epsilon$.
| null |
http://arxiv.org/abs/1806.00300v2
|
http://arxiv.org/pdf/1806.00300v2.pdf
| null |
[
"Dogan Corus",
"Pietro S. Oliveto",
"Donya Yazdani"
] |
[
"Evolutionary Algorithms"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fast-artificial-immune-systems
|
1806.00299
| null | null |
Fast Artificial Immune Systems
|
Various studies have shown that characteristic Artificial Immune System (AIS)
operators such as hypermutations and ageing can be very efficient at escaping
local optima of multimodal optimisation problems. However, this efficiency
comes at the expense of considerably slower runtimes during the exploitation
phase compared to standard evolutionary algorithms. We propose modifications to
the traditional `hypermutations with mutation potential' (HMP) that allow them
to be efficient at exploitation while maintaining their effective
explorative characteristics. Rather than deterministically evaluating fitness
after each bitflip of a hypermutation, we sample the fitness function
stochastically with a `parabolic' distribution which allows the `stop at first
constructive mutation' (FCM) variant of HMP to reduce the linear amount of
wasted function evaluations when no improvement is found to a constant. By
returning the best sampled solution during the hypermutation, rather than the
first constructive mutation, we then turn the extremely inefficient HMP
operator without FCM, into a very effective operator for the standard Opt-IA
AIS using hypermutation, cloning and ageing. We rigorously prove the
effectiveness of the two proposed operators by analysing them on all problems
where the performance of HMP is rigorously understood in the literature.
| null |
http://arxiv.org/abs/1806.00299v1
|
http://arxiv.org/pdf/1806.00299v1.pdf
| null |
[
"Dogan Corus",
"Pietro S. Oliveto",
"Donya Yazdani"
] |
[
"Evolutionary Algorithms"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
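The `stop at first constructive mutation' (FCM) rule described above is easy to state in code: flip bits one at a time, evaluating the fitness after each flip, and stop at the first improvement. A minimal sketch on bit strings follows; it illustrates the operator only, not the full Opt-IA algorithm analyzed in the paper.

```python
import random

def hypermutation_fcm(x, fitness, rng=None):
    """Static hypermutation with 'stop at first constructive mutation' (FCM):
    flip up to n distinct bits in random order, evaluating the fitness after
    each flip, and stop as soon as an improvement over the parent is found."""
    rng = rng or random.Random()
    base = fitness(x)
    y = list(x)
    order = list(range(len(x)))
    rng.shuffle(order)
    for i in order:
        y[i] = 1 - y[i]            # flip one more bit
        if fitness(y) > base:      # first constructive mutation: return it
            return y
    return y                       # no improvement found: all bits flipped

onemax = lambda bits: sum(bits)
print(hypermutation_fcm([0, 1, 0, 0], onemax, random.Random(0)))
```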
https://paperswithcode.com/paper/automatic-detection-of-neurons-in-neun
|
1806.00292
| null | null |
Automatic Detection of Neurons in NeuN-stained Histological Images of Human Brain
|
In this paper, we present a novel use of an anisotropic diffusion model for
automatic detection of neurons in histological sections of the adult human
brain cortex. We use a partial differential equation model to process high
resolution images to acquire locations of neuronal bodies. We also present a
novel approach in model training and evaluation that considers variability
among the human experts, addressing the issue of existence and correctness of
the golden standard for neuron and cell counting, used in most of relevant
papers. Our method, trained on dataset manually labeled by three experts, has
correctly distinguished over 95% of neuron bodies in test data, doing so in
time much shorter than other comparable methods.
|
In this paper, we present a novel use of an anisotropic diffusion model for automatic detection of neurons in histological sections of the adult human brain cortex.
|
http://arxiv.org/abs/1806.00292v1
|
http://arxiv.org/pdf/1806.00292v1.pdf
| null |
[
"Andrija Štajduhar",
"Domagoj Džaja",
"Miloš Judaš",
"Sven Lončarić"
] |
[] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
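The paper's partial differential equation model is in the family of anisotropic diffusion; as a generic illustration (not the authors' exact model), here is one explicit Perona-Malik diffusion step, which smooths an image while damping diffusion across strong edges.

```python
import numpy as np

def perona_malik_step(img, kappa=0.1, lam=0.2):
    """One explicit step of Perona-Malik anisotropic diffusion: smooth the
    image while preserving edges by down-weighting large gradients."""
    # differences to the four neighbours (periodic borders, for brevity)
    dN = np.roll(img, 1, axis=0) - img
    dS = np.roll(img, -1, axis=0) - img
    dE = np.roll(img, -1, axis=1) - img
    dW = np.roll(img, 1, axis=1) - img
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductance
    return img + lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)

img = np.random.default_rng(0).random((64, 64))
for _ in range(10):
    img = perona_malik_step(img)
```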
https://paperswithcode.com/paper/musical-instrument-separation-on-shift
|
1806.00273
| null | null |
Sparse Pursuit and Dictionary Learning for Blind Source Separation in Polyphonic Music Recordings
|
We propose an algorithm for the blind separation of single-channel audio signals. It is based on a parametric model that describes the spectral properties of the sounds of musical instruments independently of pitch. We develop a novel sparse pursuit algorithm that can match the discrete frequency spectra from the recorded signal with the continuous spectra delivered by the model. We first use this algorithm to convert an STFT spectrogram from the recording into a novel form of log-frequency spectrogram whose resolution exceeds that of the mel spectrogram. We then make use of the pitch-invariant properties of that representation in order to identify the sounds of the instruments via the same sparse pursuit method. As the model parameters which characterize the musical instruments are not known beforehand, we train a dictionary that contains them, using a modified version of Adam. Applying the algorithm on various audio samples, we find that it is capable of producing high-quality separation results when the model assumptions are satisfied and the instruments are clearly distinguishable, but combinations of instruments with similar spectral characteristics pose a conceptual difficulty. While a key feature of the model is that it explicitly models inharmonicity, its presence can also still impede performance of the sparse pursuit algorithm. In general, due to its pitch-invariance, our method is especially suitable for dealing with spectra from acoustic instruments, requiring only a minimal number of hyperparameters to be preset. Additionally, we demonstrate that the dictionary that is constructed for one recording can be applied to a different recording with similar instruments without additional training.
|
In general, due to its pitch-invariance, our method is especially suitable for dealing with spectra from acoustic instruments, requiring only a minimal number of hyperparameters to be preset.
|
https://arxiv.org/abs/1806.00273v5
|
https://arxiv.org/pdf/1806.00273v5.pdf
| null |
[
"Sören Schulze",
"Emily J. King"
] |
[
"blind source separation",
"Dictionary Learning"
] | 2018-06-01T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
}
] |
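The update equations in the Adam description above translate directly into code. Here is a compact NumPy rendering of exactly those equations, applied to a toy quadratic objective:

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update, following the equations in the description above."""
    m = beta1 * m + (1 - beta1) * g            # first-moment estimate
    v = beta2 * v + (1 - beta2) * g ** 2       # second-moment estimate
    m_hat = m / (1 - beta1 ** t)               # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = w^2 for a few steps (gradient is 2w)
w, m, v = np.array(2.0), 0.0, 0.0
for t in range(1, 201):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=0.05)
print(float(w))  # approaches 0
```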
https://paperswithcode.com/paper/learning-neural-random-fields-with-inclusive
|
1806.00271
| null | null |
Generative Modeling by Inclusive Neural Random Fields with Applications in Image Generation and Anomaly Detection
|
Neural random fields (NRFs), referring to a class of generative models that use neural networks to implement potential functions in random fields (a.k.a. energy-based models), are not new but receive less attention with slow progress. Different from various directed graphical models such as generative adversarial networks (GANs), NRFs provide an interesting family of undirected graphical models for generative modeling. In this paper we propose a new approach, the inclusive-NRF approach, to learning NRFs for continuous data (e.g. images), by introducing inclusive-divergence minimized auxiliary generators and developing stochastic gradient sampling in an augmented space. Based on the new approach, specific inclusive-NRF models are developed and thoroughly evaluated in two important generative modeling applications - image generation and anomaly detection. The proposed models consistently improve over state-of-the-art results in both applications. Remarkably, in addition to superior sample generation, one additional benefit of our inclusive-NRF approach is that, unlike GANs, it can directly provide (unnormalized) density estimate for sample evaluation. With these contributions and results, this paper significantly advances the learning and applications of NRFs to a new level, both theoretically and empirically, which have never been obtained before.
|
With these contributions and results, this paper significantly advances the learning and applications of NRFs to a new level, both theoretically and empirically, which have never been obtained before.
|
https://arxiv.org/abs/1806.00271v5
|
https://arxiv.org/pdf/1806.00271v5.pdf
| null |
[
"Yunfu Song",
"Zhijian Ou"
] |
[
"Anomaly Detection",
"Image Generation"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/active-learning-for-convolutional-neural
|
1708.00489
| null |
H1aIuk-RW
|
Active Learning for Convolutional Neural Networks: A Core-Set Approach
|
Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (i.e. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, i.e. choosing a set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin.
|
One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (i.e. active learning).
|
http://arxiv.org/abs/1708.00489v4
|
http://arxiv.org/pdf/1708.00489v4.pdf
|
ICLR 2018 1
|
[
"Ozan Sener",
"Silvio Savarese"
] |
[
"Active Learning",
"image-classification",
"Image Classification"
] | 2017-08-01T00:00:00 |
https://openreview.net/forum?id=H1aIuk-RW
|
https://openreview.net/pdf?id=H1aIuk-RW
|
active-learning-for-convolutional-neural-1
| null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Coresets",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Clustering** methods cluster a dataset so that similar datapoints are located in the same group. Below you can find a continuously updating list of clustering methods.",
"name": "Clustering",
"parent": null
},
"name": "Coresets",
"source_title": "Active Learning for Convolutional Neural Networks: A Core-Set Approach",
"source_url": "http://arxiv.org/abs/1708.00489v4"
}
] |
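Core-set selection as described above is commonly realized as greedy k-center over feature embeddings: repeatedly pick the point farthest from the already-selected set. A minimal sketch under that assumption (not the authors' released code):

```python
import numpy as np

def k_center_greedy(features: np.ndarray, budget: int, seed: int = 0):
    """Greedy k-center selection over feature embeddings.

    Repeatedly picks the point farthest from the current selection -- the
    classic 2-approximation for the k-center objective that underlies the
    core-set view of active learning.
    """
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    selected = [int(rng.integers(n))]
    # distance of every point to its nearest selected center
    dists = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(budget - 1):
        nxt = int(np.argmax(dists))
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return selected

X = np.random.default_rng(1).normal(size=(500, 16))
print(k_center_greedy(X, budget=10))
```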
https://paperswithcode.com/paper/learn-the-new-keep-the-old-extending
|
1806.00265
| null | null |
Learn the new, keep the old: Extending pretrained models with new anatomy and images
|
Deep learning has been widely accepted as a promising solution for medical
image segmentation, given a sufficiently large representative dataset of images
with corresponding annotations. With ever increasing amounts of annotated
medical datasets, it is infeasible to train a learning method always with all
data from scratch. This is also doomed to hit computational limits, e.g.,
memory or runtime feasible for training. Incremental learning can be a
potential solution, where new information (images or anatomy) is introduced
iteratively. Nevertheless, for the preservation of the collective information,
it is essential to keep some "important" (i.e. representative) images and
annotations from the past, while adding new information. In this paper, we
introduce a framework for applying incremental learning for segmentation and
propose novel methods for selecting representative data therein. We
comparatively evaluate our methods in different scenarios using MR images and
validate the increased learning capacity when using our methods.
| null |
http://arxiv.org/abs/1806.00265v1
|
http://arxiv.org/pdf/1806.00265v1.pdf
| null |
[
"Firat Ozdemir",
"Philipp Fuernstahl",
"Orcun Goksel"
] |
[
"Anatomy",
"Image Segmentation",
"Incremental Learning",
"Medical Image Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/combining-pyramid-pooling-and-attention
|
1806.00264
| null | null |
Combining Pyramid Pooling and Attention Mechanism for Pelvic MR Image Semantic Segmentation
|
One of the time-consuming routine work for a radiologist is to discern
anatomical structures from tomographic images. For assisting radiologists, this
paper develops an automatic segmentation method for pelvic magnetic resonance
(MR) images. The task has three major challenges: 1) A pelvic organ can have
various sizes and shapes depending on the axial image, which requires local
contexts to segment correctly. 2) Different organs often have quite similar
appearances in MR images, which requires global context to segment. 3) The
number of available annotated images is too small to use the latest
segmentation algorithms. To address these challenges, we propose a novel
convolutional neural network called Attention-Pyramid network (APNet) that
effectively exploits both local and global contexts, in addition to a
data-augmentation technique that is particularly effective for MR images. In
order to evaluate our method, we construct a fine-grained (50 pelvic organs) MR
image segmentation dataset, and experimentally confirm the superior performance
of our techniques over the state-of-the-art image segmentation methods.
| null |
http://arxiv.org/abs/1806.00264v2
|
http://arxiv.org/pdf/1806.00264v2.pdf
| null |
[
"Ting-Ting Liang",
"Satoshi Tsutsui",
"Liangcai Gao",
"Jing-Jing Lu",
"Mengyan Sun"
] |
[
"Data Augmentation",
"Image Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adaptation-to-criticality-through
|
1712.05284
| null | null |
Adaptation to criticality through organizational invariance in embodied agents
|
Many biological and cognitive systems do not operate deep within one or other
regime of activity. Instead, they are poised at critical points located at
phase transitions in their parameter space. The pervasiveness of criticality
suggests that there may be general principles inducing this behaviour, yet
there is no well-founded theory for understanding how criticality is generated
at a wide span of levels and contexts. In order to explore how criticality
might emerge from general adaptive mechanisms, we propose a simple learning
rule that maintains an internal organizational structure from a specific family
of systems at criticality. We implement the mechanism in artificial embodied
agents controlled by a neural network maintaining a correlation structure
randomly sampled from an Ising model at critical temperature. Agents are
evaluated in two classical reinforcement learning scenarios: the Mountain Car
and the Acrobot double pendulum. In both cases the neural controller appears to
reach a point of criticality, which coincides with a transition point between
two regimes of the agent's behaviour. These results suggest that adaptation to
criticality could be used as a general adaptive mechanism in some
circumstances, providing an alternative explanation for the pervasive presence
of criticality in biological and cognitive systems.
|
In order to explore how criticality might emerge from general adaptive mechanisms, we propose a simple learning rule that maintains an internal organizational structure from a specific family of systems at criticality.
|
http://arxiv.org/abs/1712.05284v3
|
http://arxiv.org/pdf/1712.05284v3.pdf
| null |
[
"Miguel Aguilera",
"Manuel G. Bedia"
] |
[
"Acrobot",
"Reinforcement Learning"
] | 2017-12-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-survey-of-domain-adaptation-for-neural
|
1806.00258
| null | null |
A Survey of Domain Adaptation for Neural Machine Translation
|
Neural machine translation (NMT) is a deep learning based approach for
machine translation, which yields the state-of-the-art translation performance
in scenarios where large-scale parallel corpora are available. Although
high-quality, domain-specific translation is crucial in the real world,
domain-specific corpora are usually scarce or nonexistent, and thus vanilla NMT
performs poorly in such scenarios. Domain adaptation, which leverages both
out-of-domain parallel corpora and monolingual corpora for in-domain
translation, is very important for domain-specific translation. In this paper,
we give a comprehensive survey of the state-of-the-art domain adaptation
techniques for NMT.
| null |
http://arxiv.org/abs/1806.00258v1
|
http://arxiv.org/pdf/1806.00258v1.pdf
|
COLING 2018 8
|
[
"Chenhui Chu",
"Rui Wang"
] |
[
"Domain Adaptation",
"Machine Translation",
"NMT",
"Survey",
"Translation"
] | 2018-06-01T00:00:00 |
https://aclanthology.org/C18-1111
|
https://aclanthology.org/C18-1111.pdf
|
a-survey-of-domain-adaptation-for-neural-1
| null |
[] |
https://paperswithcode.com/paper/denser-deep-evolutionary-network-structured
|
1801.01563
| null | null |
DENSER: Deep Evolutionary Network Structured Representation
|
Deep Evolutionary Network Structured Representation (DENSER) is a novel
approach to automatically design Artificial Neural Networks (ANNs) using
Evolutionary Computation. The algorithm not only searches for the best network
topology (e.g., number of layers, type of layers), but also tunes
hyper-parameters, such as, learning parameters or data augmentation parameters.
The automatic design is achieved using a representation with two distinct
levels, where the outer level encodes the general structure of the network,
i.e., the sequence of layers, and the inner level encodes the parameters
associated with each layer. The allowed layers and range of the
hyper-parameters values are defined by means of a human-readable Context-Free
Grammar. DENSER was used to evolve ANNs for CIFAR-10, obtaining an average test
accuracy of 94.13%. The networks evolved for the CIFAR-10 are tested on the
MNIST, Fashion-MNIST, and CIFAR-100; the results are highly competitive, and on
the CIFAR-100 we report a test accuracy of 78.75%. To the best of our
knowledge, our CIFAR-100 results are the highest performing models generated by
methods that aim at the automatic design of Convolutional Neural Networks
(CNNs), and are amongst the best for manually designed and fine-tuned CNNs.
|
Deep Evolutionary Network Structured Representation (DENSER) is a novel approach to automatically design Artificial Neural Networks (ANNs) using Evolutionary Computation.
|
http://arxiv.org/abs/1801.01563v3
|
http://arxiv.org/pdf/1801.01563v3.pdf
| null |
[
"Filipe Assunção",
"Nuno Lourenço",
"Penousal Machado",
"Bernardete Ribeiro"
] |
[
"Data Augmentation",
"Image Classification"
] | 2018-01-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fighting-fake-news-image-splice-detection-via
|
1805.04096
| null | null |
Fighting Fake News: Image Splice Detection via Learned Self-Consistency
|
Advances in photo editing and manipulation tools have made it significantly
easier to create fake imagery. Learning to detect such manipulations, however,
remains a challenging problem due to the lack of sufficient amounts of
manipulated training data. In this paper, we propose a learning algorithm for
detecting visual image manipulations that is trained only using a large dataset
of real photographs. The algorithm uses the automatically recorded photo EXIF
metadata as supervisory signal for training a model to determine whether an
image is self-consistent -- that is, whether its content could have been
produced by a single imaging pipeline. We apply this self-consistency model to
the task of detecting and localizing image splices. The proposed method obtains
state-of-the-art performance on several image forensics benchmarks, despite
never seeing any manipulated images at training. That said, it is merely a step
in the long quest for a truly general purpose visual forensics tool.
|
In this paper, we propose a learning algorithm for detecting visual image manipulations that is trained only using a large dataset of real photographs.
|
http://arxiv.org/abs/1805.04096v3
|
http://arxiv.org/pdf/1805.04096v3.pdf
|
ECCV 2018 9
|
[
"Minyoung Huh",
"Andrew Liu",
"Andrew Owens",
"Alexei A. Efros"
] |
[
"Image Forensics"
] | 2018-05-10T00:00:00 |
http://openaccess.thecvf.com/content_ECCV_2018/html/Jacob_Huh_Fighting_Fake_News_ECCV_2018_paper.html
|
http://openaccess.thecvf.com/content_ECCV_2018/papers/Jacob_Huh_Fighting_Fake_News_ECCV_2018_paper.pdf
|
fighting-fake-news-image-splice-detection-via-1
| null |
[] |
https://paperswithcode.com/paper/tapas-train-less-accuracy-predictor-for
|
1806.00250
| null | null |
TAPAS: Train-less Accuracy Predictor for Architecture Search
|
In recent years an increasing number of researchers and practitioners have
been suggesting algorithms for large-scale neural network architecture search:
genetic algorithms, reinforcement learning, learning curve extrapolation, and
accuracy predictors. None of them, however, has demonstrated high performance
without training new experiments in the presence of unseen datasets. We propose
a new deep neural network accuracy predictor, that estimates in fractions of a
second classification performance for unseen input datasets, without training.
In contrast to previously proposed approaches, our prediction is not only
calibrated on the topological network information, but also on the
characterization of the dataset-difficulty which allows us to re-tune the
prediction without any training. Our predictor achieves a performance which
exceeds 100 networks per second on a single GPU, thus creating the opportunity
to perform large-scale architecture search within a few minutes. We present
results of two searches performed in 400 seconds on a single GPU. Our best
discovered networks reach 93.67% accuracy for CIFAR-10 and 81.01% for
CIFAR-100, verified by training. These networks are performance competitive
with other automatically discovered state-of-the-art networks however we only
needed a small fraction of the time to solution and computational resources.
| null |
http://arxiv.org/abs/1806.00250v1
|
http://arxiv.org/pdf/1806.00250v1.pdf
| null |
[
"R. Istrate",
"F. Scheidegger",
"G. Mariani",
"D. Nikolopoulos",
"C. Bekas",
"A. C. I. Malossi"
] |
[
"GPU",
"Neural Architecture Search",
"Reinforcement Learning"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-predictive-models-in-interactive-music
|
1801.10492
| null | null |
Deep Predictive Models in Interactive Music
|
Musical performance requires prediction to operate instruments, to perform in
groups and to improvise. In this paper, we investigate how a number of digital
musical instruments (DMIs), including two of our own, have applied predictive
machine learning models that assist users by predicting unknown states of
musical processes. We characterise these predictions as focussed within a
musical instrument, at the level of individual performers, and between members
of an ensemble. These models can connect to existing frameworks for DMI design
and have parallels in the cognitive predictions of human musicians.
We discuss how recent advances in deep learning highlight the role of
prediction in DMIs, by allowing data-driven predictive models with a long
memory of past states. The systems we review are used to motivate musical
use-cases where prediction is a necessary component, and to highlight a number
of challenges for DMI designers seeking to apply deep predictive models in
interactive music systems of the future.
| null |
http://arxiv.org/abs/1801.10492v3
|
http://arxiv.org/pdf/1801.10492v3.pdf
| null |
[
"Charles P. Martin",
"Kai Olav Ellefsen",
"Jim Torresen"
] |
[
"Prediction"
] | 2018-01-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/generative-adversarial-networks-for-1
|
1806.00236
| null | null |
Generative Adversarial Networks for Unsupervised Object Co-localization
|
This paper introduces a novel approach for unsupervised object
co-localization using Generative Adversarial Networks (GANs). GAN is a powerful
tool that can implicitly learn unknown data distributions in an unsupervised
manner. From the observation that GAN discriminator is highly influenced by
pixels where objects appear, we analyze the internal layers of discriminator
and visualize the activated pixels. Our important finding is that high image
diversity of GAN, which is a main goal in GAN research, is ironically
disadvantageous for object localization, because such discriminators focus not
only on the target object, but also on the various objects, such as background
objects. Based on extensive evaluations and experimental studies, we show the
image diversity and localization performance have a negative correlation. In
addition, our approach achieves meaningful accuracy for unsupervised object
co-localization using publicly available benchmark datasets, even comparable to
a state-of-the-art weakly-supervised approach.
| null |
http://arxiv.org/abs/1806.00236v2
|
http://arxiv.org/pdf/1806.00236v2.pdf
| null |
[
"Junsuk Choe",
"Joo Hyun Park",
"Hyunjung Shim"
] |
[
"Diversity",
"Object",
"Object Localization"
] | 2018-06-01T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
  {
    "code_snippet_url": "",
    "description": "A **Generative Adversarial Network**, or **GAN**, trains two networks in opposition: a generator that maps random noise to synthetic samples, and a discriminator that learns to distinguish real samples from generated ones. The two are optimized as a minimax game until the generated samples become difficult to tell apart from the real data.",
    "full_name": "Generative Adversarial Network",
    "introduced_year": 2014,
    "main_collection": {
      "area": "Computer Vision",
      "description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
      "name": "Generative Models",
      "parent": null
    },
    "name": "GAN",
    "source_title": "Generative Adversarial Networks",
    "source_url": "https://arxiv.org/abs/1406.2661v1"
  }
] |
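The convolution entry above describes the operation procedurally: slide a kernel over the input, multiply element-wise, and sum into each output cell. Here is a direct (naive) NumPy rendering of that description with 'valid' padding; strictly speaking this is cross-correlation, since deep-learning convolutions conventionally skip the kernel flip.

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Direct 2-D 'convolution' (valid padding): slide the kernel over the
    input, multiply element-wise, and sum the products into each output cell."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16.0).reshape(4, 4)
edge = np.array([[1.0, -1.0]])       # simple horizontal-gradient kernel
print(conv2d_valid(img, edge))
```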
https://paperswithcode.com/paper/pwls-ultra-an-efficient-clustering-and
|
1703.09165
| null | null |
PWLS-ULTRA: An Efficient Clustering and Learning-Based Approach for Low-Dose 3D CT Image Reconstruction
|
The development of computed tomography (CT) image reconstruction methods that
significantly reduce patient radiation exposure while maintaining high image
quality is an important area of research in low-dose CT (LDCT) imaging. We
propose a new penalized weighted least squares (PWLS) reconstruction method
that exploits regularization based on an efficient Union of Learned TRAnsforms
(PWLS-ULTRA). The union of square transforms is pre-learned from numerous image
patches extracted from a dataset of CT images or volumes. The proposed
PWLS-based cost function is optimized by alternating between a CT image
reconstruction step, and a sparse coding and clustering step. The CT image
reconstruction step is accelerated by a relaxed linearized augmented Lagrangian
method with ordered-subsets that reduces the number of forward and back
projections. Simulations with 2-D and 3-D axial CT scans of the extended
cardiac-torso phantom and 3D helical chest and abdomen scans show that for both
normal-dose and low-dose levels, the proposed method significantly improves the
quality of reconstructed images compared to PWLS reconstruction with a
nonadaptive edge-preserving regularizer (PWLS-EP). PWLS with regularization
based on a union of learned transforms leads to better image reconstructions
than using a single learned square transform. We also incorporate patch-based
weights in PWLS-ULTRA that enhance image quality and help improve image
resolution uniformity. The proposed approach achieves comparable or better
image quality compared to learned overcomplete synthesis dictionaries, but
importantly, is much faster (computationally more efficient).
|
PWLS with regularization based on a union of learned transforms leads to better image reconstructions than using a single learned square transform.
|
http://arxiv.org/abs/1703.09165v3
|
http://arxiv.org/pdf/1703.09165v3.pdf
| null |
[
"Xuehang Zheng",
"Saiprasad Ravishankar",
"Yong Long",
"Jeffrey A. Fessler"
] |
[
"Clustering",
"Computed Tomography (CT)",
"Image Reconstruction"
] | 2017-03-27T00:00:00 | null | null | null | null |
[] |
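For reference, the penalized weighted least squares cost that the abstract alternates over has the generic form (standard PWLS notation assumed here, not necessarily the paper's exact symbols):

$$ \hat{x} = \arg\min_{x \geq 0} \; \frac{1}{2}\|y - Ax\|^{2}_{W} + \beta\, \mathsf{R}(x) $$

where $y$ is the measured sinogram, $A$ the forward projection operator, $W$ a diagonal statistical weighting matrix, and $\mathsf{R}$ the regularizer built from the union of learned transforms, updated in the sparse coding and clustering step.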
https://paperswithcode.com/paper/asr-based-features-for-emotion-recognition-a
|
1805.09197
| null | null |
ASR-based Features for Emotion Recognition: A Transfer Learning Approach
|
During the last decade, the applications of signal processing have
drastically improved with deep learning. However, areas of affective computing
such as emotional speech synthesis or emotion recognition from spoken language
remain challenging. In this paper, we investigate the use of a neural
Automatic Speech Recognition (ASR) system as a feature extractor for emotion
recognition. We show that these features outperform the eGeMAPS feature set in
predicting the valence and arousal emotional dimensions, which means that the
audio-to-text mapping learned by the ASR system contains information related to
the emotional dimensions in spontaneous speech. We also examine the
relationship between first layers (closer to speech) and last layers (closer to
text) of the ASR and valence/arousal.
| null |
http://arxiv.org/abs/1805.09197v3
|
http://arxiv.org/pdf/1805.09197v3.pdf
|
WS 2018 7
|
[
"Noé Tits",
"Kevin El Haddad",
"Thierry Dutoit"
] |
[
"Automatic Speech Recognition",
"Automatic Speech Recognition (ASR)",
"Emotional Speech Synthesis",
"Emotion Recognition",
"speech-recognition",
"Speech Recognition",
"Speech Synthesis",
"Transfer Learning"
] | 2018-05-23T00:00:00 |
https://aclanthology.org/W18-3307
|
https://aclanthology.org/W18-3307.pdf
|
asr-based-features-for-emotion-recognition-a-1
| null |
[] |
https://paperswithcode.com/paper/risk-and-parameter-convergence-of-logistic
|
1803.07300
| null | null |
Risk and parameter convergence of logistic regression
|
Gradient descent, when applied to the task of logistic regression, outputs iterates which are biased to follow a unique ray defined by the data. The direction of this ray is the maximum margin predictor of a maximal linearly separable subset of the data; the gradient descent iterates converge to this ray in direction at the rate $\mathcal{O}(\ln\ln t / \ln t)$. The ray does not pass through the origin in general, and its offset is the bounded global optimum of the risk over the remaining data; gradient descent recovers this offset at a rate $\mathcal{O}((\ln t)^2 / \sqrt{t})$.
| null |
https://arxiv.org/abs/1803.07300v3
|
https://arxiv.org/pdf/1803.07300v3.pdf
| null |
[
"Ziwei Ji",
"Matus Telgarsky"
] |
[
"regression"
] | 2018-03-20T00:00:00 | null | null | null | null |
[] |
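The abstract's claim can be watched numerically: on linearly separable data, gradient descent on the logistic loss sends $\|w\|$ to infinity while $w/\|w\|$ converges to the maximum margin direction. Below is a tiny simulation of that phenomenon (illustrative only; the convergence rates quoted above are the paper's contribution and are not verified here).

```python
import numpy as np

# Linearly separable toy data with labels +1 / -1
X = np.array([[2.0, 1.0], [1.5, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = np.zeros(2)
lr = 0.5
for t in range(20000):
    margins = y * (X @ w)
    # gradient of the mean logistic loss: mean of -y * x * sigmoid(-margin)
    grad = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= lr * grad

print(w / np.linalg.norm(w))  # the direction stabilizes even as ||w|| grows
```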
https://paperswithcode.com/paper/hopf-higher-order-propagation-framework-for
|
1805.12421
| null | null |
HOPF: Higher Order Propagation Framework for Deep Collective Classification
|
Given a graph where every node has certain attributes associated with it and
some nodes have labels associated with them, Collective Classification (CC) is
the task of assigning labels to every unlabeled node using information from the
node as well as its neighbors. It is often the case that a node is not only
influenced by its immediate neighbors but also by higher order neighbors,
multiple hops away. Recent state-of-the-art models for CC learn end-to-end
differentiable variations of Weisfeiler-Lehman (WL) kernels to aggregate
multi-hop neighborhood information. In this work, we propose a Higher Order
Propagation Framework, HOPF, which provides an iterative inference mechanism
for these powerful differentiable kernels. Such a combination of classical
iterative inference mechanism with recent differentiable kernels allows the
framework to learn graph convolutional filters that simultaneously exploit the
attribute and label information available in the neighborhood. Further, these
iterative differentiable kernels can scale to larger hops beyond the memory
limitations of existing differentiable kernels. We also show that existing WL
kernel-based models suffer from the problem of Node Information Morphing where
the information of the node is morphed or overwhelmed by the information of its
neighbors when considering multiple hops. To address this, we propose a
specific instantiation of HOPF, called the NIP models, which preserves the node
information at every propagation step. The iterative formulation of NIP models
further helps in incorporating distant hop information concisely as summaries
of the inferred labels. We do an extensive evaluation across 11 datasets from
different domains. We show that existing CC models do not provide consistent
performance across datasets, while the proposed NIP model with iterative
inference is more robust.
|
Given a graph where every node has certain attributes associated with it and some nodes have labels associated with them, Collective Classification (CC) is the task of assigning labels to every unlabeled node using information from the node as well as its neighbors.
|
http://arxiv.org/abs/1805.12421v6
|
http://arxiv.org/pdf/1805.12421v6.pdf
| null |
[
"Priyesh Vijayan",
"Yash Chandak",
"Mitesh M. Khapra",
"Srinivasan Parthasarathy",
"Balaraman Ravindran"
] |
[
"Attribute",
"Classification",
"General Classification"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
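A hedged sketch of the node-information-preserving idea in the HOPF abstract above: at every propagation hop the node's own features are carried alongside the aggregated neighborhood summary instead of being averaged away. The toy graph and the concatenation rule are illustrative assumptions, not the paper's actual differentiable kernels.

```python
import numpy as np

# Toy 4-node adjacency matrix (undirected).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
A_norm = A / A.sum(axis=1, keepdims=True)   # row-normalized propagation

H = np.eye(4)                                # initial node features (one-hot)
for hop in range(3):
    neigh = A_norm @ H                       # summary of the current hop's neighborhood
    H = np.hstack([np.eye(4), neigh])        # preserve node identity at every step
    print(f"hop {hop + 1}: feature dim = {H.shape[1]}")
```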
https://paperswithcode.com/paper/improved-mixed-example-data-augmentation
|
1805.11272
| null | null |
Improved Mixed-Example Data Augmentation
|
In order to reduce overfitting, neural networks are typically trained with
data augmentation, the practice of artificially generating additional training
data via label-preserving transformations of existing training examples. While
these types of transformations make intuitive sense, recent work has
demonstrated that even non-label-preserving data augmentation can be
surprisingly effective, examining this type of data augmentation through linear
combinations of pairs of examples. Despite their effectiveness, little is known
about why such methods work. In this work, we aim to explore a new, more
generalized form of this type of data augmentation in order to determine
whether such linearity is necessary. By considering this broader scope of
"mixed-example data augmentation", we find a much larger space of practical
augmentation techniques, including methods that improve upon previous
state-of-the-art. This generalization has benefits beyond the promise of
improved performance, revealing a number of types of mixed-example data
augmentation that are radically different from those considered in prior work,
which provides evidence that current theories for the effectiveness of such
methods are incomplete and suggests that any such theory must explain a much
broader phenomenon. Code is available at
https://github.com/ceciliaresearch/MixedExample.
|
In order to reduce overfitting, neural networks are typically trained with data augmentation, the practice of artificially generating additional training data via label-preserving transformations of existing training examples.
|
http://arxiv.org/abs/1805.11272v4
|
http://arxiv.org/pdf/1805.11272v4.pdf
| null |
[
"Cecilia Summers",
"Michael J. Dinneen"
] |
[
"Data Augmentation",
"Image Augmentation"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
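The linear special case that the abstract above generalizes is easy to state in code. This is a hedged mixup-style sketch in which new training examples are convex combinations of pairs of images and labels; the shapes and the Beta(0.2, 0.2) mixing distribution are illustrative, and the paper's contribution is precisely the space of non-linear mixing schemes beyond this.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_pair(x1, y1, x2, y2, alpha=0.2):
    """One mixed example from a pair (the linear special case)."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x1, x2 = rng.random((32, 32, 3)), rng.random((32, 32, 3))  # fake images
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])        # one-hot labels
x_mix, y_mix = mix_pair(x1, y1, x2, y2)
print(x_mix.shape, y_mix)
```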
https://paperswithcode.com/paper/document-chunking-and-learning-objective
|
1806.01351
| null | null |
Document Chunking and Learning Objective Generation for Instruction Design
|
Instructional Systems Design is the practice of creating instructional
experiences that make the acquisition of knowledge and skill more efficient,
effective, and appealing. Specifically in designing courses, an hour of
training material can require between 30 and 500 hours of effort in sourcing and
organizing reference data for use in just the preparation of course material.
In this paper, we present the first system of its kind that helps reduce the
effort associated with sourcing reference material and course creation. We
present algorithms for document chunking and automatic generation of learning
objectives from content, creating descriptive content metadata to improve
content-discoverability. Unlike existing methods, the learning objectives
generated by our system incorporate pedagogically motivated Bloom's verbs. We
demonstrate the usefulness of our methods using real world data from the
banking industry and through a live deployment at a large pharmaceutical
company.
| null |
http://arxiv.org/abs/1806.01351v2
|
http://arxiv.org/pdf/1806.01351v2.pdf
| null |
[
"Khoi-Nguyen Tran",
"Jey Han Lau",
"Danish Contractor",
"Utkarsh Gupta",
"Bikram Sengupta",
"Christopher J. Butler",
"Mukesh Mohania"
] |
[
"Chunking",
"Descriptive"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/visual-to-sound-generating-natural-sound-for
|
1712.01393
| null | null |
Visual to Sound: Generating Natural Sound for Videos in the Wild
|
As two of the five traditional human senses (sight, hearing, taste, smell,
and touch), vision and sound are basic sources through which humans understand
the world. Often correlated during natural events, these two modalities combine
to jointly affect human perception. In this paper, we pose the task of
generating sound given visual input. Such capabilities could help enable
applications in virtual reality (generating sound for virtual scenes
automatically) or provide additional accessibility to images or videos for
people with visual impairments. As a first step in this direction, we apply
learning-based methods to generate raw waveform samples given input video
frames. We evaluate our models on a dataset of videos containing a variety of
sounds (such as ambient sounds and sounds from people/animals). Our experiments
show that the generated sounds are fairly realistic and have good temporal
synchronization with the visual inputs.
|
As two of the five traditional human senses (sight, hearing, taste, smell, and touch), vision and sound are basic sources through which humans understand the world.
|
http://arxiv.org/abs/1712.01393v2
|
http://arxiv.org/pdf/1712.01393v2.pdf
|
CVPR 2018 6
|
[
"Yipin Zhou",
"Zhaowen Wang",
"Chen Fang",
"Trung Bui",
"Tamara L. Berg"
] |
[] | 2017-12-04T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Zhou_Visual_to_Sound_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhou_Visual_to_Sound_CVPR_2018_paper.pdf
|
visual-to-sound-generating-natural-sound-for-1
| null |
[] |
https://paperswithcode.com/paper/inference-aided-reinforcement-learning-for
|
1806.00206
| null | null |
Inference Aided Reinforcement Learning for Incentive Mechanism Design in Crowdsourcing
|
Incentive mechanisms for crowdsourcing are designed to incentivize
financially self-interested workers to generate and report high-quality labels.
Existing mechanisms are often developed as one-shot static solutions, assuming
a certain level of knowledge about worker models (expertise levels, costs of
exerting efforts, etc.). In this paper, we propose a novel inference aided
reinforcement mechanism that learns to incentivize high-quality data
sequentially and requires no such prior assumptions. Specifically, we first
design a Gibbs sampling augmented Bayesian inference algorithm to estimate
workers' labeling strategies from the collected labels at each step. Then we
propose a reinforcement incentive learning (RIL) method, building on top of the
above estimates, to uncover how workers respond to different payments. RIL
dynamically determines the payment without accessing any ground-truth labels.
We theoretically prove that RIL is able to incentivize rational workers to
provide high-quality labels. Empirical results show that our mechanism performs
consistently well under both rational and non-fully rational (adaptive
learning) worker models. Besides, the payments offered by RIL are more robust
and have lower variances compared to the existing one-shot mechanisms.
| null |
http://arxiv.org/abs/1806.00206v1
|
http://arxiv.org/pdf/1806.00206v1.pdf
|
NeurIPS 2018 12
|
[
"Zehong Hu",
"Yitao Liang",
"Yang Liu",
"Jie Zhang"
] |
[
"Bayesian Inference",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-01T00:00:00 |
http://papers.nips.cc/paper/7795-inference-aided-reinforcement-learning-for-incentive-mechanism-design-in-crowdsourcing
|
http://papers.nips.cc/paper/7795-inference-aided-reinforcement-learning-for-incentive-mechanism-design-in-crowdsourcing.pdf
|
inference-aided-reinforcement-learning-for-1
| null |
[] |
https://paperswithcode.com/paper/being-curious-about-the-answers-to-questions
|
1806.00201
| null | null |
Being curious about the answers to questions: novelty search with learned attention
|
We investigate the use of attentional neural network layers in order to learn
a `behavior characterization' which can be used to drive novelty search and
curiosity-based policies. The space is structured towards answering a
particular distribution of questions, which are used in a supervised way to
train the attentional neural network. We find that in a 2d exploration task,
the structure of the space successfully encodes local sensory-motor
contingencies such that even a greedy local `do the most novel action' policy
with no reinforcement learning or evolution can explore the space quickly. We
also apply this to a high/low number guessing game task, and find that guessing
according to the learned attention profile performs active inference and can
discover the correct number more quickly than an exact but passive approach.
|
We investigate the use of attentional neural network layers in order to learn a `behavior characterization' which can be used to drive novelty search and curiosity-based policies.
|
http://arxiv.org/abs/1806.00201v1
|
http://arxiv.org/pdf/1806.00201v1.pdf
| null |
[
"Nicholas Guttenberg",
"Martin Biehl",
"Nathaniel Virgo",
"Ryota Kanai"
] |
[
"Reinforcement Learning"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/private-sequential-learning
|
1805.02136
| null | null |
Private Sequential Learning
|
We formulate a private learning model to study an intrinsic tradeoff between privacy and query complexity in sequential learning. Our model involves a learner who aims to determine a scalar value, $v^*$, by sequentially querying an external database and receiving binary responses. In the meantime, an adversary observes the learner's queries, though not the responses, and tries to infer from them the value of $v^*$. The objective of the learner is to obtain an accurate estimate of $v^*$ using only a small number of queries, while simultaneously protecting her privacy by making $v^*$ provably difficult to learn for the adversary. Our main results provide tight upper and lower bounds on the learner's query complexity as a function of desired levels of privacy and estimation accuracy. We also construct explicit query strategies whose complexity is optimal up to an additive constant.
| null |
https://arxiv.org/abs/1805.02136v3
|
https://arxiv.org/pdf/1805.02136v3.pdf
| null |
[
"John N. Tsitsiklis",
"Kuang Xu",
"Zhi Xu"
] |
[] | 2018-05-06T00:00:00 | null | null | null | null |
[] |
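For intuition on the query-complexity side of the abstract above, here is the non-private baseline: plain bisection locates $v^*$ to accuracy $\epsilon$ with roughly $\log_2(1/\epsilon)$ binary queries. The paper's private strategies deliberately deviate from this query pattern so that an observer of the queries cannot infer $v^*$; this sketch only illustrates the baseline complexity.

```python
import math

def bisect(v_star, eps):
    """Locate v_star in [0, 1] via binary queries; returns estimate and query count."""
    lo, hi, queries = 0.0, 1.0, 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        queries += 1
        if v_star >= mid:        # binary response from the "database"
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2, queries

est, n = bisect(v_star=0.3721, eps=1e-4)
print(est, n, math.ceil(math.log2(1e4)))  # n is about log2(1/eps)
```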
https://paperswithcode.com/paper/learning-a-latent-space-of-multitrack
|
1806.00195
| null | null |
Learning a Latent Space of Multitrack Measures
|
Discovering and exploring the underlying structure of multi-instrumental
music using learning-based approaches remains an open problem. We extend the
recent MusicVAE model to represent multitrack polyphonic measures as vectors in
a latent space. Our approach enables several useful operations such as
generating plausible measures from scratch, interpolating between measures in a
musically meaningful way, and manipulating specific musical attributes. We also
introduce chord conditioning, which allows all of these operations to be
performed while keeping harmony fixed, and allows chords to be changed while
maintaining musical "style". By generating a sequence of measures over a
predefined chord progression, our model can produce music with convincing
long-term structure. We demonstrate that our latent space model makes it
possible to intuitively control and generate musical sequences with rich
instrumentation (see https://goo.gl/s2N7dV for generated audio).
|
Discovering and exploring the underlying structure of multi-instrumental music using learning-based approaches remains an open problem.
|
http://arxiv.org/abs/1806.00195v1
|
http://arxiv.org/pdf/1806.00195v1.pdf
| null |
[
"Ian Simon",
"Adam Roberts",
"Colin Raffel",
"Jesse Engel",
"Curtis Hawthorne",
"Douglas Eck"
] |
[] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
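A hedged sketch of the interpolation operation mentioned in the abstract above: given latent encodings of two measures, intermediate latent points are decoded to morph between them. Spherical interpolation is a common choice for VAE latents; the 512-dimensional latents are mocked here and the encoder/decoder are omitted, so this is illustrative rather than the MusicVAE implementation.

```python
import numpy as np

def slerp(z1, z2, t):
    """Spherical interpolation between two latent vectors."""
    cos_omega = np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z1 + np.sin(t * omega) * z2) / np.sin(omega)

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=512), rng.normal(size=512)  # stand-ins for encoded measures
path = [slerp(z1, z2, t) for t in np.linspace(0.0, 1.0, 5)]
# In a real model each point on the path would be decoded back to a measure.
print([round(float(np.linalg.norm(z)), 2) for z in path])
```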
https://paperswithcode.com/paper/deep-imbalanced-learning-for-face-recognition
|
1806.00194
| null | null |
Deep Imbalanced Learning for Face Recognition and Attribute Prediction
|
Data for face analysis often exhibit highly-skewed class distribution, i.e.,
most data belong to a few majority classes, while the minority classes only
contain a scarce amount of instances. To mitigate this issue, contemporary deep
learning methods typically follow classic strategies such as class re-sampling
or cost-sensitive training. In this paper, we conduct extensive and systematic
experiments to validate the effectiveness of these classic schemes for
representation learning on class-imbalanced data. We further demonstrate that
more discriminative deep representation can be learned by enforcing a deep
network to maintain inter-cluster margins both within and between classes. This
tight constraint effectively reduces the class imbalance inherent in the local
data neighborhood, thus carving much more balanced class boundaries locally. We
show that it is easy to deploy angular margins between the cluster
distributions on a hypersphere manifold. Such learned Cluster-based Large
Margin Local Embedding (CLMLE), when combined with a simple k-nearest cluster
algorithm, shows significant improvements in accuracy over existing methods on
both face recognition and face attribute prediction tasks that exhibit
imbalanced class distribution.
|
Data for face analysis often exhibit highly-skewed class distribution, i.e., most data belong to a few majority classes, while the minority classes only contain a scarce amount of instances.
|
http://arxiv.org/abs/1806.00194v2
|
http://arxiv.org/pdf/1806.00194v2.pdf
| null |
[
"Chen Huang",
"Yining Li",
"Chen Change Loy",
"Xiaoou Tang"
] |
[
"Attribute",
"Face Recognition",
"Prediction",
"Representation Learning"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/an-interpretable-reasoning-network-for-multi
|
1801.04726
| null | null |
An Interpretable Reasoning Network for Multi-Relation Question Answering
|
Multi-relation Question Answering is a challenging task, due to the
requirement of elaborated analysis on questions and reasoning over multiple
fact triples in knowledge base. In this paper, we present a novel model called
Interpretable Reasoning Network that employs an interpretable, hop-by-hop
reasoning process for question answering. The model dynamically decides which
part of an input question should be analyzed at each hop; predicts a relation
that corresponds to the current parsed results; utilizes the predicted relation
to update the question representation and the state of the reasoning process;
and then drives the next-hop reasoning. Experiments show that our model yields
state-of-the-art results on two datasets. More interestingly, the model can
offer traceable and observable intermediate predictions for reasoning analysis
and failure diagnosis, thereby allowing manual manipulation in predicting the
final answer.
|
Multi-relation Question Answering is a challenging task, due to the requirement of elaborated analysis on questions and reasoning over multiple fact triples in knowledge base.
|
http://arxiv.org/abs/1801.04726v3
|
http://arxiv.org/pdf/1801.04726v3.pdf
|
COLING 2018 8
|
[
"Mantong Zhou",
"Minlie Huang",
"Xiaoyan Zhu"
] |
[
"Question Answering",
"Relation"
] | 2018-01-15T00:00:00 |
https://aclanthology.org/C18-1171
|
https://aclanthology.org/C18-1171.pdf
|
an-interpretable-reasoning-network-for-multi-2
| null |
[] |
https://paperswithcode.com/paper/video-description-a-survey-of-methods
|
1806.00186
| null | null |
Video Description: A Survey of Methods, Datasets and Evaluation Metrics
|
Video description is the automatic generation of natural language sentences that describe the contents of a given video. It has applications in human-robot interaction, helping the visually impaired and video subtitling. The past few years have seen a surge of research in this area due to the unprecedented success of deep learning in computer vision and natural language processing. Numerous methods, datasets and evaluation metrics have been proposed in the literature, calling for a comprehensive survey to focus research efforts in this flourishing new direction. This paper fills the gap by surveying the state-of-the-art approaches with a focus on deep learning models; comparing benchmark datasets in terms of their domains, number of classes, and repository size; and identifying the pros and cons of various evaluation metrics like SPICE, CIDEr, ROUGE, BLEU, METEOR, and WMD. Classical video description approaches combined subject, object and verb detection with template-based language models to generate sentences. However, the release of large datasets revealed that these methods cannot cope with the diversity in unconstrained open domain videos. Classical approaches were followed by a very short era of statistical methods which were soon replaced with deep learning, the current state of the art in video description. Our survey shows that despite the fast-paced developments, video description research is still in its infancy due to the following reasons. Analysis of video description models is challenging because it is difficult to ascertain the contributions, towards accuracy or errors, of the visual features and the adopted language model in the final description. Existing datasets neither contain adequate visual diversity nor complexity of linguistic structures. Finally, current evaluation metrics ...
| null |
https://arxiv.org/abs/1806.00186v4
|
https://arxiv.org/pdf/1806.00186v4.pdf
| null |
[
"Nayyer Aafaq",
"Ajmal Mian",
"Wei Liu",
"Syed Zulqarnain Gilani",
"Mubarak Shah"
] |
[
"Diversity",
"Language Modeling",
"Language Modelling",
"Video Description"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/nest-a-neural-network-synthesis-tool-based-on
|
1711.02017
| null | null |
NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm
|
Deep neural networks (DNNs) have begun to have a pervasive impact on various
applications of machine learning. However, the problem of finding an optimal
DNN architecture for large applications is challenging. Common approaches go
for deeper and larger DNN architectures but may incur substantial redundancy.
To address these problems, we introduce a network growth algorithm that
complements network pruning to learn both weights and compact DNN architectures
during training. We propose a DNN synthesis tool (NeST) that combines both
methods to automate the generation of compact and accurate DNNs. NeST starts
with a randomly initialized sparse network called the seed architecture. It
iteratively tunes the architecture with gradient-based growth and
magnitude-based pruning of neurons and connections. Our experimental results
show that NeST yields accurate, yet very compact DNNs, with a wide range of
seed architecture selection. For the LeNet-300-100 (LeNet-5) architecture, we
reduce network parameters by 70.2x (74.3x) and floating-point operations
(FLOPs) by 79.4x (43.7x). For the AlexNet and VGG-16 architectures, we reduce
network parameters (FLOPs) by 15.7x (4.6x) and 30.2x (8.6x), respectively.
NeST's grow-and-prune paradigm delivers significant additional parameter and
FLOPs reduction relative to pruning-only methods.
| null |
http://arxiv.org/abs/1711.02017v3
|
http://arxiv.org/pdf/1711.02017v3.pdf
| null |
[
"Xiaoliang Dai",
"Hongxu Yin",
"Niraj K. Jha"
] |
[
"Network Pruning"
] | 2017-11-06T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Pruning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "Pruning",
"source_title": "Pruning Filters for Efficient ConvNets",
"source_url": "http://arxiv.org/abs/1608.08710v3"
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/1c5c289b6218eb1026dcb5fd9738231401cfccea/torch/nn/modules/normalization.py#L13",
"description": "**Local Response Normalization** is a normalization layer that implements the idea of lateral inhibition. Lateral inhibition is a concept in neurobiology that refers to the phenomenon of an excited neuron inhibiting its neighbours: this leads to a peak in the form of a local maximum, creating contrast in that area and increasing sensory perception. In practice, we can either normalize within the same channel or normalize across channels when we apply LRN to convolutional neural networks.\r\n\r\n$$ b_{c} = a_{c}\\left(k + \\frac{\\alpha}{n}\\sum_{c'=\\max(0, c-n/2)}^{\\min(N-1,c+n/2)}a_{c'}^2\\right)^{-\\beta} $$\r\n\r\nWhere the size is the number of neighbouring channels used for normalization, $\\alpha$ is multiplicative factor, $\\beta$ an exponent and $k$ an additive factor",
"full_name": "Local Response Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Local Response Normalization",
"source_title": "ImageNet Classification with Deep Convolutional Neural Networks",
"source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
},
{
"code_snippet_url": "https://github.com/prlz77/ResNeXt.pytorch/blob/39fb8d03847f26ec02fb9b880ecaaa88db7a7d16/models/model.py#L42",
"description": "A **Grouped Convolution** uses a group of convolutions - multiple kernels per layer - resulting in multiple channel outputs per layer. This leads to wider networks helping a network learn a varied set of low level and high level features. The original motivation of using Grouped Convolutions in [AlexNet](https://paperswithcode.com/method/alexnet) was to distribute the model over multiple GPUs as an engineering compromise. But later, with models such as [ResNeXt](https://paperswithcode.com/method/resnext), it was shown this module could be used to improve classification accuracy. Specifically by exposing a new dimension through grouped convolutions, *cardinality* (the size of set of transformations), we can increase accuracy by increasing it.",
"full_name": "Grouped Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Grouped Convolution",
"source_title": "ImageNet Classification with Deep Convolutional Neural Networks",
"source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/dansuh17/alexnet-pytorch/blob/d0c1b1c52296ffcbecfbf5b17e1d1685b4ca6744/model.py#L40",
"description": "To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.ggfdf\r\n\r\n\r\nHow do I speak to a person at Expedia?How do I speak to a person at Expedia?To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.\r\n\r\n\r\n\r\nTo make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.chgd",
"full_name": "How do I speak to a person at Expedia?-/+/",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "How do I speak to a person at Expedia?-/+/",
"source_title": "ImageNet Classification with Deep Convolutional Neural Networks",
"source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
}
] |
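The pruning half of the grow-and-prune paradigm in the NeST abstract above reduces to a simple rule: mask out connections whose weight magnitude falls below a threshold. This is a hedged sketch of magnitude-based pruning only; gradient-based growth and the full NeST schedule are omitted, and the sparsity level is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(300, 100))   # weights of one dense layer

def magnitude_prune(W, sparsity=0.7):
    """Zero out the smallest-magnitude fraction of connections."""
    thresh = np.quantile(np.abs(W), sparsity)
    mask = np.abs(W) >= thresh
    return W * mask, mask

W_pruned, mask = magnitude_prune(W)
print("fraction of connections kept:", mask.mean())  # about 0.3
```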
https://paperswithcode.com/paper/the-nonlinearity-coefficient-predicting
|
1806.00179
| null |
BkeK-nRcFX
|
The Nonlinearity Coefficient - Predicting Generalization in Deep Neural Networks
|
For a long time, designing neural architectures that exhibit high performance
was considered a dark art that required expert hand-tuning. One of the few
well-known guidelines for architecture design is the avoidance of exploding
gradients, though even this guideline has remained relatively vague and
circumstantial. We introduce the nonlinearity coefficient (NLC), a measurement
of the complexity of the function computed by a neural network that is based on
the magnitude of the gradient. Via an extensive empirical study, we show that
the NLC is a powerful predictor of test error and that attaining a right-sized
NLC is essential for optimal performance.
The NLC exhibits a range of intriguing and important properties. It is
closely tied to the amount of information gained from computing a single
network gradient. It is tied to the error incurred when replacing the
nonlinearity operations in the network with linear operations. It is not
susceptible to the confounders of multiplicative scaling, additive bias and
layer width. It is stable from layer to layer. Hence, we argue that the NLC is
the first robust predictor of overfitting in deep networks.
| null |
http://arxiv.org/abs/1806.00179v2
|
http://arxiv.org/pdf/1806.00179v2.pdf
|
ICLR 2019 5
|
[
"George Philipp",
"Jaime G. Carbonell"
] |
[] | 2018-06-01T00:00:00 |
https://openreview.net/forum?id=BkeK-nRcFX
|
https://openreview.net/pdf?id=BkeK-nRcFX
|
the-nonlinearity-coefficient-predicting-1
| null |
[] |
https://paperswithcode.com/paper/understanding-batch-normalization
|
1806.02375
| null | null |
Understanding Batch Normalization
|
Batch normalization (BN) is a technique to normalize activations in
intermediate layers of deep neural networks. Its tendency to improve accuracy
and speed up training has established BN as a favorite technique in deep
learning. Yet, despite its enormous success, there remains little consensus on
the exact reason and mechanism behind these improvements. In this paper we take
a step towards a better understanding of BN, following an empirical approach.
We conduct several experiments, and show that BN primarily enables training
with larger learning rates, which is the cause for faster convergence and
better generalization. For networks without BN we demonstrate how large
gradient updates can result in diverging loss and activations growing
uncontrollably with network depth, which limits possible learning rates. BN
avoids this problem by constantly correcting activations to be zero-mean and of
unit standard deviation, which enables larger gradient steps, yields faster
convergence and may help bypass sharp local minima. We further show various
ways in which gradients and activations of deep unnormalized networks are
ill-behaved. We contrast our results against recent findings in random matrix
theory, shedding new light on classical initialization schemes and their
consequences.
| null |
http://arxiv.org/abs/1806.02375v4
|
http://arxiv.org/pdf/1806.02375v4.pdf
|
NeurIPS 2018 12
|
[
"Johan Bjorck",
"Carla Gomes",
"Bart Selman",
"Kilian Q. Weinberger"
] |
[] | 2018-06-01T00:00:00 |
http://papers.nips.cc/paper/7996-understanding-batch-normalization
|
http://papers.nips.cc/paper/7996-understanding-batch-normalization.pdf
|
understanding-batch-normalization-1
| null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "1D Convolutional Neural Networks are similar to well known and more established 2D Convolutional Neural Networks. 1D Convolutional Neural Networks are used mainly used on text and 1D signals.",
"full_name": "1-Dimensional Convolutional Neural Networks",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1D CNN",
"source_title": "Convolutional Neural Network and Rule-Based Algorithms for Classifying 12-lead ECGs",
"source_url": "https://www.researchgate.net/publication/348288032_Convolutional_Neural_Network_and_Rule-Based_Algorithms_for_Classifying_12-lead_ECGs"
}
] |
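The correction that the abstract above attributes BN's benefits to, rescaling activations to zero mean and unit standard deviation per feature before a learnable scale and shift, is a few lines of numpy. Epsilon and the per-feature axis follow the usual convention rather than anything specific to this paper.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero-mean, unit-std activations
    return gamma * x_hat + beta            # learnable scale and shift

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=5.0, size=(64, 16))  # one minibatch of activations
y = batch_norm(x, gamma=np.ones(16), beta=np.zeros(16))
print(np.round(y.mean(axis=0)[:4], 6), np.round(y.std(axis=0)[:4], 3))
```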
https://paperswithcode.com/paper/emotional-chatting-machine-emotional
|
1704.01074
| null | null |
Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory
|
Perception and expression of emotion are key factors to the success of
dialogue systems or conversational agents. However, this problem has not been
studied in large-scale conversation generation so far. In this paper, we
propose Emotional Chatting Machine (ECM) that can generate appropriate
responses not only in content (relevant and grammatical) but also in emotion
(emotionally consistent). To the best of our knowledge, this is the first work
that addresses the emotion factor in large-scale conversation generation. ECM
addresses the factor using three new mechanisms that respectively (1) models
the high-level abstraction of emotion expressions by embedding emotion
categories, (2) captures the change of implicit internal emotion states, and
(3) uses explicit emotion expressions with an external emotion vocabulary.
Experiments show that the proposed model can generate responses appropriate not
only in content but also in emotion.
|
Perception and expression of emotion are key factors to the success of dialogue systems or conversational agents.
|
http://arxiv.org/abs/1704.01074v4
|
http://arxiv.org/pdf/1704.01074v4.pdf
| null |
[
"Hao Zhou",
"Minlie Huang",
"Tianyang Zhang",
"Xiaoyan Zhu",
"Bing Liu"
] |
[] | 2017-04-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/igcv3-interleaved-low-rank-group-convolutions
|
1806.00178
| null | null |
IGCV3: Interleaved Low-Rank Group Convolutions for Efficient Deep Neural Networks
|
In this paper, we are interested in building lightweight and efficient
convolutional neural networks. Inspired by the success of two design patterns,
composition of structured sparse kernels, e.g., interleaved group convolutions
(IGC), and composition of low-rank kernels, e.g., bottle-neck modules, we study
the combination of such two design patterns, using the composition of
structured sparse low-rank kernels, to form a convolutional kernel. Rather than
introducing a complementary condition over channels, we introduce a loose
complementary condition, which is formulated by imposing the complementary
condition over super-channels, to guide the design for generating a dense
convolutional kernel. The resulting network is called IGCV3. We empirically
demonstrate that the combination of low-rank and sparse kernels boosts the
performance and the superiority of our proposed approach to the
state-of-the-arts, IGCV2 and MobileNetV2 over image classification on CIFAR and
ImageNet and object detection on COCO.
|
In this paper, we are interested in building lightweight and efficient convolutional neural networks.
|
http://arxiv.org/abs/1806.00178v2
|
http://arxiv.org/pdf/1806.00178v2.pdf
| null |
[
"Ke Sun",
"Mingjie Li",
"Dong Liu",
"Jingdong Wang"
] |
[
"image-classification",
"Image Classification",
"object-detection",
"Object Detection"
] | 2018-06-01T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**Depthwise Convolution** is a type of convolution where we apply a single convolutional filter for each input channel. In the regular 2D [convolution](https://paperswithcode.com/method/convolution) performed over multiple input channels, the filter is as deep as the input and lets us freely mix channels to generate each element in the output. In contrast, depthwise convolutions keep each channel separate. To summarize the steps, we:\r\n\r\n1. Split the input and filter into channels.\r\n2. We convolve each input with the respective filter.\r\n3. We stack the convolved outputs together.\r\n\r\nImage Credit: [Chi-Feng Wang](https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728)",
"full_name": "Depthwise Convolution",
"introduced_year": 2016,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Depthwise Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Pointwise Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) that uses a 1x1 kernel: a kernel that iterates through every single point. This kernel has a depth of however many channels the input image has. It can be used in conjunction with [depthwise convolutions](https://paperswithcode.com/method/depthwise-convolution) to produce an efficient class of convolutions known as [depthwise-separable convolutions](https://paperswithcode.com/method/depthwise-separable-convolution).\r\n\r\nImage Credit: [Chi-Feng Wang](https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728)",
"full_name": "Pointwise Convolution",
"introduced_year": 2016,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Pointwise Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/kwotsin/TensorFlow-Xception/blob/c42ad8cab40733f9150711be3537243278612b22/xception.py#L67",
"description": "While [standard convolution](https://paperswithcode.com/method/convolution) performs the channelwise and spatial-wise computation in one step, **Depthwise Separable Convolution** splits the computation into two steps: [depthwise convolution](https://paperswithcode.com/method/depthwise-convolution) applies a single convolutional filter per each input channel and [pointwise convolution](https://paperswithcode.com/method/pointwise-convolution) is used to create a linear combination of the output of the depthwise convolution. The comparison of standard convolution and depthwise separable convolution is shown to the right.\r\n\r\nCredit: [Depthwise Convolution Is All You Need for Learning Multiple Visual Domains](https://paperswithcode.com/paper/depthwise-convolution-is-all-you-need-for)",
"full_name": "Depthwise Separable Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Depthwise Separable Convolution",
"source_title": "Xception: Deep Learning With Depthwise Separable Convolutions",
"source_url": "http://openaccess.thecvf.com/content_cvpr_2017/html/Chollet_Xception_Deep_Learning_CVPR_2017_paper.html"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/mobilenet.py#L45",
"description": "An **Inverted Residual Block**, sometimes called an **MBConv Block**, is a type of residual block used for image models that uses an inverted structure for efficiency reasons. It was originally proposed for the [MobileNetV2](https://paperswithcode.com/method/mobilenetv2) CNN architecture. It has since been reused for several mobile-optimized CNNs.\r\n\r\nA traditional [Residual Block](https://paperswithcode.com/method/residual-block) has a wide -> narrow -> wide structure with the number of channels. The input has a high number of channels, which are compressed with a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution). The number of channels is then increased again with a 1x1 [convolution](https://paperswithcode.com/method/convolution) so input and output can be added. \r\n\r\nIn contrast, an Inverted Residual Block follows a narrow -> wide -> narrow approach, hence the inversion. We first widen with a 1x1 convolution, then use a 3x3 [depthwise convolution](https://paperswithcode.com/method/depthwise-convolution) (which greatly reduces the number of parameters), then we use a 1x1 convolution to reduce the number of channels so input and output can be added.",
"full_name": "Inverted Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Inverted Residual Block",
"source_title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks",
"source_url": "http://arxiv.org/abs/1801.04381v4"
},
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Tether has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Tether transaction not confirmed, your Tether wallet not showing balance, or you're trying to recover a lost Tether wallet, knowing where to get help is essential. That’s why the Tether customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Tether Customer Support Number +1-833-534-1729\r\nTether operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Tether Transaction Not Confirmed\r\nOne of the most common concerns is when a Tether transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Tether Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Tether wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Tether Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Tether wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Tether Deposit Not Received\r\nIf someone has sent you Tether but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Tether deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Tether Transaction Stuck or Pending\r\nSometimes your Tether transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Tether Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Tether wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Tether Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Tether tech.\r\n\r\n24/7 Availability: Tether doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Tether Support and Wallet Issues\r\nQ1: Can Tether support help me recover stolen BTC?\r\nA: While Tether transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Tether transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Tether’s official number (Tether is decentralized), it connects you to trained professionals experienced in resolving all major Tether issues.\r\n\r\nFinal Thoughts\r\nTether is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Tether transaction not confirmed, your Tether wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Tether customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Tether Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.",
"name": "Image Models",
"parent": null
},
"name": "Tether Customer Service Number +1-833-534-1729",
"source_title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks",
"source_url": "http://arxiv.org/abs/1801.04381v4"
}
] |
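The Average Pooling and 1x1 Convolution method entries above describe two simple feature-map operations, so a short illustration fits here. This is a minimal NumPy sketch; the shapes and the helper names `average_pool` and `conv1x1` are illustrative assumptions, not code from any cited paper:

```python
import numpy as np

def average_pool(x, k=2):
    """Downsample a (H, W, C) feature map by averaging non-overlapping k x k patches."""
    h, w, c = x.shape
    h2, w2 = h // k, w // k
    x = x[:h2 * k, :w2 * k]                      # crop so dims divide evenly
    return x.reshape(h2, k, w2, k, c).mean(axis=(1, 3))

def conv1x1(x, weights):
    """A 1x1 convolution is a per-pixel linear map across channels:
    (H, W, C_in) @ (C_in, C_out) -> (H, W, C_out)."""
    return x @ weights

feature_map = np.random.rand(8, 8, 64)
pooled = average_pool(feature_map, k=2)           # (4, 4, 64)
reduced = conv1x1(pooled, np.random.rand(64, 16)) # (4, 4, 16): channel reduction
print(pooled.shape, reduced.shape)
```

Note how the 1x1 convolution acts purely across channels, which is why it is commonly used for dimensionality reduction.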
https://paperswithcode.com/paper/reparameterization-gradient-for-non
|
1806.00176
| null | null |
Reparameterization Gradient for Non-differentiable Models
|
We present a new algorithm for stochastic variational inference that targets
at models with non-differentiable densities. One of the key challenges in
stochastic variational inference is to come up with a low-variance estimator of
the gradient of a variational objective. We tackle the challenge by
generalizing the reparameterization trick, one of the most effective techniques
for addressing the variance issue for differentiable models, so that the trick
works for non-differentiable models as well. Our algorithm splits the space of
latent variables into regions where the density of the variables is
differentiable, and their boundaries where the density may fail to be
differentiable. For each differentiable region, the algorithm applies the
standard reparameterization trick and estimates the gradient restricted to the
region. For each potentially non-differentiable boundary, it uses a form of
manifold sampling and computes the direction for variational parameters that,
if followed, would increase the boundary's contribution to the variational
objective. The sum of all the estimates becomes the gradient estimate of our
algorithm. Our estimator enjoys the reduced variance of the reparameterization
gradient while remaining unbiased even for non-differentiable models. The
experiments with our preliminary implementation confirm the benefit of reduced
variance and unbiasedness.
|
We tackle the challenge by generalizing the reparameterization trick, one of the most effective techniques for addressing the variance issue for differentiable models, so that the trick works for non-differentiable models as well.
|
http://arxiv.org/abs/1806.00176v2
|
http://arxiv.org/pdf/1806.00176v2.pdf
|
NeurIPS 2018 12
|
[
"Wonyeol Lee",
"Hangyeol Yu",
"Hongseok Yang"
] |
[
"Variational Inference"
] | 2018-06-01T00:00:00 |
http://papers.nips.cc/paper/7799-reparameterization-gradient-for-non-differentiable-models
|
http://papers.nips.cc/paper/7799-reparameterization-gradient-for-non-differentiable-models.pdf
|
reparameterization-gradient-for-non-1
| null |
[] |
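The abstract above builds on the standard reparameterization trick before generalizing it to non-differentiable densities. As context, here is a minimal NumPy sketch of that standard Gaussian-case estimator only (the paper's region-splitting and boundary terms are not shown); the helper name `elbo_grad_reparam` and the finite-difference derivative are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo_grad_reparam(mu, log_sigma, logp, n_samples=100_000, h=1e-4):
    """Gradient of E_{z ~ N(mu, sigma^2)}[logp(z)] w.r.t. (mu, log_sigma)
    via the reparameterization z = mu + sigma * eps, eps ~ N(0, 1)."""
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal(n_samples)
    z = mu + sigma * eps
    # Finite-difference stand-in for autodiff, purely for illustration.
    dlogp = (logp(z + h) - logp(z - h)) / (2 * h)
    grad_mu = dlogp.mean()                         # dz/dmu = 1
    grad_log_sigma = (dlogp * sigma * eps).mean()  # dz/dlog_sigma = sigma * eps
    return grad_mu, grad_log_sigma

# Target logp(z) = -z^2/2 is differentiable everywhere, so the standard trick
# applies; analytically the gradients are (-mu, -sigma^2) = (-1, -1) here.
print(elbo_grad_reparam(1.0, 0.0, lambda z: -0.5 * z ** 2))
```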
https://paperswithcode.com/paper/on-oracle-efficient-pac-rl-with-rich
|
1803.00606
| null | null |
On Oracle-Efficient PAC RL with Rich Observations
|
We study the computational tractability of PAC reinforcement learning with
rich observations. We present new provably sample-efficient algorithms for
environments with deterministic hidden state dynamics and stochastic rich
observations. These methods operate in an oracle model of computation --
accessing policy and value function classes exclusively through standard
optimization primitives -- and therefore represent computationally efficient
alternatives to prior algorithms that require enumeration. With stochastic
hidden state dynamics, we prove that the only known sample-efficient algorithm,
OLIVE, cannot be implemented in the oracle model. We also present several
examples that illustrate fundamental challenges of tractable PAC reinforcement
learning in such general settings.
| null |
http://arxiv.org/abs/1803.00606v4
|
http://arxiv.org/pdf/1803.00606v4.pdf
|
NeurIPS 2018 12
|
[
"Christoph Dann",
"Nan Jiang",
"Akshay Krishnamurthy",
"Alekh Agarwal",
"John Langford",
"Robert E. Schapire"
] |
[
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-03-01T00:00:00 |
http://papers.nips.cc/paper/7416-on-oracle-efficient-pac-rl-with-rich-observations
|
http://papers.nips.cc/paper/7416-on-oracle-efficient-pac-rl-with-rich-observations.pdf
|
on-oracle-efficient-pac-rl-with-rich-1
| null |
[] |
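The oracle model described above accesses policy and value function classes only through standard optimization primitives. The following toy sketch shows what such a primitive might look like over a finite policy class; the interface, names, and data format are assumptions for illustration, not the paper's formal oracle definitions:

```python
from typing import Callable, Sequence

Policy = Callable[[int], int]  # maps observation -> action

def argmax_oracle(policy_class: Sequence[Policy],
                  dataset: Sequence[tuple]) -> Policy:
    """Return the policy with the highest empirical reward on
    (observation, action, reward) triples. The learning algorithm may call
    this primitive but never enumerate the class itself."""
    def empirical_value(pi: Policy) -> float:
        return sum(r for (o, a, r) in dataset if pi(o) == a)
    return max(policy_class, key=empirical_value)

policies = [lambda o: 0, lambda o: 1, lambda o: o % 2]
data = [(0, 0, 1.0), (1, 1, 1.0), (2, 0, 1.0)]
best = argmax_oracle(policies, data)
print([best(o) for o in range(3)])  # the parity policy matches all three triples
```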
https://paperswithcode.com/paper/fast-exploration-with-simplified-models-and
|
1806.00175
| null |
HygS7n0cFQ
|
Fast Exploration with Simplified Models and Approximately Optimistic Planning in Model Based Reinforcement Learning
|
Humans learn to play video games significantly faster than the
state-of-the-art reinforcement learning (RL) algorithms. People seem to build
simple models that are easy to learn to support planning and strategic
exploration. Inspired by this, we investigate two issues in leveraging
model-based RL for sample efficiency. First we investigate how to perform
strategic exploration when exact planning is not feasible and empirically show
that optimistic Monte Carlo Tree Search outperforms posterior sampling methods.
Second we show how to learn simple deterministic models to support fast
learning using object representation. We illustrate the benefit of these ideas
by introducing a novel algorithm, Strategic Object Oriented Reinforcement
Learning (SOORL), that outperforms state-of-the-art algorithms in the game of
Pitfall! in less than 50 episodes.
| null |
http://arxiv.org/abs/1806.00175v2
|
http://arxiv.org/pdf/1806.00175v2.pdf
| null |
[
"Ramtin Keramati",
"Jay Whang",
"Patrick Cho",
"Emma Brunskill"
] |
[
"Model-based Reinforcement Learning",
"Object",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-01T00:00:00 |
https://openreview.net/forum?id=HygS7n0cFQ
|
https://openreview.net/pdf?id=HygS7n0cFQ
| null | null |
[] |
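The abstract above reports that optimistic Monte Carlo Tree Search outperforms posterior sampling for strategic exploration. As one simple instance of optimism in tree search (not the SOORL algorithm itself), here is a sketch of UCB1-style child selection; the node representation is an assumption:

```python
import math

def ucb_select(children, c=1.4):
    """Pick the child maximizing mean value plus an exploration bonus,
    so under-explored actions look optimistically good."""
    total = sum(ch["visits"] for ch in children)
    def ucb(ch):
        if ch["visits"] == 0:
            return float("inf")  # unvisited actions are maximally optimistic
        return ch["value"] / ch["visits"] + c * math.sqrt(math.log(total) / ch["visits"])
    return max(children, key=ucb)

children = [{"visits": 10, "value": 6.0},
            {"visits": 2, "value": 1.8},
            {"visits": 0, "value": 0.0}]
print(ucb_select(children))  # the unvisited child is selected first
```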
https://paperswithcode.com/paper/community-recovery-in-a-preferential
|
1801.06818
| null | null |
Community Recovery in a Preferential Attachment Graph
|
A message passing algorithm is derived for recovering communities within a
graph generated by a variation of the Barab\'{a}si-Albert preferential
attachment model. The estimator is assumed to know the arrival times, or order
of attachment, of the vertices. The derivation of the algorithm is based on
belief propagation under an independence assumption. Two precursors to the
message passing algorithm are analyzed: the first is a degree thresholding (DT)
algorithm and the second is an algorithm based on the arrival times of the
children (C) of a given vertex, where the children of a given vertex are the
vertices that attached to it. Comparison of the performance of the algorithms
shows it is beneficial to know the arrival times, not just the number, of the
children. The probability of correct classification of a vertex is
asymptotically determined by the fraction of vertices arriving before it. Two
extensions of Algorithm C are given: the first is based on joint likelihood of
the children of a fixed set of vertices; it can sometimes be used to seed the
message passing algorithm. The second is the message passing algorithm.
Simulation results are given.
| null |
http://arxiv.org/abs/1801.06818v5
|
http://arxiv.org/pdf/1801.06818v5.pdf
| null |
[
"Bruce Hajek",
"Suryanarayana Sankagiri"
] |
[] | 2018-01-21T00:00:00 | null | null | null | null |
[] |
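The degree thresholding (DT) precursor described above classifies vertices by degree, exploiting the fact that early arrivals in a preferential attachment graph accumulate high degree. Here is a toy sketch of that intuition under a simplified Barabasi-Albert process; the growth rule and threshold are illustrative assumptions, not the paper's exact model or algorithm:

```python
import random

def preferential_attachment(n, seed=0):
    """Grow a toy graph where each new vertex attaches to one existing vertex
    chosen proportionally to degree (a simplified Barabasi-Albert process)."""
    random.seed(seed)
    slots = [0, 1]        # one slot per unit of degree; vertices 0,1 start linked
    degree = [1, 1]
    for v in range(2, n):
        u = random.choice(slots)   # degree-proportional choice
        degree.append(1)
        degree[u] += 1
        slots.extend([u, v])
    return degree

deg = preferential_attachment(2000)
threshold = 5
flagged = [v for v, d in enumerate(deg) if d >= threshold]
print(f"{len(flagged)} vertices have degree >= {threshold}; "
      f"largest index flagged: {max(flagged)} of {len(deg) - 1}")
```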
https://paperswithcode.com/paper/memory-augmented-self-play
|
1805.11016
| null | null |
Memory Augmented Self-Play
|
Self-play is an unsupervised training procedure which enables the
reinforcement learning agents to explore the environment without requiring any
external rewards. We augment the self-play setting by providing an external
memory where the agent can store experience from the previous tasks. This
enables the agent to come up with more diverse self-play tasks resulting in
faster exploration of the environment. The agent pretrained in the memory
augmented self-play setting easily outperforms the agent pretrained in
no-memory self-play setting.
|
Self-play is an unsupervised training procedure which enables the reinforcement learning agents to explore the environment without requiring any external rewards.
|
http://arxiv.org/abs/1805.11016v2
|
http://arxiv.org/pdf/1805.11016v2.pdf
| null |
[
"Shagun Sodhani",
"Vardaan Pahuja"
] |
[
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
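The abstract above augments self-play with an external memory of past tasks to encourage diversity. A very rough sketch of that idea follows, with a task setter that prefers goals absent from memory; the goal space, candidate sampling, and novelty rule are all assumptions for illustration:

```python
import random

random.seed(0)
memory = set()  # external memory of goals attempted in previous self-play tasks

def propose_task(n_states=20, n_candidates=5):
    """The task-setting agent samples candidate goals and prefers ones it has
    not stored in memory, nudging self-play toward unexplored states."""
    candidates = [random.randrange(n_states) for _ in range(n_candidates)]
    novel = [g for g in candidates if g not in memory]
    goal = random.choice(novel) if novel else random.choice(candidates)
    memory.add(goal)
    return goal

tasks = [propose_task() for _ in range(10)]
print(tasks, f"-> {len(set(tasks))} distinct goals out of 10 proposals")
```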
https://paperswithcode.com/paper/training-lstm-networks-with-resistive-cross
|
1806.00166
| null | null |
Training LSTM Networks with Resistive Cross-Point Devices
|
In our previous work we have shown that resistive cross point devices, so
called Resistive Processing Unit (RPU) devices, can provide significant power
and speed benefits when training deep fully connected networks as well as
convolutional neural networks. In this work, we further extend the RPU concept
for training recurrent neural networks (RNNs) namely LSTMs. We show that the
mapping of recurrent layers is very similar to the mapping of fully connected
layers and therefore the RPU concept can potentially provide large acceleration
factors for RNNs as well. In addition, we study the effect of various device
imperfections and system parameters on training performance. Symmetry of
updates becomes even more crucial for RNNs; already a few percent asymmetry
results in an increase in the test error compared to the ideal case trained
with floating point numbers. Furthermore, the input signal resolution to device
arrays needs to be at least 7 bits for successful training. However, we show
that a stochastic rounding scheme can reduce the input signal resolution back
to 5 bits. Further, we find that RPU device variations and hardware noise are
enough to mitigate overfitting, so that there is less need for using dropout.
We note that the models trained here are roughly 1500 times larger than the
fully connected network trained on the MNIST dataset in terms of the total number
of multiplication and summation operations performed per epoch. Thus, here we
attempt to study the validity of the RPU approach for large scale networks.
| null |
http://arxiv.org/abs/1806.00166v1
|
http://arxiv.org/pdf/1806.00166v1.pdf
| null |
[
"Tayfun Gokmen",
"Malte Rasch",
"Wilfried Haensch"
] |
[] | 2018-06-01T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
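The abstract above notes that a stochastic rounding scheme lets the input signal resolution drop from 7 to 5 bits. Here is a minimal NumPy sketch of one standard stochastic rounding scheme, which is unbiased in expectation; the grid range and function name are assumptions, and the paper's device-level details differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x, n_bits, lo=-1.0, hi=1.0):
    """Quantize x onto a uniform 2^n_bits-level grid on [lo, hi], rounding up
    with probability equal to the fractional position between grid points,
    so the quantizer is unbiased in expectation."""
    levels = 2 ** n_bits - 1
    step = (hi - lo) / levels
    pos = (np.clip(x, lo, hi) - lo) / step
    floor = np.floor(pos)
    frac = pos - floor
    return lo + (floor + (rng.random(x.shape) < frac)) * step

x = rng.uniform(-1, 1, 100_000)
q5 = stochastic_round(x, n_bits=5)
print(f"mean of signal: {x.mean():+.4f}   mean after 5-bit rounding: {q5.mean():+.4f}")
```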
https://paperswithcode.com/paper/fitting-a-deeply-nested-hierarchical-model-to
|
1806.02321
| null | null |
Fitting a deeply-nested hierarchical model to a large book review dataset using a moment-based estimator
|
We consider a particular instance of a common problem in recommender systems:
using a database of book reviews to inform user-targeted recommendations. In
our dataset, books are categorized into genres and sub-genres. To exploit this
nested taxonomy, we use a hierarchical model that enables information pooling
across across similar items at many levels within the genre hierarchy. The main
challenge in deploying this model is computational: the data sizes are large,
and fitting the model at scale using off-the-shelf maximum likelihood
procedures is prohibitive. To get around this computational bottleneck, we
extend a moment-based fitting procedure proposed for fitting single-level
hierarchical models to the general case of arbitrarily deep hierarchies. This
extension is an order of magnitude faster than standard maximum likelihood
procedures. The fitting method can be deployed beyond recommender systems to
general contexts with deeply-nested hierarchical generalized linear mixed
models.
| null |
http://arxiv.org/abs/1806.02321v1
|
http://arxiv.org/pdf/1806.02321v1.pdf
| null |
[
"Ningshan Zhang",
"Kyle Schmaus",
"Patrick O. Perry"
] |
[
"Recommendation Systems"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
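The moment-based fitting procedure above extends single-level estimators to arbitrarily deep hierarchies. As background, here is a sketch of classic method-of-moments (ANOVA-style) estimation for a one-level random-intercept model, the single-level case the paper generalizes; the simulated sizes and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-level random-intercept model: y_ij = mu + a_i + e_ij,
# Var(a_i) = tau2 (between groups), Var(e_ij) = sigma2 (within groups).
n_groups, n_per = 200, 30
a = rng.normal(0.0, np.sqrt(2.0), n_groups)                     # true tau2 = 2
y = 5.0 + a[:, None] + rng.normal(0.0, 1.0, (n_groups, n_per))  # true sigma2 = 1

group_means = y.mean(axis=1)
sigma2_hat = y.var(axis=1, ddof=1).mean()                # within-group moment
tau2_hat = group_means.var(ddof=1) - sigma2_hat / n_per  # Var(mean_i) = tau2 + sigma2/n
print(f"sigma2 ~ {sigma2_hat:.2f}, tau2 ~ {tau2_hat:.2f} (truth: 1, 2)")
```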
https://paperswithcode.com/paper/neural-control-variates-for-variance
|
1806.00159
| null | null |
Neural Control Variates for Variance Reduction
|
In statistics and machine learning, approximation of an intractable integration is often achieved by using the unbiased Monte Carlo estimator, but the variances of the estimation are generally high in many applications. Control variates approaches are well-known to reduce the variance of the estimation. These control variates are typically constructed by employing predefined parametric functions or polynomials, determined by using those samples drawn from the relevant distributions. Instead, we propose to construct those control variates by learning neural networks to handle the cases when test functions are complex. In many applications, obtaining a large number of samples for Monte Carlo estimation is expensive, which may result in overfitting when training a neural network. We thus further propose to employ auxiliary random variables induced by the original ones to extend data samples for training the neural networks. We apply the proposed control variates with augmented variables to thermodynamic integration and reinforcement learning. Experimental results demonstrate that our method can achieve significant variance reduction compared with other alternatives.
| null |
https://arxiv.org/abs/1806.00159v2
|
https://arxiv.org/pdf/1806.00159v2.pdf
| null |
[
"Ruosi Wan",
"Mingjun Zhong",
"Haoyi Xiong",
"Zhanxing Zhu"
] |
[
"Reinforcement Learning"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
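The control variates idea the abstract builds on is easy to show in miniature: subtract a correlated function with known mean to cut estimator variance. The sketch below uses a fixed control variate rather than a learned neural network as in the paper; the target integrand is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate E[e^X] for X ~ N(0, 1) (true value e^{1/2} ~ 1.6487).
x = rng.standard_normal(100_000)
f = np.exp(x)
g = x                                     # control variate with known mean 0
beta = np.cov(f, g)[0, 1] / np.var(g)     # estimated optimal coefficient
cv = f - beta * g                         # control-variate estimator samples

print(f"plain MC:        mean {f.mean():.4f}, sample var {f.var():.4f}")
print(f"control variate: mean {cv.mean():.4f}, sample var {cv.var():.4f}")
```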
https://paperswithcode.com/paper/understanding-convolution-for-semantic
|
1702.08502
| null | null |
Understanding Convolution for Semantic Segmentation
|
Recent advances in deep learning, especially deep convolutional neural
networks (CNNs), have led to significant improvement over previous semantic
segmentation systems. Here we show how to improve pixel-wise semantic
segmentation by manipulating convolution-related operations that are of both
theoretical and practical value. First, we design dense upsampling convolution
(DUC) to generate pixel-level prediction, which is able to capture and decode
more detailed information that is generally missing in bilinear upsampling.
Second, we propose a hybrid dilated convolution (HDC) framework in the encoding
phase. This framework 1) effectively enlarges the receptive fields (RF) of the
network to aggregate global information; 2) alleviates what we call the
"gridding issue" caused by the standard dilated convolution operation. We
evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a
state-of-the-art result of 80.1% mIOU in the test set at the time of submission. We
also have achieved state-of-the-art overall on the KITTI road estimation
benchmark and the PASCAL VOC2012 segmentation task. Our source code can be
found at https://github.com/TuSimple/TuSimple-DUC .
|
This framework 1) effectively enlarges the receptive fields (RF) of the network to aggregate global information; 2) alleviates what we call the "gridding issue" caused by the standard dilated convolution operation.
|
http://arxiv.org/abs/1702.08502v3
|
http://arxiv.org/pdf/1702.08502v3.pdf
| null |
[
"Panqu Wang",
"Pengfei Chen",
"Ye Yuan",
"Ding Liu",
"Zehua Huang",
"Xiaodi Hou",
"Garrison Cottrell"
] |
[
"Segmentation",
"Semantic Segmentation",
"Thermal Image Segmentation"
] | 2017-02-27T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.",
"full_name": "Bottleneck Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Bottleneck Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Bitcoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Bitcoin transaction not confirmed, your Bitcoin wallet not showing balance, or you're trying to recover a lost Bitcoin wallet, knowing where to get help is essential. That’s why the Bitcoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Bitcoin Customer Support Number +1-833-534-1729\r\nBitcoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Bitcoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Bitcoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Bitcoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Bitcoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Bitcoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Bitcoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Bitcoin Deposit Not Received\r\nIf someone has sent you Bitcoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Bitcoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Bitcoin Transaction Stuck or Pending\r\nSometimes your Bitcoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Bitcoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Bitcoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Bitcoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Bitcoin tech.\r\n\r\n24/7 Availability: Bitcoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Bitcoin Support and Wallet Issues\r\nQ1: Can Bitcoin support help me recover stolen BTC?\r\nA: While Bitcoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Bitcoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Bitcoin’s official number (Bitcoin is decentralized), it connects you to trained professionals experienced in resolving all major Bitcoin issues.\r\n\r\nFinal Thoughts\r\nBitcoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Bitcoin transaction not confirmed, your Bitcoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Bitcoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Bitcoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "Bitcoin Customer Service Number +1-833-534-1729",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/ecb88c5d11895a68e5f20917d27a0debbc0f0697/torch/nn/modules/conv.py#L260",
"description": "**Dilated Convolutions** are a type of [convolution](https://paperswithcode.com/method/convolution) that “inflate” the kernel by inserting holes between the kernel elements. An additional parameter $l$ (dilation rate) indicates how much the kernel is widened. There are usually $l-1$ spaces inserted between kernel elements. \r\n\r\nNote that concept has existed in past literature under different names, for instance the *algorithme a trous*, an algorithm for wavelet decomposition (Holschneider et al., 1987; Shensa, 1992).",
"full_name": "Dilated Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Dilated Convolution",
"source_title": "Multi-Scale Context Aggregation by Dilated Convolutions",
"source_url": "http://arxiv.org/abs/1511.07122v3"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
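The abstract above attributes the "gridding issue" to stacking dilated convolutions with a common rate, which the HDC rate schedule avoids. A toy 1-D way to see this (my own illustration, not code from the paper) is to track which input offsets can influence a single output through a stack of dilated kernels:

```python
def receptive_taps(rates, k=3):
    """Input offsets (relative to one output position) reachable through a
    stack of 1-D dilated convolutions with kernel size k and the given
    dilation rates."""
    offsets = {0}
    for r in rates:
        offsets = {o + j * r for o in offsets for j in range(k)}
    return sorted(offsets)

# Repeating rate 2 only ever touches even offsets: the "gridding issue".
print("rates (2, 2, 2):", receptive_taps([2, 2, 2]))
# An HDC-style schedule of consecutive rates covers the field densely.
print("rates (1, 2, 3):", receptive_taps([1, 2, 3]))
```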
https://paperswithcode.com/paper/speeding-up-context-based-sentence
|
1710.10380
| null | null |
Speeding up Context-based Sentence Representation Learning with Non-autoregressive Convolutional Decoding
|
Context plays an important role in human language understanding, thus it may
also be useful for machines learning vector representations of language. In
this paper, we explore an asymmetric encoder-decoder structure for unsupervised
context-based sentence representation learning. We carefully designed
experiments to show that neither an autoregressive decoder nor an RNN decoder
is required. After that, we designed a model which still keeps an RNN as the
encoder, while using a non-autoregressive convolutional decoder. We further
combine a suite of effective designs to significantly improve model efficiency
while also achieving better performance. Our model is trained on two different
large unlabelled corpora, and in both cases the transferability is evaluated on
a set of downstream NLP tasks. We empirically show that our model is simple and
fast while producing rich sentence representations that excel in downstream
tasks.
| null |
http://arxiv.org/abs/1710.10380v3
|
http://arxiv.org/pdf/1710.10380v3.pdf
|
WS 2018 7
|
[
"Shuai Tang",
"Hailin Jin",
"Chen Fang",
"Zhaowen Wang",
"Virginia R. de Sa"
] |
[
"Decoder",
"Representation Learning",
"Sentence"
] | 2017-10-28T00:00:00 |
https://aclanthology.org/W18-3009
|
https://aclanthology.org/W18-3009.pdf
|
speeding-up-context-based-sentence-1
| null |
[] |
https://paperswithcode.com/paper/interpreting-deep-learning-the-machine
|
1806.00148
| null | null |
Interpreting Deep Learning: The Machine Learning Rorschach Test?
|
Theoretical understanding of deep learning is one of the most important tasks
facing the statistics and machine learning communities. While deep neural
networks (DNNs) originated as engineering methods and models of biological
networks in neuroscience and psychology, they have quickly become a centerpiece
of the machine learning toolbox. Unfortunately, DNN adoption powered by recent
successes combined with the open-source nature of the machine learning
community, has outpaced our theoretical understanding. We cannot reliably
identify when and why DNNs will make mistakes. While in some applications
like text translation these mistakes may be comical and provide fun fodder
for research talks, a single error can be very costly in tasks like medical
imaging. As we utilize DNNs in increasingly sensitive applications, a better
understanding of their properties is thus imperative. Recent advances in DNN
theory are numerous and include many different sources of intuition, such as
learning theory, sparse signal analysis, physics, chemistry, and psychology. An
interesting pattern begins to emerge in the breadth of possible
interpretations. The seemingly limitless approaches are mostly constrained by
the lens with which the mathematical operations are viewed. Ultimately, the
interpretation of DNNs appears to mimic a type of Rorschach test --- a
psychological test wherein subjects interpret a series of seemingly ambiguous
ink-blots. Validation for DNN theory requires a convergence of the literature.
We must distinguish between universal results that are invariant to the
analysis perspective and those that are specific to a particular network
configuration. Simultaneously we must deal with the fact that many standard
statistical tools for quantifying generalization or empirically assessing
important network features are difficult to apply to DNNs.
| null |
http://arxiv.org/abs/1806.00148v1
|
http://arxiv.org/pdf/1806.00148v1.pdf
| null |
[
"Adam S. Charles"
] |
[
"BIG-bench Machine Learning",
"Deep Learning",
"Learning Theory"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/tandem-blocks-in-deep-convolutional-neural
|
1806.00145
| null |
rkmoiMbCb
|
Tandem Blocks in Deep Convolutional Neural Networks
|
Due to the success of residual networks (resnets) and related architectures,
shortcut connections have quickly become standard tools for building
convolutional neural networks. The explanations in the literature for the
apparent effectiveness of shortcuts are varied and often contradictory. We
hypothesize that shortcuts work primarily because they act as linear
counterparts to nonlinear layers. We test this hypothesis by using several
variations on the standard residual block, with different types of linear
connections, to build small image classification networks. Our experiments show
that other kinds of linear connections can be even more effective than the
identity shortcuts. Our results also suggest that the best type of linear
connection for a given application may depend on both network width and depth.
| null |
http://arxiv.org/abs/1806.00145v1
|
http://arxiv.org/pdf/1806.00145v1.pdf
|
ICLR 2018 1
|
[
"Chris Hettinger",
"Tanner Christensen",
"Jeffrey Humpherys",
"Tyler J. Jarvis"
] |
[
"General Classification",
"image-classification",
"Image Classification"
] | 2018-06-01T00:00:00 |
https://openreview.net/forum?id=rkmoiMbCb
|
https://openreview.net/pdf?id=rkmoiMbCb
|
tandem-blocks-in-deep-convolutional-neural-1
| null |
[] |
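The hypothesis above is that shortcuts work because they act as linear counterparts to nonlinear layers, so the identity can be swapped for other linear connections. A schematic dense-layer sketch of the two block types follows (the paper's blocks are convolutional; shapes and scales here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W):
    """Standard identity-shortcut block: nonlinear path plus x itself."""
    return relu(x @ W) + x

def tandem_block(x, W, L):
    """Replace the identity shortcut with a general learned *linear* map L."""
    return relu(x @ W) + x @ L

x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 8)) * 0.1
L = rng.standard_normal((8, 8)) * 0.1
print(residual_block(x, W).shape, tandem_block(x, W, L).shape)
```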
https://paperswithcode.com/paper/invariance-of-weight-distributions-in
|
1711.09090
| null | null |
Invariance of Weight Distributions in Rectified MLPs
|
An interesting approach to analyzing neural networks that has received
renewed attention is to examine the equivalent kernel of the neural network.
This is based on the fact that a fully connected feedforward network with one
hidden layer, a certain weight distribution, an activation function, and an
infinite number of neurons can be viewed as a mapping into a Hilbert space. We
derive the equivalent kernels of MLPs with ReLU or Leaky ReLU activations for
all rotationally-invariant weight distributions, generalizing a previous result
that required Gaussian weight distributions. Additionally, the Central Limit
Theorem is used to show that for certain activation functions, kernels
corresponding to layers with weight distributions having $0$ mean and finite
absolute third moment are asymptotically universal, and are well approximated
by the kernel corresponding to layers with spherical Gaussian weights. In deep
networks, as depth increases the equivalent kernel approaches a pathological
fixed point, which can be used to argue why training randomly initialized
networks can be difficult. Our results also have implications for weight
initialization.
| null |
http://arxiv.org/abs/1711.09090v3
|
http://arxiv.org/pdf/1711.09090v3.pdf
|
ICML 2018 7
|
[
"Russell Tsuchida",
"Farbod Roosta-Khorasani",
"Marcus Gallagher"
] |
[] | 2017-11-24T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1925
|
http://proceedings.mlr.press/v80/tsuchida18a/tsuchida18a.pdf
|
invariance-of-weight-distributions-in-1
| null |
[
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **Feedforward Network**, or a **Multilayer Perceptron (MLP)**, is a neural network with solely densely connected layers. This is the classic neural network architecture of the literature. It consists of inputs $x$ passed through units $h$ (of which there can be many layers) to predict a target $y$. Activation functions are generally chosen to be non-linear to allow for flexible functional approximation.\r\n\r\nImage Source: Deep Learning, Goodfellow et al",
"full_name": "Feedforward Network",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Feedforward Network",
"source_title": null,
"source_url": null
},
    {
      "code_snippet_url": "",
      "description": "**Rectified Linear Units**, or **ReLUs**, are activation functions that are linear in the positive dimension and zero in the negative dimension:\r\n\r\n$$ f\\left(x\\right) = \\max\\left(0, x\\right) $$\r\n\r\nThe kink at zero is the source of the non-linearity. Linearity in the positive dimension prevents gradient saturation for positive inputs, although the gradient is zero on half of the real line.",
      "full_name": "Rectified Linear Units",
      "introduced_year": 2000,
      "main_collection": {
        "area": "General",
        "description": "**Activation functions** are non-linear functions applied after the affine transformation in a neural network layer. They shape the network's expressiveness and trainability; ReLU and its variants have been the most common choices in recent years.",
        "name": "Activation Functions",
        "parent": null
      },
      "name": "ReLU",
      "source_title": null,
      "source_url": null
    },
    {
      "code_snippet_url": "",
      "description": "**Leaky ReLU** is an activation function based on a ReLU, but with a small non-zero slope for negative inputs instead of a flat slope:\r\n\r\n$$ f\\left(x\\right) = \\max\\left(\\alpha x, x\\right) $$\r\n\r\nwhere $\\alpha$ is a small constant such as 0.01. The non-zero negative slope keeps gradients flowing for negative inputs, mitigating the dying-ReLU problem.",
      "full_name": "Leaky ReLU",
      "introduced_year": 2014,
      "main_collection": {
        "area": "General",
        "description": "**Activation functions** are non-linear functions applied after the affine transformation in a neural network layer. They shape the network's expressiveness and trainability; ReLU and its variants have been the most common choices in recent years.",
        "name": "Activation Functions",
        "parent": null
      },
      "name": "Leaky ReLU",
      "source_title": null,
      "source_url": null
    }
] |
https://paperswithcode.com/paper/sea-surface-temperature-prediction-and
|
1806.00144
| null | null |
Sea surface temperature prediction and reconstruction using patch-level neural network representations
|
The forecasting and reconstruction of ocean and atmosphere dynamics from
satellite observation time series are key challenges. While model-driven
representations remain the classic approaches, data-driven representations
become more and more appealing to benefit from available large-scale
observation and simulation datasets. In this work we investigate the relevance
of recently introduced bilinear residual neural network representations, which
mimic numerical integration schemes such as Runge-Kutta, for the forecasting
and assimilation of geophysical fields from satellite-derived remote sensing
data. As a case-study, we consider satellite-derived Sea Surface Temperature
time series off South Africa, which involves intense and complex upper ocean
dynamics. Our numerical experiments demonstrate that the proposed patch-level
neural-network-based representations outperform other data-driven models,
including analog schemes, both in terms of forecasting and missing data
interpolation performance with a relative gain up to 50\% for highly dynamic
areas.
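As a rough illustration of the residual-network-as-integrator view mentioned above: a classical fourth-order Runge-Kutta step composes several evaluations of the dynamics $f$, and in the bilinear residual architecture a learned layer plays the role of $f$. A minimal NumPy sketch, with an assumed toy dynamics standing in for the learned layer:

```python
import numpy as np

def rk4_step(f, x, h):
    """One classical fourth-order Runge-Kutta step for dx/dt = f(x).
    In the bilinear residual-network view, a learned layer plays the
    role of f; here f is a fixed toy dynamics for illustration."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Assumed toy dynamics with a linear and a bilinear-in-x term,
# loosely mimicking quadratic advection terms in upper-ocean dynamics.
A = np.array([[0.0, -1.0], [1.0, 0.0]])
f = lambda x: A @ x + 0.05 * x * np.roll(x, 1)

x = np.array([1.0, 0.0])
for _ in range(100):          # forecast 100 steps of size h = 0.1
    x = rk4_step(f, x, 0.1)
print(x)
```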
| null |
http://arxiv.org/abs/1806.00144v1
|
http://arxiv.org/pdf/1806.00144v1.pdf
| null |
[
"Said Ouala",
"Cedric Herzet",
"Ronan Fablet"
] |
[
"Numerical Integration",
"Time Series",
"Time Series Analysis"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/modeling-preemptive-behaviors-for-uncommon
|
1806.00143
| null | null |
Modeling Preemptive Behaviors for Uncommon Hazardous Situations From Demonstrations
|
This paper presents a learning from demonstration approach to programming
safe, autonomous behaviors for uncommon driving scenarios. Simulation is used
to re-create a targeted driving situation, one containing a road-side hazard
creating a significant occlusion in an urban neighborhood, and collect optimal
driving behaviors from 24 users. The paper employs a key-frame-based approach
combined with an algorithm to linearly combine models in order to extend the
behavior to novel variations of the target situation. This approach is
theoretically agnostic to the kind of LfD framework used for modeling data and
our results suggest it generalizes well to variations containing an additional
number of hazards occurring in sequence. The linear combination algorithm is
informed by analysis of driving data, which also suggests that decision-making
algorithms need to consider a trade-off between road-rules and immediate
rewards to tackle some complex cases.
| null |
http://arxiv.org/abs/1806.00143v1
|
http://arxiv.org/pdf/1806.00143v1.pdf
| null |
[
"Priyam Parashar",
"Akansel Cosgun",
"Alireza Nakhaei",
"Kikuo Fujimura"
] |
[
"Decision Making"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/on-the-convergence-of-stochastic-gradient-1
|
1805.08114
| null | null |
On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes
|
Stochastic gradient descent is the method of choice for large scale
optimization of machine learning objective functions. Yet, its performance is
greatly variable and heavily depends on the choice of the stepsizes. This has
motivated a large body of research on adaptive stepsizes. However, there is
currently a gap in our theoretical understanding of these methods, especially
in the non-convex setting. In this paper, we start closing this gap: we
theoretically analyze in the convex and non-convex settings a generalized
version of the AdaGrad stepsizes. We show sufficient conditions for these
stepsizes to achieve almost sure asymptotic convergence of the gradients to
zero, proving the first guarantee for generalized AdaGrad stepsizes in the
non-convex setting. Moreover, we show that these stepsizes allow to
automatically adapt to the level of noise of the stochastic gradients in both
the convex and non-convex settings, interpolating between $O(1/T)$ and
$O(1/\sqrt{T})$, up to logarithmic terms.
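A minimal sketch of an AdaGrad-Norm-style stepsize of the kind analyzed here, on an assumed toy quadratic objective; the hyperparameters and the exponent offset `eps` are illustrative, and the stepsize deliberately uses only past gradients:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(x):
    """Noisy gradient of the toy objective f(x) = 0.5 * ||x||^2."""
    return x + 0.1 * rng.standard_normal(x.shape)

x = np.ones(5)
alpha, beta, eps = 1.0, 1.0, 0.1        # assumed hyperparameters
sq_sum = 0.0                            # running sum of ||g_i||^2
for t in range(1000):
    eta = alpha / (beta + sq_sum) ** (0.5 + eps)  # stepsize from past gradients only
    g = stochastic_grad(x)
    x = x - eta * g
    sq_sum += float(g @ g)
print(np.linalg.norm(x))                # near 0: adapted to the noise level
```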
| null |
http://arxiv.org/abs/1805.08114v3
|
http://arxiv.org/pdf/1805.08114v3.pdf
| null |
[
"Xiaoyu Li",
"Francesco Orabona"
] |
[] | 2018-05-21T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/Dawn-Of-Eve/nadir/blob/main/src/nadir/adagrad.py",
"description": "**AdaGrad** is a stochastic optimization method that adapts the learning rate to the parameters. It performs smaller updates for parameters associated with frequently occurring features, and larger updates for parameters associated with infrequently occurring features. In its update rule, Adagrad modifies the general learning rate $\\eta$ at each time step $t$ for every parameter $\\theta\\_{i}$ based on the past gradients for $\\theta\\_{i}$: \r\n\r\n$$ \\theta\\_{t+1, i} = \\theta\\_{t, i} - \\frac{\\eta}{\\sqrt{G\\_{t, ii} + \\epsilon}}g\\_{t, i} $$\r\n\r\nThe benefit of AdaGrad is that it eliminates the need to manually tune the learning rate; most leave it at a default value of $0.01$. Its main weakness is the accumulation of the squared gradients in the denominator. Since every added term is positive, the accumulated sum keeps growing during training, causing the learning rate to shrink and becoming infinitesimally small.\r\n\r\nImage: [Alec Radford](https://twitter.com/alecrad)",
"full_name": "AdaGrad",
"introduced_year": 2011,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "AdaGrad",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/on-curvature-aided-incremental-aggregated
|
1806.00125
| null | null |
Accelerating Incremental Gradient Optimization with Curvature Information
|
This paper studies an acceleration technique for incremental aggregated gradient ({\sf IAG}) method through the use of \emph{curvature} information for solving strongly convex finite sum optimization problems. These optimization problems of interest arise in large-scale learning applications. Our technique utilizes a curvature-aided gradient tracking step to produce accurate gradient estimates incrementally using Hessian information. We propose and analyze two methods utilizing the new technique, the curvature-aided IAG ({\sf CIAG}) method and the accelerated CIAG ({\sf A-CIAG}) method, which are analogous to gradient method and Nesterov's accelerated gradient method, respectively. Setting $\kappa$ to be the condition number of the objective function, we prove the $R$ linear convergence rates of $1 - \frac{4c_0 \kappa}{(\kappa+1)^2}$ for the {\sf CIAG} method, and $1 - \sqrt{\frac{c_1}{2\kappa}}$ for the {\sf A-CIAG} method, where $c_0,c_1 \leq 1$ are constants inversely proportional to the distance between the initial point and the optimal solution. When the initial iterate is close to the optimal solution, the $R$ linear convergence rates match with the gradient and accelerated gradient method, albeit {\sf CIAG} and {\sf A-CIAG} operate in an incremental setting with strictly lower computation complexity. Numerical experiments confirm our findings. The source codes used for this paper can be found on \url{http://github.com/hoitowai/ciag/}.
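One natural reading of the curvature-aided tracking step, a sketch consistent with the abstract rather than a verbatim transcription of the paper's update, is a first-order Taylor correction of each stale component gradient:

$$ g^{k} = \sum_{i=1}^{n} \left[ \nabla f_{i}\big(\theta^{\tau_i^k}\big) + \nabla^{2} f_{i}\big(\theta^{\tau_i^k}\big)\big(\theta^{k} - \theta^{\tau_i^k}\big) \right], \qquad \theta^{k+1} = \theta^{k} - \gamma\, g^{k}, $$

where $\tau_i^k$ denotes the iteration at which component $i$ was last visited, so only one component's gradient and Hessian information is refreshed per step.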
|
This paper studies an acceleration technique for incremental aggregated gradient ({\sf IAG}) method through the use of \emph{curvature} information for solving strongly convex finite sum optimization problems.
|
https://arxiv.org/abs/1806.00125v2
|
https://arxiv.org/pdf/1806.00125v2.pdf
| null |
[
"Hoi-To Wai",
"Wei Shi",
"Cesar A. Uribe",
"Angelia Nedich",
"Anna Scaglione"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/question-answering-through-transfer-learning
|
1702.02171
| null | null |
Question Answering through Transfer Learning from Large Fine-grained Supervision Data
|
We show that the task of question answering (QA) can significantly benefit
from the transfer learning of models trained on a different large, fine-grained
QA dataset. We achieve the state of the art in two well-studied QA datasets,
WikiQA and SemEval-2016 (Task 3A), through a basic transfer learning technique
from SQuAD. For WikiQA, our model outperforms the previous best model by more
than 8%. We demonstrate that finer supervision provides better guidance for
learning lexical and syntactic information than coarser supervision, through
quantitative results and visual analysis. We also show that a similar transfer
learning procedure achieves the state of the art on an entailment task.
|
We show that the task of question answering (QA) can significantly benefit from the transfer learning of models trained on a different large, fine-grained QA dataset.
|
http://arxiv.org/abs/1702.02171v6
|
http://arxiv.org/pdf/1702.02171v6.pdf
|
ACL 2017 7
|
[
"Sewon Min",
"Minjoon Seo",
"Hannaneh Hajishirzi"
] |
[
"Question Answering",
"Transfer Learning"
] | 2017-02-07T00:00:00 |
https://aclanthology.org/P17-2081
|
https://aclanthology.org/P17-2081.pdf
|
question-answering-through-transfer-learning-1
| null |
[] |
https://paperswithcode.com/paper/technical-report-inconsistency-in-answer-set
|
1806.00119
| null | null |
Technical Report: Inconsistency in Answer Set Programs and Extensions
|
Answer Set Programming (ASP) is a well-known problem solving approach based
on nonmonotonic logic programs. HEX-programs extend ASP with external atoms for
accessing arbitrary external information, which can introduce values that do
not appear in the input program. In this work we consider inconsistent ASP- and
HEX-programs, i.e., programs without answer sets. We study characterizations of
inconsistency, introduce a novel notion for explaining inconsistencies in terms
of input facts, analyze the complexity of reasoning tasks in context of
inconsistency analysis, and present techniques for computing inconsistency
reasons. This theoretical work is motivated by two concrete applications, which
we also present. The first one is the new modeling technique of query answering
over subprograms as a convenient alternative to the well-known saturation
technique. The second application is a new evaluation algorithm for
HEX-programs based on conflict-driven learning for programs with multiple
components: while for certain program classes previous techniques suffer an
evaluation bottleneck, the new approach shows significant, potentially
exponential speedup in our experiments. Since well-known ASP extensions such as
constraint ASP and DL-programs correspond to special cases of HEX, all
presented results are interesting beyond the specific formalism.
| null |
http://arxiv.org/abs/1806.00119v1
|
http://arxiv.org/pdf/1806.00119v1.pdf
| null |
[
"Christoph Redl"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/probabilistically-safe-robot-planning-with
|
1806.00109
| null | null |
Probabilistically Safe Robot Planning with Confidence-Based Human Predictions
|
In order to safely operate around humans, robots can employ predictive models
of human motion. Unfortunately, these models cannot capture the full complexity
of human behavior and necessarily introduce simplifying assumptions. As a
result, predictions may degrade whenever the observed human behavior departs
from the assumed structure, which can have negative implications for safety. In
this paper, we observe that how "rational" human actions appear under a
particular model can be viewed as an indicator of that model's ability to
describe the human's current motion. By reasoning about this model confidence
in a real-time Bayesian framework, we show that the robot can very quickly
modulate its predictions to become more uncertain when the model performs
poorly. Building on recent work in provably-safe trajectory planning, we
leverage these confidence-aware human motion predictions to generate assured
autonomous robot motion. Our new analysis combines worst-case tracking error
guarantees for the physical robot with probabilistic time-varying human
predictions, yielding a quantitative, probabilistic safety certificate. We
demonstrate our approach with a quadcopter navigating around a human.
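A minimal sketch of the confidence-update idea, assuming a Boltzmann (noisily rational) action likelihood and a discretized belief over the rationality coefficient; the grid, Q-values, and prior below are illustrative, not the paper's experimental setup:

```python
import numpy as np

def update_confidence(prior, betas, q_values, action):
    """Bayesian update of a belief over the rationality coefficient beta,
    with likelihood P(a | beta) proportional to exp(beta * Q(a))."""
    logits = np.outer(betas, q_values)                 # |betas| x |actions|
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    posterior = prior * probs[:, action]
    return posterior / posterior.sum()

betas = np.linspace(0.0, 5.0, 51)      # candidate rationality levels
belief = np.full_like(betas, 1.0 / len(betas))
q_values = np.array([1.0, 0.2, -0.5])  # model's action values this step

# Observing the "irrational" action 2 shifts mass toward low beta,
# i.e. low model confidence, which widens the human predictions.
belief = update_confidence(belief, betas, q_values, action=2)
print(betas[np.argmax(belief)])
```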
| null |
http://arxiv.org/abs/1806.00109v1
|
http://arxiv.org/pdf/1806.00109v1.pdf
| null |
[
"Jaime F. Fisac",
"Andrea Bajcsy",
"Sylvia L. Herbert",
"David Fridovich-Keil",
"Steven Wang",
"Claire J. Tomlin",
"Anca D. Dragan"
] |
[
"Trajectory Planning"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/meta-learning-probabilistic-inference-for
|
1805.09921
| null |
HkxStoC5F7
|
Meta-Learning Probabilistic Inference For Prediction
|
This paper introduces a new framework for data efficient and versatile learning. Specifically: 1) We develop ML-PIP, a general framework for Meta-Learning approximate Probabilistic Inference for Prediction. ML-PIP extends existing probabilistic interpretations of meta-learning to cover a broad class of methods. 2) We introduce VERSA, an instance of the framework employing a flexible and versatile amortization network that takes few-shot learning datasets as inputs, with arbitrary numbers of shots, and outputs a distribution over task-specific parameters in a single forward pass. VERSA substitutes optimization at test time with forward passes through inference networks, amortizing the cost of inference and relieving the need for second derivatives during training. 3) We evaluate VERSA on benchmark datasets where the method sets new state-of-the-art results, handles arbitrary numbers of shots, and for classification, arbitrary numbers of classes at train and test time. The power of the approach is then demonstrated through a challenging few-shot ShapeNet view reconstruction task.
|
2) We introduce VERSA, an instance of the framework employing a flexible and versatile amortization network that takes few-shot learning datasets as inputs, with arbitrary numbers of shots, and outputs a distribution over task-specific parameters in a single forward pass.
|
https://arxiv.org/abs/1805.09921v4
|
https://arxiv.org/pdf/1805.09921v4.pdf
|
ICLR 2019 5
|
[
"Jonathan Gordon",
"John Bronskill",
"Matthias Bauer",
"Sebastian Nowozin",
"Richard E. Turner"
] |
[
"Few-Shot Image Classification",
"Few-Shot Learning",
"Meta-Learning",
"Prediction"
] | 2018-05-24T00:00:00 |
https://openreview.net/forum?id=HkxStoC5F7
|
https://openreview.net/pdf?id=HkxStoC5F7
|
meta-learning-probabilistic-inference-for-1
| null |
[] |
https://paperswithcode.com/paper/monet-multiview-semi-supervised-keypoint-via
|
1806.00104
| null | null |
MONET: Multiview Semi-supervised Keypoint Detection via Epipolar Divergence
|
This paper presents MONET -- an end-to-end semi-supervised learning framework for a keypoint detector using multiview image streams. In particular, we consider general subjects such as non-human species where attaining a large scale annotated dataset is challenging. While multiview geometry can be used to self-supervise the unlabeled data, integrating the geometry into learning a keypoint detector is challenging due to representation mismatch. We address this mismatch by formulating a new differentiable representation of the epipolar constraint called epipolar divergence---a generalized distance from the epipolar lines to the corresponding keypoint distribution. Epipolar divergence characterizes when two view keypoint distributions produce zero reprojection error. We design a twin network that minimizes the epipolar divergence through stereo rectification that can significantly alleviate computational complexity and sampling aliasing in training. We demonstrate that our framework can localize customized keypoints of diverse species, e.g., humans, dogs, and monkeys.
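Schematically, the constraint being relaxed into a divergence is the classical epipolar one: a keypoint $\mathbf{x}$ in one view maps to the line $\mathbf{l}' = F\mathbf{x}$ in the other view, and a second-view keypoint distribution $p'$ attains zero reprojection error exactly when its mass lies on that line. A schematic form (not the paper's exact rectified formulation) is

$$ D\big(p' \,\big\|\, F\mathbf{x}\big) = \mathbb{E}_{\mathbf{x}' \sim p'}\!\left[ d\big(\mathbf{x}', F\mathbf{x}\big) \right], \qquad \mathbf{x}'^{\top} F\, \mathbf{x} = 0 \;\text{ for exact correspondences}. $$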
|
While multiview geometry can be used to self-supervise the unlabeled data, integrating the geometry into learning a keypoint detector is challenging due to representation mismatch.
|
https://arxiv.org/abs/1806.00104v2
|
https://arxiv.org/pdf/1806.00104v2.pdf
|
ICCV 2019 10
|
[
"Yuan Yao",
"Yasamin Jafarian",
"Hyun Soo Park"
] |
[
"Data Augmentation",
"Keypoint Detection"
] | 2018-05-31T00:00:00 |
http://openaccess.thecvf.com/content_ICCV_2019/html/Yao_MONET_Multiview_Semi-Supervised_Keypoint_Detection_via_Epipolar_Divergence_ICCV_2019_paper.html
|
http://openaccess.thecvf.com/content_ICCV_2019/papers/Yao_MONET_Multiview_Semi-Supervised_Keypoint_Detection_via_Epipolar_Divergence_ICCV_2019_paper.pdf
|
monet-multiview-semi-supervised-keypoint
| null |
[
{
"code_snippet_url": "",
"description": "Mixture model network (MoNet) is a general framework allowing to design convolutional deep architectures on non-Euclidean domains such as graphs and manifolds.\r\n\r\nImage and description from: [Geometric deep learning on graphs and manifolds using mixture model CNNs](https://arxiv.org/pdf/1611.08402.pdf)",
"full_name": "Mixture model network",
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "The Graph Methods include neural network architectures for learning on graphs with prior structure information, popularly called as Graph Neural Networks (GNNs).\r\n\r\nRecently, deep learning approaches are being extended to work on graph-structured data, giving rise to a series of graph neural networks addressing different challenges. Graph neural networks are particularly useful in applications where data are generated from non-Euclidean domains and represented as graphs with complex relationships. \r\n\r\nSome tasks where GNNs are widely used include [node classification](https://paperswithcode.com/task/node-classification), [graph classification](https://paperswithcode.com/task/graph-classification), [link prediction](https://paperswithcode.com/task/link-prediction), and much more. \r\n\r\nIn the taxonomy presented by [Wu et al. (2019)](https://paperswithcode.com/paper/a-comprehensive-survey-on-graph-neural), graph neural networks can be divided into four categories: **recurrent graph neural networks**, **convolutional graph neural networks**, **graph autoencoders**, and **spatial-temporal graph neural networks**.\r\n\r\nImage source: [A Comprehensive Survey on Graph NeuralNetworks](https://arxiv.org/pdf/1901.00596.pdf)",
"name": "Graph Models",
"parent": null
},
"name": "MoNet",
"source_title": "Geometric deep learning on graphs and manifolds using mixture model CNNs",
"source_url": "http://arxiv.org/abs/1611.08402v3"
}
] |
https://paperswithcode.com/paper/variational-inverse-control-with-events-a
|
1805.11686
| null | null |
Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition
|
The design of a reward function often poses a major practical challenge to
real-world applications of reinforcement learning. Approaches such as inverse
reinforcement learning attempt to overcome this challenge, but require expert
demonstrations, which can be difficult or expensive to obtain in practice. We
propose variational inverse control with events (VICE), which generalizes
inverse reinforcement learning methods to cases where full demonstrations are
not needed, such as when only samples of desired goal states are available. Our
method is grounded in an alternative perspective on control and reinforcement
learning, where an agent's goal is to maximize the probability that one or more
events will happen at some point in the future, rather than maximizing
cumulative rewards. We demonstrate the effectiveness of our methods on
continuous control tasks, with a focus on high-dimensional observations like
images where rewards are hard or even impossible to specify.
| null |
http://arxiv.org/abs/1805.11686v3
|
http://arxiv.org/pdf/1805.11686v3.pdf
|
NeurIPS 2018 12
|
[
"Justin Fu",
"Avi Singh",
"Dibya Ghosh",
"Larry Yang",
"Sergey Levine"
] |
[
"continuous-control",
"Continuous Control",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-29T00:00:00 |
http://papers.nips.cc/paper/8073-variational-inverse-control-with-events-a-general-framework-for-data-driven-reward-definition
|
http://papers.nips.cc/paper/8073-variational-inverse-control-with-events-a-general-framework-for-data-driven-reward-definition.pdf
|
variational-inverse-control-with-events-a-1
| null |
[] |
https://paperswithcode.com/paper/ratio-matching-mmd-nets-low-dimensional
|
1806.00101
| null |
SJg7spEYDS
|
Generative Ratio Matching Networks
|
Deep generative models can learn to generate realistic-looking images, but many of the most effective methods are adversarial and involve a saddlepoint optimization, which requires a careful balancing of training between a generator network and a critic network. Maximum mean discrepancy networks (MMD-nets) avoid this issue by using kernel as a fixed adversary, but unfortunately, they have not on their own been able to match the generative quality of adversarial training. In this work, we take their insight of using kernels as fixed adversaries further and present a novel method for training deep generative models that does not involve saddlepoint optimization. We call our method generative ratio matching or GRAM for short. In GRAM, the generator and the critic networks do not play a zero-sum game against each other, instead, they do so against a fixed kernel. Thus GRAM networks are not only stable to train like MMD-nets but they also match and beat the generative quality of adversarially trained generative networks.
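For reference, the fixed-kernel adversary here is the maximum mean discrepancy; a minimal biased MMD$^2$ estimator under a Gaussian RBF kernel looks like this (the bandwidth and sample sizes are arbitrary):

```python
import numpy as np

def mmd2(X, Y, bandwidth=1.0):
    """Biased estimate of squared MMD between sample sets X and Y
    under a Gaussian RBF kernel (the 'fixed adversary')."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(200, 2))
fake = rng.normal(0.5, 1.0, size=(200, 2))
print(mmd2(real, fake))   # > 0: the two distributions differ
```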
| null |
https://arxiv.org/abs/1806.00101v3
|
https://arxiv.org/pdf/1806.00101v3.pdf
|
ICLR 2020 1
|
[
"Akash Srivastava",
"Kai Xu",
"Michael U. Gutmann",
"Charles Sutton"
] |
[] | 2018-05-31T00:00:00 |
https://openreview.net/forum?id=SJg7spEYDS
|
https://openreview.net/pdf?id=SJg7spEYDS
| null | null |
[] |
https://paperswithcode.com/paper/multi-view-silhouette-and-depth-decomposition
|
1802.09987
| null | null |
Multi-View Silhouette and Depth Decomposition for High Resolution 3D Object Representation
|
We consider the problem of scaling deep generative shape models to
high-resolution. Drawing motivation from the canonical view representation of
objects, we introduce a novel method for the fast up-sampling of 3D objects in
voxel space through networks that perform super-resolution on the six
orthographic depth projections. This allows us to generate high-resolution
objects with more efficient scaling than methods which work directly in 3D. We
decompose the problem of 2D depth super-resolution into silhouette and depth
prediction to capture both structure and fine detail. This allows our method to
generate sharp edges more easily than an individual network. We evaluate our
work on multiple experiments concerning high-resolution 3D objects, and show
our system is capable of accurately predicting novel objects at resolutions as
large as 512$\mathbf{\times}$512$\mathbf{\times}$512 -- the highest resolution
reported for this task. We achieve state-of-the-art performance on 3D object
reconstruction from RGB images on the ShapeNet dataset, and further demonstrate
the first effective 3D super-resolution method.
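A minimal sketch of the six orthographic depth projections that the super-resolution networks operate on, assuming a boolean voxel grid; the cube and grid size are illustrative:

```python
import numpy as np

def orthographic_depths(vox):
    """Six orthographic depth maps of a boolean voxel grid: for each
    axis and viewing direction, the index of the first occupied voxel
    along the ray (grid size where the ray hits nothing)."""
    n = vox.shape[0]
    maps = []
    for axis in range(3):
        for flip in (False, True):
            v = np.flip(vox, axis=axis) if flip else vox
            occ = v.any(axis=axis)
            depth = np.argmax(v, axis=axis).astype(float)
            depth[~occ] = n                 # empty rays get max depth
            maps.append(depth)
    return maps

vox = np.zeros((32, 32, 32), dtype=bool)
vox[8:24, 8:24, 8:24] = True                # a solid cube
print([m.min() for m in orthographic_depths(vox)])  # 8.0 on every face
```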
|
We consider the problem of scaling deep generative shape models to high-resolution.
|
http://arxiv.org/abs/1802.09987v3
|
http://arxiv.org/pdf/1802.09987v3.pdf
|
NeurIPS 2018 12
|
[
"Edward Smith",
"Scott Fujimoto",
"David Meger"
] |
[
"3D Object Reconstruction",
"3D Object Super-Resolution",
"Depth Estimation",
"Depth Prediction",
"Object Reconstruction",
"Super-Resolution"
] | 2018-02-27T00:00:00 |
http://papers.nips.cc/paper/7883-multi-view-silhouette-and-depth-decomposition-for-high-resolution-3d-object-representation
|
http://papers.nips.cc/paper/7883-multi-view-silhouette-and-depth-decomposition-for-high-resolution-3d-object-representation.pdf
|
multi-view-silhouette-and-depth-decomposition-1
| null |
[] |
https://paperswithcode.com/paper/imaging-with-spads-and-dmds-seeing-through
|
1806.00094
| null | null |
Imaging with SPADs and DMDs: Seeing through Diffraction-Photons
|
This paper addresses the problem of imaging in the presence of diffraction-photons. Diffraction-photons arise from the low contrast ratio of DMDs ($\sim\,1000:1$), and very much degrade the quality of images captured by SPAD-based systems. Herein, a joint illumination-deconvolution scheme is designed to overcome diffraction-photons, enabling the acquisition of intensity and depth images. Additionally, a proof-of-concept experiment is conducted to demonstrate the viability of the designed scheme. It is shown that by co-designing the illumination and deconvolution phases of imaging, one can substantially overcome diffraction-photons.
| null |
https://arxiv.org/abs/1806.00094v2
|
https://arxiv.org/pdf/1806.00094v2.pdf
| null |
[
"Ibrahim Alsolami",
"Wolfgang Heidrich"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/peernets-exploiting-peer-wisdom-against
|
1806.00088
| null |
Sk4jFoA9K7
|
PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks
|
Deep learning systems have become ubiquitous in many aspects of our lives.
Unfortunately, it has been shown that such systems are vulnerable to
adversarial attacks, making them prone to potential unlawful uses. Designing
deep neural networks that are robust to adversarial attacks is a fundamental
step in making such systems safer and deployable in a broader variety of
applications (e.g. autonomous driving), but more importantly is a necessary
step to design novel and more advanced architectures built on new computational
paradigms rather than marginally building on the existing ones. In this paper
we introduce PeerNets, a novel family of convolutional networks alternating
classical Euclidean convolutions with graph convolutions to harness information
from a graph of peer samples. This results in a form of non-local forward
propagation in the model, where latent features are conditioned on the global
structure induced by the graph, that is up to 3 times more robust to a variety
of white- and black-box adversarial attacks compared to conventional
architectures with almost no drop in accuracy.
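A minimal sketch of the peer idea: smooth a latent feature toward its nearest neighbors in a bank of peer features, so sample-specific (e.g. adversarial) perturbations are damped by the global structure. This is a simplification, not the exact PeerNets graph-attention layer:

```python
import numpy as np

def peer_average(z, peers, k=3):
    """Replace a latent feature z by a softmax-weighted average of its
    k nearest peer features (non-local, graph-based propagation)."""
    d2 = ((peers - z) ** 2).sum(axis=1)
    idx = np.argsort(d2)[:k]                 # k nearest peers
    w = np.exp(-d2[idx])
    w /= w.sum()                             # attention-style weights
    return (w[:, None] * peers[idx]).sum(axis=0)

rng = np.random.default_rng(0)
peer_bank = rng.normal(size=(100, 16))       # features of peer samples
z = rng.normal(size=16)
z_smoothed = peer_average(z, peer_bank)      # perturbations are damped
print(np.linalg.norm(z - z_smoothed))
```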
|
Deep learning systems have become ubiquitous in many aspects of our lives.
|
http://arxiv.org/abs/1806.00088v1
|
http://arxiv.org/pdf/1806.00088v1.pdf
|
ICLR 2019 5
|
[
"Jan Svoboda",
"Jonathan Masci",
"Federico Monti",
"Michael M. Bronstein",
"Leonidas Guibas"
] |
[
"Autonomous Driving"
] | 2018-05-31T00:00:00 |
https://openreview.net/forum?id=Sk4jFoA9K7
|
https://openreview.net/pdf?id=Sk4jFoA9K7
|
peernets-exploiting-peer-wisdom-against-1
| null |
[] |
https://paperswithcode.com/paper/resisting-adversarial-attacks-using-gaussian
|
1806.00081
| null | null |
Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders
|
Susceptibility of deep neural networks to adversarial attacks poses a major
theoretical and practical challenge. All efforts to harden classifiers against
such attacks have seen limited success. Two distinct categories of samples to
which deep networks are vulnerable, "adversarial samples" and "fooling
samples", have been tackled separately so far due to the difficulty posed when
considered together. In this work, we show how one can address them both under
one unified framework. We tie a discriminative model with a generative model,
rendering the adversarial objective to entail a conflict. Our model has the
form of a variational autoencoder, with a Gaussian mixture prior on the latent
vector. Each mixture component of the prior distribution corresponds to one of
the classes in the data. This enables us to perform selective classification,
leading to the rejection of adversarial samples instead of misclassification.
Our method inherently provides a way of learning a selective classifier in a
semi-supervised scenario as well, which can resist adversarial attacks. We also
show how one can reclassify the rejected adversarial samples.
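A minimal sketch of the resulting selective classification rule, assuming unit-variance mixture components and an illustrative rejection threshold:

```python
import numpy as np

def classify_or_reject(z, means, threshold):
    """Pick the mixture component (class) with the highest latent
    log-likelihood, but reject if even the best score is too low,
    as for adversarial or fooling samples."""
    loglik = -0.5 * ((z - means) ** 2).sum(axis=1)   # up to a constant
    c = int(np.argmax(loglik))
    return c if loglik[c] > threshold else None      # None = reject

means = np.stack([np.eye(8)[i] * 4.0 for i in range(8)])  # one mean per class
z_clean = means[3] + 0.1 * np.ones(8)
z_weird = np.full(8, 2.0)                 # far from every component
print(classify_or_reject(z_clean, means, threshold=-8.0))  # -> 3
print(classify_or_reject(z_weird, means, threshold=-8.0))  # -> None
```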
| null |
http://arxiv.org/abs/1806.00081v2
|
http://arxiv.org/pdf/1806.00081v2.pdf
| null |
[
"Partha Ghosh",
"Arpan Losalka",
"Michael J. Black"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/rethinking-knowledge-graph-propagation-for
|
1805.11724
| null | null |
Rethinking Knowledge Graph Propagation for Zero-Shot Learning
|
Graph convolutional neural networks have recently shown great potential for
the task of zero-shot learning. These models are highly sample efficient as
related concepts in the graph structure share statistical strength allowing
generalization to new classes when faced with a lack of data. However,
multi-layer architectures, which are required to propagate knowledge to distant
nodes in the graph, dilute the knowledge by performing extensive Laplacian
smoothing at each layer and thereby consequently decrease performance. In order
to still enjoy the benefit brought by the graph structure while preventing
dilution of knowledge from distant nodes, we propose a Dense Graph Propagation
(DGP) module with carefully designed direct links among distant nodes. DGP
allows us to exploit the hierarchical graph structure of the knowledge graph
through additional connections. These connections are added based on a node's
relationship to its ancestors and descendants. A weighting scheme is further
used to weigh their contribution depending on the distance to the node to
improve information propagation in the graph. Combined with finetuning of the
representations in a two-stage training approach our method outperforms
state-of-the-art zero-shot learning approaches.
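A schematic form of the dense propagation with distance-dependent weights, written from the abstract's description, so the indices and normalization below are assumptions:

$$ \mathbf{H} = \sigma\!\left( \sum_{k=0}^{K} \alpha_{k}\, \mathbf{D}_{k}^{-1} \mathbf{A}_{k}\, \mathbf{X}\, \mathbf{\Theta} \right), $$

where $\mathbf{A}_{k}$ links each node directly to its ancestors or descendants at distance $k$, $\mathbf{D}_{k}$ is the corresponding degree matrix, and the learned scalars $\alpha_{k}$ down-weight the contribution of more distant nodes.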
|
Graph convolutional neural networks have recently shown great potential for the task of zero-shot learning.
|
http://arxiv.org/abs/1805.11724v3
|
http://arxiv.org/pdf/1805.11724v3.pdf
|
CVPR 2019 6
|
[
"Michael Kampffmeyer",
"Yinbo Chen",
"Xiaodan Liang",
"Hao Wang",
"Yu-jia Zhang",
"Eric P. Xing"
] |
[
"Zero-Shot Learning"
] | 2018-05-29T00:00:00 |
http://openaccess.thecvf.com/content_CVPR_2019/html/Kampffmeyer_Rethinking_Knowledge_Graph_Propagation_for_Zero-Shot_Learning_CVPR_2019_paper.html
|
http://openaccess.thecvf.com/content_CVPR_2019/papers/Kampffmeyer_Rethinking_Knowledge_Graph_Propagation_for_Zero-Shot_Learning_CVPR_2019_paper.pdf
|
rethinking-knowledge-graph-propagation-for-1
| null |
[] |
https://paperswithcode.com/paper/how-convolutional-neural-network-see-the
|
1804.11191
| null | null |
How convolutional neural network see the world - A survey of convolutional neural network visualization methods
|
Nowadays, the Convolutional Neural Networks (CNNs) have achieved impressive
performance on many computer vision related tasks, such as object detection,
image recognition, image retrieval, etc. These achievements benefit from the
CNNs outstanding capability to learn the input features with deep layers of
neuron structures and iterative training process. However, these learned
features are hard to identify and interpret from a human vision perspective,
causing a lack of understanding of the CNNs internal working mechanism. To
improve the CNN interpretability, the CNN visualization is well utilized as a
qualitative analysis method, which translates the internal features into
visually perceptible patterns. And many CNN visualization works have been
proposed in the literature to interpret the CNN in perspectives of network
structure, operation, and semantic concept. In this paper, we expect to provide
a comprehensive survey of several representative CNN visualization methods,
including Activation Maximization, Network Inversion, Deconvolutional Neural
Networks (DeconvNet), and Network Dissection based visualization. These methods
are presented in terms of motivations, algorithms, and experiment results.
Based on these visualization methods, we also discuss their practical
applications to demonstrate the significance of the CNN interpretability in
areas of network design, optimization, security enhancement, etc.
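Among the surveyed methods, Activation Maximization is the easiest to sketch: gradient ascent on the input image to maximize a chosen unit's activation, with a small regularizer to keep the image well-behaved. The tiny untrained CNN below is a stand-in so the sketch is self-contained:

```python
import torch
import torch.nn as nn

# Stand-in for a trained network (an assumption for self-containedness).
net = nn.Sequential(nn.Conv2d(3, 8, 5), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                    nn.Flatten(), nn.Linear(8, 10))
net.eval()

x = torch.zeros(1, 3, 64, 64, requires_grad=True)   # the image being optimized
opt = torch.optim.Adam([x], lr=0.1)
unit = 3                                             # unit/class to visualize
for _ in range(200):
    opt.zero_grad()
    act = net(x)[0, unit]
    loss = -act + 1e-3 * x.pow(2).sum()   # maximize activation, L2-regularize
    loss.backward()
    opt.step()
print(float(net(x)[0, unit]))             # the unit's activation has increased
```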
|
Nowadays, the Convolutional Neural Networks (CNNs) have achieved impressive performance on many computer vision related tasks, such as object detection, image recognition, image retrieval, etc.
|
http://arxiv.org/abs/1804.11191v2
|
http://arxiv.org/pdf/1804.11191v2.pdf
| null |
[
"Zhuwei Qin",
"Fuxun Yu",
"ChenChen Liu",
"Xiang Chen"
] |
[
"Image Retrieval",
"object-detection",
"Object Detection",
"Retrieval"
] | 2018-04-30T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**Network Dissection** is an interpretability method for [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks) that evaluates the alignment between individual hidden units and a set of visual semantic concepts. By identifying the best alignments, units are given human interpretable labels across a range of objects, parts, scenes, textures, materials, and colors. \r\n\r\nThe measurement of interpretability proceeds in three steps:\r\n\r\n- Identify a broad set of human-labeled visual concepts.\r\n- Gather the response of the hidden variables to known concepts.\r\n- Quantify alignment of hidden variable−concept pairs.",
"full_name": "Network Dissection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Interpretability Methods** seek to explain the predictions made by neural networks by introducing mechanisms to enduce or enforce interpretability. For example, LIME approximates the neural network with a locally interpretable model. Below you can find a continuously updating list of interpretability methods.",
"name": "Interpretability",
"parent": null
},
"name": "Network Dissection",
"source_title": "Interpreting Deep Visual Representations via Network Dissection",
"source_url": "http://arxiv.org/abs/1711.05611v2"
},
{
"code_snippet_url": null,
"description": "Please enter a description about the method here",
"full_name": "Interpretability",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.",
"name": "Image Models",
"parent": null
},
"name": "Interpretability",
"source_title": "CAM: Causal additive models, high-dimensional order search and penalized regression",
"source_url": "http://arxiv.org/abs/1310.1533v2"
}
] |
https://paperswithcode.com/paper/darts-deceiving-autonomous-cars-with-toxic
|
1802.06430
| null | null |
DARTS: Deceiving Autonomous Cars with Toxic Signs
|
Sign recognition is an integral part of autonomous cars. Any
misclassification of traffic signs can potentially lead to a multitude of
disastrous consequences, ranging from a life-threatening accident to even a
large-scale interruption of transportation services relying on autonomous cars.
In this paper, we propose and examine security attacks against sign recognition
systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed
attacks DARTS). In particular, we introduce two novel methods to create these
toxic signs. First, we propose Out-of-Distribution attacks, which expand the
scope of adversarial examples by enabling the adversary to generate these
starting from an arbitrary point in the image space compared to prior attacks
which are restricted to existing training/test data (In-Distribution). Second,
we present the Lenticular Printing attack, which relies on an optical
phenomenon to deceive the traffic sign recognition system. We extensively
evaluate the effectiveness of the proposed attacks in both virtual and
real-world settings and consider both white-box and black-box threat models.
Our results demonstrate that the proposed attacks are successful under both
settings and threat models. We further show that Out-of-Distribution attacks
can outperform In-Distribution attacks on classifiers defended using the
adversarial training defense, exposing a new attack vector for these defenses.
|
In this paper, we propose and examine security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS).
|
http://arxiv.org/abs/1802.06430v3
|
http://arxiv.org/pdf/1802.06430v3.pdf
| null |
[
"Chawin Sitawarin",
"Arjun Nitin Bhagoji",
"Arsalan Mosenia",
"Mung Chiang",
"Prateek Mittal"
] |
[
"Traffic Sign Recognition"
] | 2018-02-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/efficient-low-rank-multimodal-fusion-with
|
1806.00064
| null | null |
Efficient Low-rank Multimodal Fusion with Modality-Specific Factors
|
Multimodal research is an emerging field of artificial intelligence, and one
of the main research problems in this field is multimodal fusion. The fusion of
multimodal data is the process of integrating multiple unimodal representations
into one compact multimodal representation. Previous research in this field has
exploited the expressiveness of tensors for multimodal representation. However,
these methods often suffer from exponential increase in dimensions and in
computational complexity introduced by transformation of input into tensor. In
this paper, we propose the Low-rank Multimodal Fusion method, which performs
multimodal fusion using low-rank tensors to improve efficiency. We evaluate our
model on three different tasks: multimodal sentiment analysis, speaker trait
analysis, and emotion recognition. Our model achieves competitive results on
all these tasks while drastically reducing computational complexity. Additional
experiments also show that our model can perform robustly for a wide range of
low-rank settings, and is indeed much more efficient in both training and
inference compared to other methods that utilize tensor representations.
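A minimal sketch of the low-rank fusion computation, with three modalities and illustrative dimensions: each feature (with a constant 1 appended so lower-order interactions survive) is projected by $r$ rank-specific factors, the slices are multiplied elementwise across modalities, and the rank dimension is summed, so the full outer-product tensor is never formed:

```python
import numpy as np

def low_rank_fusion(factors, feats):
    """Fuse modality features via low-rank factors: sum over rank of the
    elementwise product of per-modality projections."""
    fused = 1.0
    for W, z in zip(factors, feats):              # W: (r, d_m + 1, d_out)
        z1 = np.append(z, 1.0)                    # append the constant 1
        fused = fused * np.einsum('rdo,d->ro', W, z1)   # (r, d_out)
    return fused.sum(axis=0)                      # sum over rank -> (d_out,)

rng = np.random.default_rng(0)
dims, rank, d_out = [10, 20, 30], 4, 8            # e.g. audio / video / text
factors = [rng.normal(size=(rank, d + 1, d_out)) * 0.1 for d in dims]
feats = [rng.normal(size=d) for d in dims]
print(low_rank_fusion(factors, feats).shape)      # (8,)
```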
|
Previous research in this field has exploited the expressiveness of tensors for multimodal representation.
|
http://arxiv.org/abs/1806.00064v1
|
http://arxiv.org/pdf/1806.00064v1.pdf
|
ACL 2018 7
|
[
"Zhun Liu",
"Ying Shen",
"Varun Bharadhwaj Lakshminarasimhan",
"Paul Pu Liang",
"Amir Zadeh",
"Louis-Philippe Morency"
] |
[
"Emotion Recognition",
"Multimodal Sentiment Analysis",
"Sentiment Analysis"
] | 2018-05-31T00:00:00 |
https://aclanthology.org/P18-1209
|
https://aclanthology.org/P18-1209.pdf
|
efficient-low-rank-multimodal-fusion-with-1
| null |
[] |
https://paperswithcode.com/paper/a-highly-parallel-fpga-implementation-of
|
1806.01087
| null | null |
A Highly Parallel FPGA Implementation of Sparse Neural Network Training
|
We demonstrate an FPGA implementation of a parallel and reconfigurable
architecture for sparse neural networks, capable of on-chip training and
inference. The network connectivity uses pre-determined, structured sparsity to
significantly reduce complexity by lowering memory and computational
requirements. The architecture uses a notion of edge-processing, leading to
efficient pipelining and parallelization. Moreover, the device can be
reconfigured to trade off resource utilization with training time to fit
networks and datasets of varying sizes. The combined effects of complexity
reduction and easy reconfigurability enable significantly greater exploration
of network hyperparameters and structures on-chip. As proof of concept, we show
implementation results on an Artix-7 FPGA.
|
We demonstrate an FPGA implementation of a parallel and reconfigurable architecture for sparse neural networks, capable of on-chip training and inference.
|
http://arxiv.org/abs/1806.01087v2
|
http://arxiv.org/pdf/1806.01087v2.pdf
| null |
[
"Sourya Dey",
"Diandian Chen",
"Zongyang Li",
"Souvik Kundu",
"Kuan-Wen Huang",
"Keith M. Chugg",
"Peter A. Beerel"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/defending-against-machine-learning-model
|
1806.00054
| null | null |
Defending Against Machine Learning Model Stealing Attacks Using Deceptive Perturbations
|
Machine learning models are vulnerable to simple model stealing attacks if
the adversary can obtain output labels for chosen inputs. To protect against
these attacks, it has been proposed to limit the information provided to the
adversary by omitting probability scores, significantly impacting the utility
of the provided service. In this work, we illustrate how a service provider can
still provide useful, albeit misleading, class probability information, while
significantly limiting the success of the attack. Our defense forces the
adversary to discard the class probabilities, requiring significantly more
queries before they can train a model with comparable performance. We evaluate
several attack strategies, model architectures, and hyperparameters under
varying adversarial models, and evaluate the efficacy of our defense against
the strongest adversary. Finally, we quantify the amount of noise injected into
the class probabilities to measure the loss in utility, e.g., adding 1.26 nats
per query on CIFAR-10 and 3.27 on MNIST. Our evaluation shows our defense can
degrade the accuracy of the stolen model by at least 20%, or require up to 64
times more queries while keeping the accuracy of the protected model almost
intact.
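A minimal sketch of the defense's contract, with an assumed perturbation mechanism (log-space noise) rather than the paper's exact one: the returned class probabilities are distorted, but the argmax label the honest service would return is preserved:

```python
import numpy as np

def deceive(probs, rng, scale=0.5):
    """Return misleading class probabilities whose argmax matches the
    honest prediction."""
    top = int(np.argmax(probs))
    logits = np.log(probs) + scale * rng.standard_normal(probs.shape)
    noisy = np.exp(logits - logits.max())
    noisy /= noisy.sum()
    j = int(np.argmax(noisy))
    if j != top:                      # restore the top-1 label if noise flipped it
        noisy[top], noisy[j] = noisy[j], noisy[top]
    return noisy

rng = np.random.default_rng(0)
p = np.array([0.70, 0.20, 0.05, 0.05])
print(deceive(p, rng))               # distorted scores, same argmax as p
```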
| null |
http://arxiv.org/abs/1806.00054v4
|
http://arxiv.org/pdf/1806.00054v4.pdf
| null |
[
"Taesung Lee",
"Benjamin Edwards",
"Ian Molloy",
"Dong Su"
] |
[
"BIG-bench Machine Learning"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/interpretable-set-functions
|
1806.00050
| null | null |
Interpretable Set Functions
|
We propose learning flexible but interpretable functions that aggregate a
variable-length set of permutation-invariant feature vectors to predict a
label. We use a deep lattice network model so we can architect the model
structure to enhance interpretability, and add monotonicity constraints between
inputs-and-outputs. We then use the proposed set function to automate the
engineering of dense, interpretable features from sparse categorical features,
which we call semantic feature engine. Experiments on real-world data show the
achieved accuracy is similar to deep sets or deep neural networks, and is
easier to debug and understand.
| null |
http://arxiv.org/abs/1806.00050v1
|
http://arxiv.org/pdf/1806.00050v1.pdf
| null |
[
"Andrew Cotter",
"Maya Gupta",
"Heinrich Jiang",
"James Muller",
"Taman Narayan",
"Serena Wang",
"Tao Zhu"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Deep Sets",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Backbone Architectures",
"parent": null
},
"name": "Deep Sets",
"source_title": "Deep Sets",
"source_url": "http://arxiv.org/abs/1703.06114v3"
}
] |
https://paperswithcode.com/paper/following-high-level-navigation-instructions
|
1806.00047
| null | null |
Following High-level Navigation Instructions on a Simulated Quadcopter with Imitation Learning
|
We introduce a method for following high-level navigation instructions by
mapping directly from images, instructions and pose estimates to continuous
low-level velocity commands for real-time control. The Grounded Semantic
Mapping Network (GSMN) is a fully-differentiable neural network architecture
that builds an explicit semantic map in the world reference frame by
incorporating a pinhole camera projection model within the network. The
information stored in the map is learned from experience, while the
local-to-world transformation is computed explicitly. We train the model using
DAggerFM, a modified variant of DAgger that trades tabular convergence
guarantees for improved training speed and memory use. We test GSMN in virtual
environments on a realistic quadcopter simulator and show that incorporating an
explicit mapping and grounding modules allows GSMN to outperform strong neural
baselines and almost reach an expert policy performance. Finally, we analyze
the learned map representations and show that using an explicit map leads to an
interpretable instruction-following model.
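The pinhole projection embedded in the network is the standard one; a minimal NumPy sketch with assumed intrinsics:

```python
import numpy as np

def project(K, R, t, X_world):
    """Pinhole projection of world points to pixels: u ~ K [R | t] X."""
    X_cam = (R @ X_world.T).T + t                 # world -> camera frame
    uvw = (K @ X_cam.T).T                         # camera -> image plane
    return uvw[:, :2] / uvw[:, 2:3]               # perspective divide

K = np.array([[320.0,   0.0, 320.0],              # assumed intrinsics
              [  0.0, 320.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])       # camera 2 m from the origin
pts = np.array([[0.0, 0.0, 0.0], [0.5, 0.25, 0.0]])
print(project(K, R, t, pts))                      # pixel coordinates
```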
|
We introduce a method for following high-level navigation instructions by mapping directly from images, instructions and pose estimates to continuous low-level velocity commands for real-time control.
|
http://arxiv.org/abs/1806.00047v1
|
http://arxiv.org/pdf/1806.00047v1.pdf
| null |
[
"Valts Blukis",
"Nataly Brukhim",
"Andrew Bennett",
"Ross A. Knepper",
"Yoav Artzi"
] |
[
"Imitation Learning",
"Instruction Following"
] | 2018-05-31T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/dropping-convexity-for-more-efficient-and
|
1702.08134
| null | null |
Dropping Convexity for More Efficient and Scalable Online Multiview Learning
|
Multiview representation learning is very popular for latent factor analysis.
It naturally arises in many data analysis, machine learning, and information
retrieval applications to model dependent structures among multiple data
sources. For computational convenience, existing approaches usually formulate
the multiview representation learning as convex optimization problems, where
global optima can be obtained by certain algorithms in polynomial time.
However, many pieces of evidence have corroborated that heuristic nonconvex
approaches also have good empirical computational performance and convergence
to the global optima, although there is a lack of theoretical justification.
Such a gap between theory and practice motivates us to study a nonconvex
formulation for multiview representation learning, which can be efficiently
solved by a simple stochastic gradient descent (SGD) algorithm. We first
illustrate the geometry of the nonconvex formulation; Then, we establish
asymptotic global rates of convergence to the global optima by diffusion
approximations. Numerical experiments are provided to support our theory.
| null |
http://arxiv.org/abs/1702.08134v9
|
http://arxiv.org/pdf/1702.08134v9.pdf
| null |
[
"Zhehui Chen",
"Lin F. Yang",
"Chris J. Li",
"Tuo Zhao"
] |
[
"Information Retrieval",
"Multiview Learning",
"Representation Learning",
"Retrieval"
] | 2017-02-27T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/text-normalization-using-memory-augmented
|
1806.00044
| null | null |
Text normalization using memory augmented neural networks
|
We perform text normalization, i.e. the transformation of words from the
written to the spoken form, using a memory augmented neural network. With the
addition of dynamic memory access and storage mechanism, we present a neural
architecture that will serve as a language-agnostic text normalization system
while avoiding the kind of unacceptable errors made by the LSTM-based recurrent
neural networks. By successfully reducing the frequency of such mistakes, we
show that this novel architecture is indeed a better alternative. Our proposed
system requires significantly smaller amounts of data, training time and compute
resources. Additionally, we perform data up-sampling, circumventing the data
sparsity problem in some semiotic classes, to show that sufficient examples in
any particular class can improve the performance of our text normalization
system. Although a few occurrences of these errors still remain in certain
semiotic classes, we demonstrate that memory augmented networks with
meta-learning capabilities can open many doors to a superior text normalization
system.
|
We perform text normalization, i. e. the transformation of words from the written to the spoken form, using a memory augmented neural network.
|
http://arxiv.org/abs/1806.00044v3
|
http://arxiv.org/pdf/1806.00044v3.pdf
| null |
[
"Subhojeet Pramanik",
"Aman Hussain"
] |
[
"Meta-Learning",
"Text Normalization"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/verifying-properties-of-binarized-deep-neural
|
1709.06662
| null | null |
Verifying Properties of Binarized Deep Neural Networks
|
Understanding properties of deep neural networks is an important challenge in
deep learning. In this paper, we take a step in this direction by proposing a
rigorous way of verifying properties of a popular class of neural networks,
Binarized Neural Networks, using the well-developed means of Boolean
satisfiability. Our main contribution is a construction that creates a
representation of a binarized neural network as a Boolean formula. Our encoding
is the first exact Boolean representation of a deep neural network. Using this
encoding, we leverage the power of modern SAT solvers along with a proposed
counterexample-guided search procedure to verify various properties of these
networks. A particular focus will be on the critical property of robustness to
adversarial perturbations. For this property, our experimental results
demonstrate that our approach scales to medium-size deep neural networks used
in image classification tasks. To the best of our knowledge, this is the first
work on verifying properties of deep neural networks using an exact Boolean
encoding of the network.
| null |
http://arxiv.org/abs/1709.06662v2
|
http://arxiv.org/pdf/1709.06662v2.pdf
| null |
[
"Nina Narodytska",
"Shiva Prasad Kasiviswanathan",
"Leonid Ryzhyk",
"Mooly Sagiv",
"Toby Walsh"
] |
[
"image-classification",
"Image Classification"
] | 2017-09-19T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/efficient-algorithms-and-lower-bounds-for
|
1806.00040
| null | null |
Efficient Algorithms and Lower Bounds for Robust Linear Regression
|
We study the problem of high-dimensional linear regression in a robust model
where an $\epsilon$-fraction of the samples can be adversarially corrupted. We
focus on the fundamental setting where the covariates of the uncorrupted
samples are drawn from a Gaussian distribution $\mathcal{N}(0, \Sigma)$ on
$\mathbb{R}^d$. We give nearly tight upper bounds and computational lower
bounds for this problem. Specifically, our main contributions are as follows:
For the case that the covariance matrix is known to be the identity, we give
a sample near-optimal and computationally efficient algorithm that outputs a
candidate hypothesis vector $\widehat{\beta}$ which approximates the unknown
regression vector $\beta$ within $\ell_2$-norm $O(\epsilon \log(1/\epsilon)
\sigma)$, where $\sigma$ is the standard deviation of the random observation
noise. An error of $\Omega (\epsilon \sigma)$ is information-theoretically
necessary, even with infinite sample size. Prior work gave an algorithm for
this problem with sample complexity $\tilde{\Omega}(d^2/\epsilon^2)$ whose
error guarantee scales with the $\ell_2$-norm of $\beta$.
For the case of unknown covariance, we show that we can efficiently achieve
the same error guarantee as in the known covariance case using an additional
$\tilde{O}(d^2/\epsilon^2)$ unlabeled examples. On the other hand, an error of
$O(\epsilon \sigma)$ can be information-theoretically attained with
$O(d/\epsilon^2)$ samples. We prove a Statistical Query (SQ) lower bound
providing evidence that this quadratic tradeoff in the sample size is inherent.
More specifically, we show that any polynomial time SQ learning algorithm for
robust linear regression (in Huber's contamination model) with estimation
complexity $O(d^{2-c})$, where $c>0$ is an arbitrarily small constant, must
incur an error of $\Omega(\sqrt{\epsilon} \sigma)$.
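The following hedged sketch only illustrates Huber's $\epsilon$-contamination setting and a naive iterative-trimming heuristic; it is not the paper's algorithm, which comes with the near-optimal guarantees stated above.

```python
# Toy eps-contaminated regression data plus a naive trimming estimator;
# NOT the paper's method, just an illustration of the corruption model.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps, sigma = 2000, 5, 0.1, 0.5
beta = rng.normal(size=d)
X = rng.normal(size=(n, d))                      # covariates ~ N(0, I)
y = X @ beta + sigma * rng.normal(size=n)
bad = rng.random(n) < eps                        # adversarial fraction
y[bad] += 50.0                                   # gross corruptions

b = np.linalg.lstsq(X, y, rcond=None)[0]
for _ in range(10):                              # refit on smallest residuals
    r = np.abs(y - X @ b)
    keep = r <= np.quantile(r, 1 - eps)
    b = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]

print(np.linalg.norm(b - beta))                  # ell_2 estimation error
```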
| null |
http://arxiv.org/abs/1806.00040v1
|
http://arxiv.org/pdf/1806.00040v1.pdf
| null |
[
"Ilias Diakonikolas",
"Weihao Kong",
"Alistair Stewart"
] |
[
"regression"
] | 2018-05-31T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the mean square error between the predicted values $\\hat{y} = \\textbf{X}\\hat{\\beta}$ and actual values $y$: $\\left(y-\\textbf{X}\\beta\\right)^{2}$.\r\n\r\nWe can also define the problem in probabilistic terms as a generalized linear model (GLM) where the pdf is a Gaussian distribution, and then perform maximum likelihood estimation to estimate $\\hat{\\beta}$.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Linear_regression)",
"full_name": "Linear Regression",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.",
"name": "Generalized Linear Models",
"parent": null
},
"name": "Linear Regression",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/assessing-generative-models-via-precision-and
|
1806.00035
| null | null |
Assessing Generative Models via Precision and Recall
|
Recent advances in generative modeling have led to an increased interest in
the study of statistical divergences as means of model comparison. Commonly
used evaluation methods, such as the Frechet Inception Distance (FID),
correlate well with the perceived quality of samples and are sensitive to mode
dropping. However, these metrics are unable to distinguish between different
failure cases since they only yield one-dimensional scores. We propose a novel
definition of precision and recall for distributions which disentangles the
divergence into two separate dimensions. The proposed notion is intuitive,
retains desirable properties, and naturally leads to an efficient algorithm
that can be used to evaluate generative models. We relate this notion to total
variation as well as to recent evaluation metrics such as Inception Score and
FID. To demonstrate the practical utility of the proposed approach we perform
an empirical study on several variants of Generative Adversarial Networks and
Variational Autoencoders. In an extensive set of experiments we show that the
proposed metric is able to disentangle the quality of generated samples from
the coverage of the target distribution.
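A hedged sketch of the min-based precision/recall curve for two discrete distributions follows; I assume `p` is the reference distribution and `q` the model distribution (swap them if the paper's orientation differs), and the identity $\beta(\lambda) = \alpha(\lambda)/\lambda$ used below follows from the definitions.

```python
# Precision/recall curve between two histograms via the min-based
# formulation; orientation of p (reference) and q (model) is an assumption.
import numpy as np

def prd_curve(p, q, num_angles=201):
    """alpha(lam), beta(lam) for lam = tan(theta), theta in (0, pi/2)."""
    thetas = np.linspace(1e-6, np.pi / 2 - 1e-6, num_angles)
    lams = np.tan(thetas)
    alpha = np.array([np.minimum(lam * p, q).sum() for lam in lams])  # precision
    beta = alpha / lams                                               # recall
    return alpha, beta

p = np.array([0.5, 0.5, 0.0])      # reference puts no mass on bin 3
q = np.array([0.25, 0.25, 0.5])    # model drops half its mass off-support
alpha, beta = prd_curve(p, q)
print(alpha.max(), beta.max())     # precision capped near 0.5, recall near 1
```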
|
Recent advances in generative modeling have led to an increased interest in the study of statistical divergences as means of model comparison.
|
http://arxiv.org/abs/1806.00035v2
|
http://arxiv.org/pdf/1806.00035v2.pdf
|
NeurIPS 2018 12
|
[
"Mehdi S. M. Sajjadi",
"Olivier Bachem",
"Mario Lucic",
"Olivier Bousquet",
"Sylvain Gelly"
] |
[] | 2018-05-31T00:00:00 |
http://papers.nips.cc/paper/7769-assessing-generative-models-via-precision-and-recall
|
http://papers.nips.cc/paper/7769-assessing-generative-models-via-precision-and-recall.pdf
|
assessing-generative-models-via-precision-and-1
| null |
[] |
https://paperswithcode.com/paper/fast-diverse-and-accurate-image-captioning
|
1805.12589
| null | null |
Fast, Diverse and Accurate Image Captioning Guided By Part-of-Speech
|
Image captioning is an ambiguous problem, with many suitable captions for an
image. To address ambiguity, beam search is the de facto method for sampling
multiple captions. However, beam search is computationally expensive and known
to produce generic captions. To address this concern, some variational
auto-encoder (VAE) and generative adversarial net (GAN) based methods have been
proposed. Though diverse, GAN and VAE are less accurate. In this paper, we
first predict a meaningful summary of the image, then generate the caption
based on that summary. We use part-of-speech as summaries, since our summary
should drive caption generation. We achieve the trifecta: (1) High accuracy for
the diverse captions as evaluated by standard captioning metrics and user
studies; (2) Faster computation of diverse captions compared to beam search and
diverse beam search; and (3) High diversity as evaluated by counting novel
sentences, distinct n-grams and mutual overlap (i.e., mBleu-4) scores.
| null |
http://arxiv.org/abs/1805.12589v3
|
http://arxiv.org/pdf/1805.12589v3.pdf
|
CVPR 2019 6
|
[
"Aditya Deshpande",
"Jyoti Aneja",
"Li-Wei Wang",
"Alexander Schwing",
"D. A. Forsyth"
] |
[
"Caption Generation",
"Diversity",
"Image Captioning"
] | 2018-05-31T00:00:00 |
http://openaccess.thecvf.com/content_CVPR_2019/html/Deshpande_Fast_Diverse_and_Accurate_Image_Captioning_Guided_by_Part-Of-Speech_CVPR_2019_paper.html
|
http://openaccess.thecvf.com/content_CVPR_2019/papers/Deshpande_Fast_Diverse_and_Accurate_Image_Captioning_Guided_by_Part-Of-Speech_CVPR_2019_paper.pdf
|
fast-diverse-and-accurate-image-captioning-1
| null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, USD Coin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're trying to recover a lost USD Coin wallet, knowing where to get help is essential. That’s why the USD Coin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the USD Coin Customer Support Number +1-833-534-1729\r\nUSD Coin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. USD Coin Transaction Not Confirmed\r\nOne of the most common concerns is when a USD Coin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. USD Coin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A USD Coin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost USD Coin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost USD Coin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. USD Coin Deposit Not Received\r\nIf someone has sent you USD Coin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A USD Coin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. USD Coin Transaction Stuck or Pending\r\nSometimes your USD Coin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. USD Coin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word USD Coin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the USD Coin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and USD Coin tech.\r\n\r\n24/7 Availability: USD Coin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About USD Coin Support and Wallet Issues\r\nQ1: Can USD Coin support help me recover stolen BTC?\r\nA: While USD Coin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: USD Coin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not USD Coin’s official number (USD Coin is decentralized), it connects you to trained professionals experienced in resolving all major USD Coin issues.\r\n\r\nFinal Thoughts\r\nUSD Coin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the USD Coin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "USD Coin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "USD Coin Customer Service Number +1-833-534-1729",
"source_title": "Auto-Encoding Variational Bayes",
"source_url": "http://arxiv.org/abs/1312.6114v10"
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/allennlp-a-deep-semantic-natural-language
|
1803.07640
| null | null |
AllenNLP: A Deep Semantic Natural Language Processing Platform
|
This paper describes AllenNLP, a platform for research on deep learning
methods in natural language understanding. AllenNLP is designed to support
researchers who want to build novel language understanding models quickly and
easily. It is built on top of PyTorch, allowing for dynamic computation graphs,
and provides (1) a flexible data API that handles intelligent batching and
padding, (2) high-level abstractions for common operations in working with
text, and (3) a modular and extensible experiment framework that makes doing
good science easy. It also includes reference implementations of high quality
approaches for both core semantic problems (e.g. semantic role labeling (Palmer
et al., 2005)) and language understanding applications (e.g. machine
comprehension (Rajpurkar et al., 2016)). AllenNLP is an ongoing open-source
effort maintained by engineers and researchers at the Allen Institute for
Artificial Intelligence.
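A hedged usage sketch of the platform's predictor interface follows; the model archive URL is a placeholder, and the exact module paths and available pretrained models vary across AllenNLP releases.

```python
# Loading a pretrained semantic-role-labeling model through AllenNLP's
# Predictor API; the archive URL below is a placeholder, not a real model.
from allennlp.predictors.predictor import Predictor

predictor = Predictor.from_path(
    "https://example.org/path/to/srl-model.tar.gz"  # hypothetical archive
)
result = predictor.predict(sentence="AllenNLP makes building NLP models easy.")
print(result["verbs"])  # SRL frames, one entry per detected predicate
```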
|
This paper describes AllenNLP, a platform for research on deep learning methods in natural language understanding.
|
http://arxiv.org/abs/1803.07640v2
|
http://arxiv.org/pdf/1803.07640v2.pdf
|
WS 2018 7
|
[
"Matt Gardner",
"Joel Grus",
"Mark Neumann",
"Oyvind Tafjord",
"Pradeep Dasigi",
"Nelson Liu",
"Matthew Peters",
"Michael Schmitz",
"Luke Zettlemoyer"
] |
[
"Natural Language Understanding",
"Reading Comprehension",
"Semantic Role Labeling"
] | 2018-03-20T00:00:00 |
https://aclanthology.org/W18-2501
|
https://aclanthology.org/W18-2501.pdf
|
allennlp-a-deep-semantic-natural-language-1
| null |
[] |
https://paperswithcode.com/paper/dimensionality-reduction-for-stationary-time
|
1803.02312
| null | null |
Dimensionality Reduction for Stationary Time Series via Stochastic Nonconvex Optimization
|
Stochastic optimization naturally arises in machine learning. Efficient
algorithms with provable guarantees, however, are still largely missing, when
the objective function is nonconvex and the data points are dependent. This
paper studies this fundamental challenge through a streaming PCA problem for
stationary time series data. Specifically, our goal is to estimate the
principle component of time series data with respect to the covariance matrix
of the stationary distribution. Computationally, we propose a variant of Oja's
algorithm combined with downsampling to control the bias of the stochastic
gradient caused by the data dependency. Theoretically, we quantify the
uncertainty of our proposed stochastic algorithm based on diffusion
approximations. This allows us to prove the asymptotic rate of convergence and
further implies near optimal asymptotic sample complexity. Numerical
experiments are provided to support our analysis.
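A minimal sketch of the algorithmic idea follows, assuming the downsampling is "keep every k-th sample" to weaken the dependence-induced gradient bias; the AR(1) data stream, step size, and k are illustrative choices, not the paper's settings.

```python
# Oja's rule with downsampling on a dependent (AR(1)) stream; a sketch of
# the idea, with illustrative hyperparameters rather than the paper's.
import numpy as np

rng = np.random.default_rng(0)
d, T, k, eta = 10, 200_000, 20, 0.01
A = 0.5 * np.eye(d); A[0, 0] = 0.9            # AR(1): x_{t+1} = A x_t + noise
x = np.zeros(d)
w = rng.normal(size=d); w /= np.linalg.norm(w)

for t in range(T):
    x = A @ x + rng.normal(size=d)
    if t % k == 0:                            # downsample the dependent stream
        w += eta * x * (x @ w)                # stochastic Oja update
        w /= np.linalg.norm(w)                # project back to the unit sphere

print(abs(w[0]))  # approaches 1: e_1 is the top eigenvector of the
                  # stationary covariance for this diagonal A
```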
| null |
http://arxiv.org/abs/1803.02312v4
|
http://arxiv.org/pdf/1803.02312v4.pdf
|
NeurIPS 2018 12
|
[
"Minshuo Chen",
"Lin Yang",
"Mengdi Wang",
"Tuo Zhao"
] |
[
"Dimensionality Reduction",
"Stochastic Optimization",
"Time Series",
"Time Series Analysis"
] | 2018-03-06T00:00:00 |
http://papers.nips.cc/paper/7609-dimensionality-reduction-for-stationary-time-series-via-stochastic-nonconvex-optimization
|
http://papers.nips.cc/paper/7609-dimensionality-reduction-for-stationary-time-series-via-stochastic-nonconvex-optimization.pdf
|
dimensionality-reduction-for-stationary-time-1
| null |
[
{
"code_snippet_url": null,
"description": "**Principle Components Analysis (PCA)** is an unsupervised method primary used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decomposition on the covariance matrix. The results of PCA provide a low-dimensional picture of the structure of the data and the leading (uncorrelated) latent factors determining variation in the data.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg)",
"full_name": "Principal Components Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "PCA",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/punny-captions-witty-wordplay-in-image
|
1704.08224
| null | null |
Punny Captions: Witty Wordplay in Image Descriptions
|
Wit is a form of rich interaction that is often grounded in a specific
situation (e.g., a comment in response to an event). In this work, we attempt
to build computational models that can produce witty descriptions for a given
image. Inspired by a cognitive account of humor appreciation, we employ
linguistic wordplay, specifically puns, in image descriptions. We develop two
approaches which involve retrieving witty descriptions for a given image from a
large corpus of sentences, or generating them via an encoder-decoder neural
network architecture. We compare our approach against meaningful baseline
approaches via human studies and show substantial improvements. We find that
when a human is subject to similar constraints as the model regarding word
usage and style, people vote the image descriptions generated by our model to
be slightly wittier than human-written witty descriptions. Unsurprisingly,
humans are almost always wittier than the model when they are free to choose
the vocabulary, style, etc.
|
Wit is a form of rich interaction that is often grounded in a specific situation (e.g., a comment in response to an event).
|
http://arxiv.org/abs/1704.08224v2
|
http://arxiv.org/pdf/1704.08224v2.pdf
|
NAACL 2018 6
|
[
"Arjun Chandrasekaran",
"Devi Parikh",
"Mohit Bansal"
] |
[
"Decoder"
] | 2017-04-26T00:00:00 |
https://aclanthology.org/N18-2121
|
https://aclanthology.org/N18-2121.pdf
|
punny-captions-witty-wordplay-in-image-1
| null |
[] |
https://paperswithcode.com/paper/deep-reinforcement-learning-for-de-novo-drug
|
1711.10907
| null | null |
Deep Reinforcement Learning for De-Novo Drug Design
|
We propose a novel computational strategy for de novo design of molecules
with desired properties termed ReLeaSE (Reinforcement Learning for Structural
Evolution). Based on deep and reinforcement learning approaches, ReLeaSE
integrates two deep neural networks - generative and predictive - that are
trained separately but employed jointly to generate novel targeted chemical
libraries. ReLeaSE employs a simple representation of molecules by their SMILES
strings only. Generative models are trained with a stack-augmented memory network
to produce chemically feasible SMILES strings, and predictive models are
derived to forecast the desired properties of the de novo generated compounds.
In the first phase of the method, generative and predictive models are trained
separately with a supervised learning algorithm. In the second phase, both
models are trained jointly with the reinforcement learning approach to bias the
generation of new chemical structures towards those with the desired physical
and/or biological properties. In the proof-of-concept study, we have employed
the ReLeaSE method to design chemical libraries with a bias toward structural
complexity or biased toward compounds with either maximal, minimal, or specific
range of physical properties such as melting point or hydrophobicity, as well
as to develop novel putative inhibitors of JAK2. The approach proposed herein
can find a general use for generating targeted chemical libraries of novel
compounds optimized for either a single desired property or multiple
properties.
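A hedged toy of the two-phase recipe follows: phase 1 (supervised pretraining of the generator) is assumed done, and phase 2 biases generation with a policy gradient. Both the 3-symbol "generator" and the reward are stand-ins, not SMILES or the paper's predictive model.

```python
# REINFORCE biasing of a toy unigram generator toward high-reward strings;
# a sketch of the second training phase only, with stand-in components.
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(3)                       # "pretrained" unigram generator

def reward(seq):                           # stand-in predictive model:
    return seq.count(2) / len(seq)         # prefers symbol 2

for _ in range(500):                       # phase 2: policy-gradient biasing
    p = np.exp(logits) / np.exp(logits).sum()
    seq = rng.choice(3, size=10, p=p).tolist()
    r = reward(seq)
    for s in seq:                          # REINFORCE: grad log p(s) * reward
        onehot = np.eye(3)[s]
        logits += 0.1 * r * (onehot - p)

print(np.exp(logits) / np.exp(logits).sum())  # mass shifts toward symbol 2
```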
|
In the first phase of the method, generative and predictive models are trained separately with a supervised learning algorithm.
|
http://arxiv.org/abs/1711.10907v2
|
http://arxiv.org/pdf/1711.10907v2.pdf
| null |
[
"Mariya Popova",
"Olexandr Isayev",
"Alexander Tropsha"
] |
[
"Deep Reinforcement Learning",
"Drug Design",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2017-11-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/approximate-knowledge-compilation-by-online
|
1805.12565
| null | null |
Approximate Knowledge Compilation by Online Collapsed Importance Sampling
|
We introduce collapsed compilation, a novel approximate inference algorithm
for discrete probabilistic graphical models. It is a collapsed sampling
algorithm that incrementally selects which variable to sample next based on the
partial sample obtained so far. This online collapsing, together with knowledge
compilation inference on the remaining variables, naturally exploits local
structure and context-specific independence in the distribution. These
properties are naturally exploited in exact inference, but are difficult to
harness for approximate inference. Moreover, by having a partially compiled
circuit available during sampling, collapsed compilation has access to a highly
effective proposal distribution for importance sampling. Our experimental
evaluation shows that collapsed compilation performs well on standard
benchmarks. In particular, when the amount of exact inference is equally
limited, collapsed compilation is competitive with the state of the art, and
outperforms it on several benchmarks.
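A hedged toy of the collapsed-sampling idea on a three-variable chain A -> B -> C follows: one variable is sampled from a proposal and importance-weighted, while the rest are handled exactly. The paper does this with compiled circuits and an online, adaptive choice of which variables to sample; here the split is fixed.

```python
# Collapsed importance sampling on a tiny chain: sample A, marginalize B
# exactly, estimate P(C=1); a sketch of the idea, not the paper's method.
import random

pA = {0: 0.3, 1: 0.7}
pB_given_A = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.4, 1: 0.6}}
pC_given_B = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
qA = {0: 0.5, 1: 0.5}                  # proposal over the sampled variable

def exact_pC1_given_A(a):              # exact inference over collapsed B
    return sum(pB_given_A[a][b] * pC_given_B[b][1] for b in (0, 1))

random.seed(0)
num, den = 0.0, 0.0
for _ in range(100_000):
    a = 1 if random.random() < qA[1] else 0
    w = pA[a] / qA[a]                  # importance weight
    num += w * exact_pC1_given_A(a)
    den += w
print(num / den)                       # ~= exact P(C=1) = 0.436
```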
|
In particular, when the amount of exact inference is equally limited, collapsed compilation is competitive with the state of the art, and outperforms it on several benchmarks.
|
http://arxiv.org/abs/1805.12565v1
|
http://arxiv.org/pdf/1805.12565v1.pdf
|
NeurIPS 2018 12
|
[
"Tal Friedman",
"Guy Van Den Broeck"
] |
[] | 2018-05-31T00:00:00 |
http://papers.nips.cc/paper/8026-approximate-knowledge-compilation-by-online-collapsed-importance-sampling
|
http://papers.nips.cc/paper/8026-approximate-knowledge-compilation-by-online-collapsed-importance-sampling.pdf
|
approximate-knowledge-compilation-by-online-1
| null |
[] |
https://paperswithcode.com/paper/unsupervised-text-style-transfer-using
|
1805.11749
| null | null |
Unsupervised Text Style Transfer using Language Models as Discriminators
|
Binary classifiers are often employed as discriminators in GAN-based
unsupervised style transfer systems to ensure that transferred sentences are
similar to sentences in the target domain. One difficulty with this approach is
that the error signal provided by the discriminator can be unstable and is
sometimes insufficient to train the generator to produce fluent language. In
this paper, we propose a new technique that uses a target domain language model
as the discriminator, providing richer and more stable token-level feedback
during the learning process. We train the generator to minimize the negative
log likelihood (NLL) of generated sentences, evaluated by the language model.
By using a continuous approximation of discrete sampling under the generator,
our model can be trained using back-propagation in an end-to-end fashion.
Moreover, our empirical results show that when using a language model as a
structured discriminator, it is possible to forgo adversarial steps during
training, making the process more stable. We compare our model with previous
work using convolutional neural networks (CNNs) as discriminators and show that
our approach leads to improved performance on three tasks: word substitution
decipherment, sentiment modification, and related language translation.
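The following hedged sketch shows only the feedback signal: the per-token negative log likelihood of a candidate sentence under a target-domain language model (here a toy add-1-smoothed bigram model). The paper backpropagates such a signal through a continuous relaxation of the generator's samples.

```python
# Per-token NLL of a sentence under a toy bigram LM, standing in for the
# target-domain language-model discriminator described above.
import math
from collections import Counter

corpus = "the movie was great . the food was great . the movie was fun .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
V = len(unigrams)

def nll(sentence):
    toks = sentence.split()
    total = 0.0
    for prev, cur in zip(toks, toks[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + V)  # add-1 smoothing
        total -= math.log(p)
    return total / max(len(toks) - 1, 1)       # per-token NLL

print(nll("the movie was great ."))   # low: fluent in the target domain
print(nll("movie the great was ."))   # high: disfluent word order
```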
|
Binary classifiers are often employed as discriminators in GAN-based unsupervised style transfer systems to ensure that transferred sentences are similar to sentences in the target domain.
|
http://arxiv.org/abs/1805.11749v3
|
http://arxiv.org/pdf/1805.11749v3.pdf
|
NeurIPS 2018 12
|
[
"Zichao Yang",
"Zhiting Hu",
"Chris Dyer",
"Eric P. Xing",
"Taylor Berg-Kirkpatrick"
] |
[
"Decipherment",
"Language Modeling",
"Language Modelling",
"Style Transfer",
"Text Style Transfer",
"Translation",
"Unsupervised Text Style Transfer"
] | 2018-05-30T00:00:00 |
http://papers.nips.cc/paper/7959-unsupervised-text-style-transfer-using-language-models-as-discriminators
|
http://papers.nips.cc/paper/7959-unsupervised-text-style-transfer-using-language-models-as-discriminators.pdf
|
unsupervised-text-style-transfer-using-1
| null |
[] |
https://paperswithcode.com/paper/the-complexity-of-splitting-necklaces-and
|
1805.12559
| null | null |
The Complexity of Splitting Necklaces and Bisecting Ham Sandwiches
|
We resolve the computational complexity of two problems known as
NECKLACE-SPLITTING and DISCRETE HAM SANDWICH, showing that they are
PPA-complete. For NECKLACE-SPLITTING, this result is specific to the important
special case in which two thieves share the necklace. We do this via a
PPA-completeness result for an approximate version of the CONSENSUS-HALVING
problem, strengthening our recent result that the problem is PPA-complete for
inverse-exponential precision. At the heart of our construction is a smooth
embedding of the high-dimensional Möbius strip in the CONSENSUS-HALVING
problem. These results settle the status of PPA as a class that captures the
complexity of "natural" problems whose definitions do not incorporate a
circuit.
| null |
http://arxiv.org/abs/1805.12559v2
|
http://arxiv.org/pdf/1805.12559v2.pdf
| null |
[
"Aris Filos-Ratsikas",
"Paul W. Goldberg"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/distributed-stochastic-gradient-tracking
|
1805.11454
| null | null |
Distributed Stochastic Gradient Tracking Methods
|
In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all cost functions. Assuming agents only have access to unbiased estimates of the gradients of their local cost functions, we consider a distributed stochastic gradient tracking method (DSGT) and a gossip-like stochastic gradient tracking method (GSGT). We show that, in expectation, the iterates generated by each agent are attracted to a neighborhood of the optimal solution, where they accumulate exponentially fast (under a constant stepsize choice). Under DSGT, the limiting (expected) error bounds on the distance of the iterates from the optimal solution decrease with the network size $n$, which is performance comparable to that of a centralized stochastic gradient algorithm. Moreover, we show that when the network is well-connected, GSGT incurs lower communication cost than DSGT while maintaining a similar computational cost. A numerical example further demonstrates the effectiveness of the proposed methods.
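A minimal sketch of the gradient-tracking iteration on quadratic local objectives follows, assuming the standard form x <- W(x - a*y), y <- Wy + g_new - g_old with a doubly stochastic mixing matrix over a ring; the problem sizes, step size, and noise level are illustrative.

```python
# DSGT-style iteration on n local quadratics f_i(x) = 0.5*||x - c_i||^2 with
# stochastic gradients; the average iterate approaches the global optimum.
import numpy as np

rng = np.random.default_rng(1)
n, d, alpha, sigma = 5, 3, 0.05, 0.1
C = rng.normal(size=(n, d))                   # global optimum: mean of the c_i
W = np.zeros((n, n))                          # ring graph: 1/3 to self & neighbors
for i in range(n):
    W[i, [i, (i - 1) % n, (i + 1) % n]] = 1 / 3

def grad(X):                                  # unbiased stochastic gradients
    return X - C + sigma * rng.normal(size=X.shape)

X = rng.normal(size=(n, d))
G = grad(X)
Y = G.copy()                                  # tracker initialized at g_i^0
for _ in range(2000):
    X = W @ (X - alpha * Y)
    G_new = grad(X)
    Y = W @ Y + G_new - G                     # track the average gradient
    G = G_new

print(np.linalg.norm(X.mean(axis=0) - C.mean(axis=0)))  # near 0, up to noise
```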
| null |
https://arxiv.org/abs/1805.11454v5
|
https://arxiv.org/pdf/1805.11454v5.pdf
| null |
[
"Shi Pu",
"Angelia Nedić"
] |
[] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/differentially-private-database-release-via
|
1710.01641
| null | null |
Differentially Private Database Release via Kernel Mean Embeddings
|
We lay theoretical foundations for new database release mechanisms that allow
third-parties to construct consistent estimators of population statistics,
while ensuring that the privacy of each individual contributing to the database
is protected. The proposed framework rests on two main ideas. First, releasing
(an estimate of) the kernel mean embedding of the data generating random
variable instead of the database itself still allows third-parties to construct
consistent estimators of a wide class of population statistics. Second, the
algorithm can satisfy the definition of differential privacy by basing the
released kernel mean embedding on entirely synthetic data points, while
controlling accuracy through the metric available in a Reproducing Kernel
Hilbert Space. We describe two instantiations of the proposed framework,
suitable under different scenarios, and prove theoretical results guaranteeing
differential privacy of the resulting algorithms and the consistency of
estimators constructed from their outputs.
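A hedged sketch of the core object follows: an empirical kernel mean embedding and the RKHS distance (MMD) that the framework uses to control accuracy; the differential-privacy mechanism itself is omitted, and the RBF bandwidth and sample sizes are arbitrary.

```python
# Empirical kernel mean embeddings and their squared RKHS distance (MMD);
# the privacy mechanism is out of scope for this sketch.
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))              # private database
synth = rng.normal(size=(50, 2))              # candidate synthetic points

# squared RKHS distance between the two empirical mean embeddings
mmd2 = (rbf(data, data).mean() - 2 * rbf(data, synth).mean()
        + rbf(synth, synth).mean())
print(mmd2)                                   # small => synth mimics the data
```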
|
First, releasing (an estimate of) the kernel mean embedding of the data generating random variable instead of the database itself still allows third-parties to construct consistent estimators of a wide class of population statistics.
|
http://arxiv.org/abs/1710.01641v2
|
http://arxiv.org/pdf/1710.01641v2.pdf
|
ICML 2018 7
|
[
"Matej Balog",
"Ilya Tolstikhin",
"Bernhard Schölkopf"
] |
[] | 2017-10-04T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1964
|
http://proceedings.mlr.press/v80/balog18a/balog18a.pdf
|
differentially-private-database-release-via-1
| null |
[] |
https://paperswithcode.com/paper/lots-about-attacking-deep-features
|
1611.06179
| null | null |
LOTS about Attacking Deep Features
|
Deep neural networks provide state-of-the-art performance on various tasks
and are, therefore, widely used in real-world applications. DNNs are
increasingly utilized in biometrics for extracting deep features, which can be
used in recognition systems for enrolling and recognizing new individuals. It
was revealed that deep neural networks suffer from a fundamental problem,
namely, they can unexpectedly misclassify examples formed by slightly
perturbing correctly recognized inputs. Various approaches have been developed
for generating these so-called adversarial examples, but they aim at attacking
end-to-end networks. For biometrics, it is natural to ask whether systems using
deep features are immune to or, at least, more resilient to attacks than
end-to-end networks. In this paper, we introduce a general technique called the
layerwise origin-target synthesis (LOTS) that can be efficiently used to form
adversarial examples that mimic the deep features of the target. We analyze and
compare the adversarial robustness of the end-to-end VGG Face network with
systems that use Euclidean or cosine distance between gallery templates and
extracted deep features. We demonstrate that iterative LOTS is very effective
and show that systems utilizing deep features are easier to attack than the
end-to-end network.
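A hedged sketch of the iterative idea follows: gradient steps on the input that pull its deep features toward a target's features. The `features` module below is a stand-in for any fixed extractor (the paper uses internal VGG Face layers), and the step size and iteration count are illustrative.

```python
# Iterative feature-matching perturbation in the spirit of LOTS; the
# feature extractor, step size, and budget are stand-in assumptions.
import torch
import torch.nn as nn

features = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # stand-in
features.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)   # origin image
with torch.no_grad():
    t = features(torch.rand(1, 3, 32, 32))         # target's deep features

for _ in range(100):
    loss = 0.5 * (features(x) - t).pow(2).sum()    # match target features
    loss.backward()
    with torch.no_grad():
        x -= 0.01 * x.grad / x.grad.norm()         # normalized gradient step
        x.clamp_(0, 1)                             # stay a valid image
    x.grad = None

print(float((features(x) - t).norm()))             # feature distance shrinks
```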
| null |
http://arxiv.org/abs/1611.06179v5
|
http://arxiv.org/pdf/1611.06179v5.pdf
| null |
[
"Andras Rozsa",
"Manuel Günther",
"Terrance E. Boult"
] |
[
"Adversarial Robustness"
] | 2016-11-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Ethereum has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Ethereum transaction not confirmed, your Ethereum wallet not showing balance, or you're trying to recover a lost Ethereum wallet, knowing where to get help is essential. That’s why the Ethereum customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Ethereum Customer Support Number +1-833-534-1729\r\nEthereum operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Ethereum Transaction Not Confirmed\r\nOne of the most common concerns is when a Ethereum transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Ethereum Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Ethereum wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Ethereum Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Ethereum wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Ethereum Deposit Not Received\r\nIf someone has sent you Ethereum but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Ethereum deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Ethereum Transaction Stuck or Pending\r\nSometimes your Ethereum transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Ethereum Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Ethereum wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Ethereum Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Ethereum tech.\r\n\r\n24/7 Availability: Ethereum doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Ethereum Support and Wallet Issues\r\nQ1: Can Ethereum support help me recover stolen BTC?\r\nA: While Ethereum transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Ethereum transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Ethereum’s official number (Ethereum is decentralized), it connects you to trained professionals experienced in resolving all major Ethereum issues.\r\n\r\nFinal Thoughts\r\nEthereum is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Ethereum transaction not confirmed, your Ethereum wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Ethereum customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Ethereum Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "Ethereum Customer Service Number +1-833-534-1729",
"source_title": "Very Deep Convolutional Networks for Large-Scale Image Recognition",
"source_url": "http://arxiv.org/abs/1409.1556v6"
}
] |
https://paperswithcode.com/paper/rmdl-random-multimodel-deep-learning-for
|
1805.01890
| null | null |
RMDL: Random Multimodel Deep Learning for Classification
|
The continually increasing number of complex datasets each year necessitates
ever-improving machine learning methods for robust and accurate categorization
of these data. This paper introduces Random Multimodel Deep Learning (RMDL): a
new ensemble, deep learning approach for classification. Deep learning models
have achieved state-of-the-art results across many domains. RMDL solves the
problem of finding the best deep learning structure and architecture while
simultaneously improving robustness and accuracy through ensembles of deep
learning architectures. RMDL can accept a variety of data as input, including
text, video, images, and symbolic data. This paper describes RMDL and shows test
results for image and text data including MNIST, CIFAR-10, WOS, Reuters, IMDB,
and 20newsgroup. These test results show that RMDL produces consistently better
performance than standard methods over a broad range of data types and
classification problems.
|
This paper introduces Random Multimodel Deep Learning (RMDL): a new ensemble, deep learning approach for classification.
|
http://arxiv.org/abs/1805.01890v2
|
http://arxiv.org/pdf/1805.01890v2.pdf
| null |
[
"Kamran Kowsari",
"Mojtaba Heidarysafa",
"Donald E. Brown",
"Kiana Jafari Meimandi",
"Laura E. Barnes"
] |
[
"Classification",
"Deep Learning",
"Document Classification",
"Face Recognition",
"General Classification",
"Hierarchical Text Classification of Blurbs (GermEval 2019)",
"Image Classification",
"Multi-Label Text Classification",
"Unsupervised Pre-training"
] | 2018-05-03T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/analysis-of-fast-structured-dictionary
|
1805.12529
| null | null |
Analysis of Fast Structured Dictionary Learning
|
Sparsity-based models and techniques have been exploited in many signal processing and imaging applications. Data-driven methods based on dictionary and sparsifying transform learning enable learning rich image features from data, and can outperform analytical models. In particular, alternating optimization algorithms have been popular for learning such models. In this work, we focus on alternating minimization for a specific structured unitary sparsifying operator learning problem, and provide a convergence analysis. While the algorithm converges to critical points of the problem in general, our analysis establishes, under mild assumptions, the local linear convergence of the algorithm to the underlying sparsifying model of the data. Analysis and numerical simulations show that our assumptions hold for standard probabilistic data models. In practice, the algorithm is robust to initialization.
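A hedged sketch of one such alternating scheme for a plain unitary sparsifying transform follows: hard thresholding for the sparse codes, then an orthogonal-Procrustes update for the operator. The threshold and the lack of additional structure are simplifying assumptions relative to the paper's specific structured operator.

```python
# Alternating minimization for a unitary sparsifying transform W: sparse
# coding by hard thresholding, then W = U V^T from the SVD of Z X^T
# (orthogonal Procrustes); a generic sketch, not the paper's exact model.
import numpy as np

rng = np.random.default_rng(0)
d, n, tau = 8, 500, 0.5
X = rng.normal(size=(d, n))                     # training signals (columns)
W = np.linalg.qr(rng.normal(size=(d, d)))[0]    # init: random unitary

for _ in range(50):
    Z = W @ X
    Z[np.abs(Z) < tau] = 0.0                    # sparse coding: hard threshold
    U, _, Vt = np.linalg.svd(Z @ X.T)           # Procrustes: min ||WX - Z||_F
    W = U @ Vt                                  #   over unitary W
print(np.mean(np.abs(W @ X) < tau))             # fraction of thresholded entries
```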
| null |
https://arxiv.org/abs/1805.12529v3
|
https://arxiv.org/pdf/1805.12529v3.pdf
| null |
[
"Saiprasad Ravishankar",
"Anna Ma",
"Deanna Needell"
] |
[
"Dictionary Learning",
"Operator learning"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fully-automated-organ-segmentation-in-male
|
1805.12526
| null | null |
Fully Automated Organ Segmentation in Male Pelvic CT Images
|
Accurate segmentation of the prostate and surrounding organs at risk is important for prostate cancer radiotherapy treatment planning. We present a fully automated workflow for male pelvic CT image segmentation using deep learning. The architecture consists of a 2D localization network followed by a 3D segmentation network for volumetric segmentation of the prostate, bladder, rectum, and femoral heads. We used a multi-channel 2D U-Net followed by a 3D U-Net with its encoding arm modified with aggregated residual networks, known as ResNeXt. The models were trained and tested on a pelvic CT image dataset comprising 136 patients. Test results show that 3D U-Net based segmentation achieves mean (SD) Dice coefficient values of 90 (2.0)%, 96 (3.0)%, 95 (1.3)%, 95 (1.5)%, and 84 (3.7)% for prostate, left femoral head, right femoral head, bladder, and rectum, respectively, using the proposed fully automated segmentation method.
| null |
https://arxiv.org/abs/1805.12526v2
|
https://arxiv.org/pdf/1805.12526v2.pdf
| null |
[
"Anjali Balagopal",
"Samaneh Kazemifar",
"Dan Nguyen",
"Mu-Han Lin",
"Raquibul Hannan",
"Amir Owrangi",
"Steve Jiang"
] |
[
"Image Segmentation",
"Organ Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2018-05-31T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
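To make the contrast with residual connections concrete, here is a minimal PyTorch illustration of a concatenated skip connection; the tensor shapes are arbitrary placeholders:

```python
import torch

# Concatenated skip connection: features from an earlier layer are stacked
# along the channel dimension instead of summed (contrast with residuals).
earlier = torch.randn(1, 64, 56, 56)   # output of an earlier layer
current = torch.randn(1, 32, 56, 56)   # output of the current layer
fused = torch.cat([earlier, current], dim=1)
print(fused.shape)                     # torch.Size([1, 96, 56, 56])
```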
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
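A small PyTorch sketch of the operation described above; the values are chosen only so the pooled maxima are easy to verify by hand:

```python
import torch
import torch.nn.functional as F

# 2x2 max pooling keeps the largest value in each patch, halving resolution.
x = torch.tensor([[1., 2., 3., 4.],
                  [5., 6., 7., 8.],
                  [9., 1., 2., 3.],
                  [4., 5., 6., 7.]]).reshape(1, 1, 4, 4)
print(F.max_pool2d(x, kernel_size=2))  # [[6., 8.], [9., 7.]] (plus batch/channel dims)
```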
{
"code_snippet_url": "https://github.com/milesial/Pytorch-UNet/blob/67bf11b4db4c5f2891bd7e8e7f58bcde8ee2d2db/unet/unet_model.py#L8",
"description": "**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit ([ReLU](https://paperswithcode.com/method/relu)) and a 2x2 [max pooling](https://paperswithcode.com/method/max-pooling) operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 [convolution](https://paperswithcode.com/method/convolution) (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.\r\n\r\n[Original MATLAB Code](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-release-2015-10-02.tar.gz)",
"full_name": "U-Net",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "U-Net",
"source_title": "U-Net: Convolutional Networks for Biomedical Image Segmentation",
"source_url": "http://arxiv.org/abs/1505.04597v1"
},
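The full 23-layer U-Net is too long to reproduce here, but a one-level toy version in PyTorch conveys the contract-expand-concatenate pattern; channel counts are illustrative, and unlike the original this sketch uses padded convolutions so no cropping is needed:

```python
import torch
import torch.nn as nn

# A one-level toy U-Net: contract (conv + pool), expand (up-convolution),
# concatenate the skip connection, then map to class scores.
class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.out = nn.Conv2d(32, 2, kernel_size=1)   # 2 output classes

    def forward(self, x):
        skip = self.down(x)              # high-resolution features
        y = self.mid(self.pool(skip))    # low-resolution features
        y = self.up(y)                   # upsample back to input resolution
        y = torch.cat([skip, y], dim=1)  # concatenated skip connection
        return self.out(y)

print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])
```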
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
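A minimal PyTorch example of the dimensionality-reduction use mentioned above; the 256-to-64 channel counts are arbitrary:

```python
import torch
import torch.nn as nn

# A 1x1 convolution acts per pixel, mixing channels only: here it reduces
# 256 channels to 64 without touching the spatial dimensions.
reduce = nn.Conv2d(256, 64, kernel_size=1)
x = torch.randn(1, 256, 28, 28)
print(reduce(x).shape)   # torch.Size([1, 64, 28, 28])
```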
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
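A one-line demonstration of $f(x) = \max(0, x)$ in PyTorch:

```python
import torch
import torch.nn.functional as F

# ReLU is applied element-wise: negative values are clipped to zero.
x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
print(F.relu(x))   # tensor([0.0000, 0.0000, 0.0000, 0.5000, 2.0000])
```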
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
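The normalization step above can be checked by hand against PyTorch's `BatchNorm2d` in training mode (with $\gamma = 1$, $\beta = 0$, the defaults at initialization); this is a sketch for verification, not the reference implementation:

```python
import torch
import torch.nn as nn

# Batch normalization by hand, matching the formulas in the entry above:
# per-channel mean/variance over the batch and spatial dims, then normalize.
x = torch.randn(8, 3, 4, 4)
mu = x.mean(dim=(0, 2, 3), keepdim=True)
var = x.var(dim=(0, 2, 3), keepdim=True, unbiased=False)
x_hat = (x - mu) / torch.sqrt(var + 1e-5)        # gamma = 1, beta = 0 here

bn = nn.BatchNorm2d(3)                           # training mode by default
print(torch.allclose(bn(x), x_hat, atol=1e-5))   # True
```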
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
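A short PyTorch sketch of the scheme above; the layer size is an arbitrary example, and `kaiming_normal_` with `nonlinearity='relu'` reproduces the $\mathcal{N}(0, 2/n_l)$ draw with fan-in $n_l$:

```python
import torch
import torch.nn as nn

# Kaiming/He initialization: weights drawn from N(0, 2/n_l) where n_l is
# the fan-in, suited to ReLU networks; biases start at zero.
layer = nn.Linear(512, 256)
nn.init.kaiming_normal_(layer.weight, nonlinearity='relu')
nn.init.zeros_(layer.bias)
print(layer.weight.std().item())   # roughly sqrt(2/512) ~= 0.0625
```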
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
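A minimal PyTorch residual block illustrating $\mathcal{F}(x) + x$; the two-convolution body and channel count are illustrative choices, not a specific published block:

```python
import torch
import torch.nn as nn

# Residual connection: the block learns F(x) and outputs F(x) + x, so
# pushing F toward zero recovers the identity mapping.
class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(self.f(x) + x)   # element-wise sum, then nonlinearity

print(ResidualBlock(64)(torch.randn(1, 64, 8, 8)).shape)  # torch.Size([1, 64, 8, 8])
```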
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
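A tiny worked example in PyTorch: a single 2x2 kernel slides over a 4x4 input, producing a 3x3 output map (the kernel values are arbitrary):

```python
import torch
import torch.nn.functional as F

# The kernel is slid over the input; at each position the overlapping
# values are multiplied element-wise and summed into one output value.
image = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)
kernel = torch.tensor([[[[0., 1.], [2., 3.]]]])   # one 2x2 kernel
print(F.conv2d(image, kernel))                    # a 3x3 output map
```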
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
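A one-patch PyTorch example where the average is easy to verify by hand:

```python
import torch
import torch.nn.functional as F

# Average pooling replaces each patch by its mean, a smoother
# downsampling than max pooling.
x = torch.tensor([[1., 3.], [5., 7.]]).reshape(1, 1, 2, 2)
print(F.avg_pool2d(x, kernel_size=2))   # tensor([[[[4.]]]]) -- (1+3+5+7)/4
```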
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **ResNeXt Block** is a type of [residual block](https://paperswithcode.com/method/residual-block) used as part of the [ResNeXt](https://paperswithcode.com/method/resnext) CNN architecture. It uses a \"split-transform-merge\" strategy (branched paths within a single module) similar to an [Inception module](https://paperswithcode.com/method/inception-module), i.e. it aggregates a set of transformations. Compared to a Residual Block, it exposes a new dimension, *cardinality* (size of set of transformations) $C$, as an essential factor in addition to depth and width. \r\n\r\nFormally, a set of aggregated transformations can be represented as: $\\mathcal{F}(x)=\\sum_{i=1}^{C}\\mathcal{T}_i(x)$, where $\\mathcal{T}_i(x)$ can be an arbitrary function. Analogous to a simple neuron, $\\mathcal{T}_i$ should project $x$ into an (optionally low-dimensional) embedding and then transform it.",
"full_name": "ResNeXt Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "ResNeXt Block",
"source_title": "Aggregated Residual Transformations for Deep Neural Networks",
"source_url": "http://arxiv.org/abs/1611.05431v2"
},
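A sketch of a ResNeXt-style block in PyTorch, using the grouped-convolution formulation of the aggregated transformations; the channel widths and cardinality $C = 32$ follow common choices but are assumptions here:

```python
import torch
import torch.nn as nn

# "Split-transform-merge" over C = 32 branches, implemented compactly as a
# 3x3 grouped convolution inside a bottleneck, with a residual connection.
class ResNeXtBlock(nn.Module):
    def __init__(self, ch=256, width=128, cardinality=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, width, 1), nn.BatchNorm2d(width), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1, groups=cardinality),
            nn.BatchNorm2d(width), nn.ReLU(),
            nn.Conv2d(width, ch, 1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)

print(ResNeXtBlock()(torch.randn(1, 256, 14, 14)).shape)  # torch.Size([1, 256, 14, 14])
```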
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
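A PyTorch illustration of collapsing each feature map to one value; the 512-channel, 7x7 shape is just a typical example:

```python
import torch
import torch.nn as nn

# Global average pooling turns (N, C, H, W) into (N, C, 1, 1): one value
# per feature map, which can feed the softmax directly.
gap = nn.AdaptiveAvgPool2d(1)
x = torch.randn(1, 512, 7, 7)
print(gap(x).flatten(1).shape)   # torch.Size([1, 512])
```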
{
"code_snippet_url": "https://github.com/prlz77/ResNeXt.pytorch/blob/39fb8d03847f26ec02fb9b880ecaaa88db7a7d16/models/model.py#L42",
"description": "A **Grouped Convolution** uses a group of convolutions - multiple kernels per layer - resulting in multiple channel outputs per layer. This leads to wider networks helping a network learn a varied set of low level and high level features. The original motivation of using Grouped Convolutions in [AlexNet](https://paperswithcode.com/method/alexnet) was to distribute the model over multiple GPUs as an engineering compromise. But later, with models such as [ResNeXt](https://paperswithcode.com/method/resnext), it was shown this module could be used to improve classification accuracy. Specifically by exposing a new dimension through grouped convolutions, *cardinality* (the size of set of transformations), we can increase accuracy by increasing it.",
"full_name": "Grouped Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Grouped Convolution",
"source_title": "ImageNet Classification with Deep Convolutional Neural Networks",
"source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
},
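A quick PyTorch check of the parameter savings; the 128-channel, 4-group configuration is an arbitrary example:

```python
import torch.nn as nn

# Grouped convolution splits channels into independent groups; with g groups
# the weight count drops by a factor of g relative to a dense convolution.
dense = nn.Conv2d(128, 128, kernel_size=3, bias=False)
grouped = nn.Conv2d(128, 128, kernel_size=3, groups=4, bias=False)
print(dense.weight.numel(), grouped.weight.numel())   # 147456 36864
```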
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/6db1569c89094cf23f3bc41f79275c45e9fcb3f3/torchvision/models/resnet.py#L124",
"description": "A **ResNeXt** repeats a building block that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) $C$, as an essential factor in addition to the dimensions of depth and width. \r\n\r\nFormally, a set of aggregated transformations can be represented as: $\\mathcal{F}(x)=\\sum_{i=1}^{C}\\mathcal{T}_i(x)$, where $\\mathcal{T}_i(x)$ can be an arbitrary function. Analogous to a simple neuron, $\\mathcal{T}_i$ should project $x$ into an (optionally low-dimensional) embedding and then transform it.",
"full_name": "ResNeXt",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "ResNeXt",
"source_title": "Aggregated Residual Transformations for Deep Neural Networks",
"source_url": "http://arxiv.org/abs/1611.05431v2"
}
] |
https://paperswithcode.com/paper/whole-brain-susceptibility-mapping-using
|
1805.12521
| null | null |
Whole Brain Susceptibility Mapping Using Harmonic Incompatibility Removal
|
Quantitative susceptibility mapping (QSM) aims to visualize the three
dimensional susceptibility distribution by solving the field-to-source inverse
problem using the phase data in magnetic resonance signal. However, the inverse
problem is ill-posed since the Fourier transform of the integral kernel has
zeroes in the frequency domain. Although numerous regularization-based models
have been proposed to overcome this problem, the incompatibility in the field
data has not received enough attention, which leads to deterioration of the
recovery. In this paper, we show that the data acquisition process of QSM
inherently generates a harmonic incompatibility in the measured local field.
Based on this discovery, we propose a novel regularization-based susceptibility
reconstruction model with an additional sparsity-based regularization term on
the harmonic incompatibility. Numerical experiments show that the proposed
method achieves better performance than the existing approaches.
| null |
http://arxiv.org/abs/1805.12521v2
|
http://arxiv.org/pdf/1805.12521v2.pdf
| null |
[
"Chenglong Bao",
"Jae Kyu Choi",
"Bin Dong"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/scaling-provable-adversarial-defenses
|
1805.12514
| null | null |
Scaling provable adversarial defenses
|
Recent work has developed methods for learning deep network classifiers that
are provably robust to norm-bounded adversarial perturbation; however, these
methods are currently only possible for relatively small feedforward networks.
In this paper, in an effort to scale these approaches to substantially larger
models, we extend previous work in three main directions. First, we present a
technique for extending these training procedures to much more general
networks, with skip connections (such as ResNets) and general nonlinearities;
the approach is fully modular, and can be implemented automatically (analogous
to automatic differentiation). Second, in the specific case of $\ell_\infty$
adversarial perturbations and networks with ReLU nonlinearities, we adopt a
nonlinear random projection for training, which scales linearly in the number
of hidden units (previous approaches scaled quadratically). Third, we show how
to further improve robust error through cascade models. On both MNIST and CIFAR
data sets, we train classifiers that improve substantially on the state of the
art in provable robust adversarial error bounds: from 5.8% to 3.1% on MNIST
(with $\ell_\infty$ perturbations of $\epsilon=0.1$), and from 80% to 36.4% on
CIFAR (with $\ell_\infty$ perturbations of $\epsilon=2/255$). Code for all
experiments in the paper is available at
https://github.com/locuslab/convex_adversarial/.
|
Recent work has developed methods for learning deep network classifiers that are provably robust to norm-bounded adversarial perturbation; however, these methods are currently only possible for relatively small feedforward networks.
|
http://arxiv.org/abs/1805.12514v2
|
http://arxiv.org/pdf/1805.12514v2.pdf
|
NeurIPS 2018 12
|
[
"Eric Wong",
"Frank R. Schmidt",
"Jan Hendrik Metzen",
"J. Zico Kolter"
] |
[] | 2018-05-31T00:00:00 |
http://papers.nips.cc/paper/8060-scaling-provable-adversarial-defenses
|
http://papers.nips.cc/paper/8060-scaling-provable-adversarial-defenses.pdf
|
scaling-provable-adversarial-defenses-1
| null |
[
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/cyberattack-detection-using-deep-generative
|
1805.12511
| null | null |
Cyberattack Detection using Deep Generative Models with Variational Inference
|
Recent years have witnessed a rise in the frequency and intensity of
cyberattacks targeted at critical infrastructure systems. This study designs a
versatile, data-driven cyberattack detection platform for infrastructure
systems cybersecurity, with a special demonstration in the water sector. A deep
generative model with variational inference autonomously learns normal system
behavior and detects attacks as they occur. The model can process the natural
data in its raw form and automatically discover and learn its representations,
hence augmenting system knowledge discovery and reducing the need for laborious
human engineering and domain expertise. The proposed model is applied to a
simulated cyberattack detection problem involving a drinking water distribution
system subject to programmable logic controller hacks, malicious actuator
activation, and deception attacks. The model is only provided with observations
of the system, such as pump pressure and tank water level reads, and is blind
to the internal structures and workings of the water distribution system. The
simulated attacks are manifested in the model's generated reproduction
probability plot, indicating its ability to discern the attacks. There is,
however, a need for improvement in reducing false alarms, especially by
optimizing detection thresholds. Altogether, the results indicate the ability
of the model to distinguish attacks and their repercussions from normal system
operation in water distribution systems, and the promise it holds for
cyberattack detection in other domains.
| null |
http://arxiv.org/abs/1805.12511v1
|
http://arxiv.org/pdf/1805.12511v1.pdf
| null |
[
"Sarin E. Chandy",
"Amin Rasekh",
"Zachary A. Barker",
"M. Ehsan Shafiee"
] |
[
"Variational Inference"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/accurate-pedestrian-localization-in-overhead
|
1805.12510
| null | null |
Accurate pedestrian localization in overhead depth images via Height-Augmented HOG
|
We tackle the challenge of reliably and automatically localizing pedestrians
in real-life conditions through overhead depth imaging at unprecedented
high-density conditions. Leveraging upon a combination of Histogram of Oriented
Gradients-like feature descriptors, neural networks, data augmentation and
custom data annotation strategies, this work contributes a robust and scalable
machine learning-based localization algorithm, which delivers near-human
localization performance in real-time, even with local pedestrian density of
about 3 ped/m2, a case in which most state-of-the-art algorithms degrade
significantly in performance.
| null |
http://arxiv.org/abs/1805.12510v1
|
http://arxiv.org/pdf/1805.12510v1.pdf
| null |
[
"Werner Kroneman",
"Alessandro Corbetta",
"Federico Toschi"
] |
[
"BIG-bench Machine Learning",
"Data Augmentation"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/asymptotic-performance-of-regularized-multi
|
1805.12507
| null | null |
Efficacy of regularized multi-task learning based on SVM models
|
This paper investigates the efficacy of a regularized multi-task learning (MTL) framework based on SVM (M-SVM) to answer whether MTL always provides reliable results and how MTL outperforms independent learning. We first find that M-SVM is Bayes risk consistent in the limit of large sample size. This implies that, despite task dissimilarities, M-SVM always produces a reliable decision rule for each task in terms of misclassification error when the data size is large enough. Furthermore, we find that the task interaction vanishes as the data size goes to infinity, and the convergence rates of M-SVM and its single-task counterpart have the same upper bound. The former suggests that M-SVM cannot improve the limit classifier's performance; based on the latter, we conjecture that the optimal convergence rate is not improved when the task number is fixed. As a novel insight into MTL, our theoretical and experimental results are in close agreement: the benefit of MTL methods lies in the improvement of the pre-convergence-rate (PCR) factor, defined in Section III, rather than the convergence rate. Moreover, this improvement of the PCR factor is more significant when the data size is small.
| null |
https://arxiv.org/abs/1805.12507v2
|
https://arxiv.org/pdf/1805.12507v2.pdf
| null |
[
"Shaohan Chen",
"Zhou Fang",
"Sijie Lu",
"Chuanhou Gao"
] |
[
"Multi-Task Learning"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/robust-gyroscope-aided-camera-self
|
1805.12506
| null | null |
Robust Gyroscope-Aided Camera Self-Calibration
|
Camera calibration for estimating the intrinsic parameters and lens
distortion is a prerequisite for various monocular vision applications
including feature tracking and video stabilization. This application paper
proposes a model for estimating the parameters on the fly by fusing gyroscope
and camera data, both readily available in modern-day smartphones. The model is
based on joint estimation of visual feature positions, camera parameters, and
the camera pose, the movement of which is assumed to follow the movement
predicted by the gyroscope. Our model assumes the camera movement to be free,
but continuous and differentiable, and individual features are assumed to stay
stationary. The estimation is performed online using an extended Kalman filter,
and it is shown to outperform existing methods in robustness and insensitivity
to initialization. We demonstrate the method using simulated data and empirical
data from an iPad.
|
This application paper proposes a model for estimating the parameters on the fly by fusing gyroscope and camera data, both readily available in modern-day smartphones.
|
http://arxiv.org/abs/1805.12506v1
|
http://arxiv.org/pdf/1805.12506v1.pdf
| null |
[
"Santiago Cortés Reina",
"Arno Solin",
"Juho Kannala"
] |
[
"Camera Calibration",
"Video Stabilization"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |