paper_url (string, 35-81 chars) | arxiv_id (string, 6-35 chars, ⌀) | nips_id (float64) | openreview_id (string, 9-93 chars, ⌀) | title (string, 1-1.02k chars, ⌀) | abstract (string, 0-56.5k chars, ⌀) | short_abstract (string, 0-1.95k chars, ⌀) | url_abs (string, 16-996 chars) | url_pdf (string, 16-996 chars, ⌀) | proceeding (string, 7-1.03k chars, ⌀) | authors (list, 0-3.31k items) | tasks (list, 0-147 items) | date (timestamp[ns], 1951-09-01 to 2222-12-22, ⌀) | conference_url_abs (string, 16-199 chars, ⌀) | conference_url_pdf (string, 21-200 chars, ⌀) | conference (string, 2-47 chars, ⌀) | reproduces_paper (string, 22 classes) | methods (list, 0-7.5k items)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://paperswithcode.com/paper/stationary-geometric-graphical-model
|
1806.03571
| null | null |
Stationary Geometric Graphical Model Selection
|
We consider the problem of model selection in Gaussian Markov fields in the
sample deficient scenario. In many practically important cases, the underlying
networks are embedded into Euclidean spaces. Using the natural geometric
structure, we introduce the notion of spatially stationary distributions over
geometric graphs. This directly generalizes the notion of stationary time
series to the multidimensional setting lacking a time axis. We show that the idea
of spatial stationarity leads to a dramatic decrease in the sample complexity
of model selection compared to abstract graphs with the same level of
sparsity. For geometric graphs on randomly spread vertices and edges of bounded
length, we develop tight information-theoretic bounds on sample complexity and
show that a finite number of independent samples is sufficient for a consistent
recovery. Finally, we develop an efficient technique capable of reliably and
consistently reconstructing graphs with a bounded number of measurements.
| null |
http://arxiv.org/abs/1806.03571v2
|
http://arxiv.org/pdf/1806.03571v2.pdf
| null |
[
"Ilya Soloveychik",
"Vahid Tarokh"
] |
[
"model",
"Model Selection",
"Time Series",
"Time Series Analysis"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/neural-factor-graph-models-for-cross-lingual
|
1805.04570
| null | null |
Neural Factor Graph Models for Cross-lingual Morphological Tagging
|
Morphological analysis involves predicting the syntactic traits of a word
(e.g. {POS: Noun, Case: Acc, Gender: Fem}). Previous work in morphological
tagging improves performance for low-resource languages (LRLs) through
cross-lingual training with a high-resource language (HRL) from the same
family, but is limited by the strict, often false, assumption that tag sets
exactly overlap between the HRL and LRL. In this paper we propose a method for
cross-lingual morphological tagging that aims to improve information sharing
between languages by relaxing this assumption. The proposed model uses
factorial conditional random fields with neural network potentials, making it
possible to (1) utilize the expressive power of neural network representations
to smooth over superficial differences in the surface forms, (2) model pairwise
and transitive relationships between tags, and (3) accurately generate tag sets
that are unseen or rare in the training data. Experiments on four languages
from the Universal Dependencies Treebank demonstrate superior tagging
accuracies over existing cross-lingual approaches.
|
Morphological analysis involves predicting the syntactic traits of a word (e.g. {POS: Noun, Case: Acc, Gender: Fem}).
|
http://arxiv.org/abs/1805.04570v3
|
http://arxiv.org/pdf/1805.04570v3.pdf
|
ACL 2018 7
|
[
"Chaitanya Malaviya",
"Matthew R. Gormley",
"Graham Neubig"
] |
[
"Morphological Analysis",
"Morphological Tagging",
"POS",
"TAG"
] | 2018-05-11T00:00:00 |
https://aclanthology.org/P18-1247
|
https://aclanthology.org/P18-1247.pdf
|
neural-factor-graph-models-for-cross-lingual-1
| null |
[] |
https://paperswithcode.com/paper/explainable-recommendation-via-multi-task
|
1806.03568
| null | null |
Explainable Recommendation via Multi-Task Learning in Opinionated Text Data
|
Explaining automatically generated recommendations allows users to make more
informed and accurate decisions about which results to utilize, and therefore
improves their satisfaction. In this work, we develop a multi-task learning
solution for explainable recommendation. Two companion learning tasks of user
preference modeling for recommendation and opinionated content
modeling for explanation are integrated via a joint tensor factorization. As a
result, the algorithm predicts not only a user's preference over a list of
items, i.e., recommendation, but also how the user would appreciate a
particular item at the feature level, i.e., opinionated textual explanation.
Extensive experiments on two large collections of Amazon and Yelp reviews
confirmed the effectiveness of our solution in both recommendation and
explanation tasks, compared with several existing recommendation algorithms.
Our extensive user study clearly demonstrates the practical value of the
explainable recommendations generated by our algorithm.
|
Explaining automatically generated recommendations allows users to make more informed and accurate decisions about which results to utilize, and therefore improves their satisfaction.
|
http://arxiv.org/abs/1806.03568v1
|
http://arxiv.org/pdf/1806.03568v1.pdf
| null |
[
"Nan Wang",
"Hongning Wang",
"Yiling Jia",
"Yue Yin"
] |
[
"Explainable Recommendation",
"Multi-Task Learning"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/accurate-building-detection-in-vhr-remote
|
1806.00908
| null | null |
Accurate Building Detection in VHR Remote Sensing Images using Geometric Saliency
|
This paper aims to address the problem of detecting buildings from remote
sensing images with very high resolution (VHR). Inspired by the observation
that buildings are always more distinguishable in their geometry than in texture or
spectral properties, we propose a new geometric building index (GBI) for accurate building
detection, which relies on the geometric saliency of building structures. The
geometric saliency of buildings is derived from mid-level geometric
representations based on meaningful junctions that can locally describe
anisotropic geometrical structures of images. The resulting GBI is measured by
integrating the derived geometric saliency of buildings. Experiments on three
public datasets demonstrate that the proposed GBI achieves very promising
performance, and meanwhile shows impressive generalization capability.
| null |
http://arxiv.org/abs/1806.00908v2
|
http://arxiv.org/pdf/1806.00908v2.pdf
| null |
[
"Jin Huang",
"Gui-Song Xia",
"Fan Hu",
"Liangpei Zhang"
] |
[] | 2018-06-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/rtseg-real-time-semantic-segmentation
|
1803.02758
| null | null |
RTSeg: Real-time Semantic Segmentation Comparative Study
|
Semantic segmentation benefits robotics-related applications, especially autonomous driving. Most of the research on semantic segmentation focuses only on increasing the accuracy of segmentation models, with little attention to computationally efficient solutions. The few works conducted in this direction do not provide principled methods to evaluate the different design choices for segmentation. In this paper, we address this gap by presenting a real-time semantic segmentation benchmarking framework with a decoupled design for feature extraction and decoding methods. The framework comprises different network architectures for feature extraction such as VGG16, Resnet18, MobileNet, and ShuffleNet. It also comprises multiple meta-architectures for segmentation that define the decoding methodology. These include SkipNet, UNet, and Dilation Frontend. Experimental results are presented on the Cityscapes dataset for urban scenes. The modular design allows novel architectures to emerge that lead to a 143x GFLOPs reduction in comparison to SegNet. This benchmarking framework is publicly available at "https://github.com/MSiam/TFSegmentation".
|
In this paper, we address this gap by presenting a real-time semantic segmentation benchmarking framework with a decoupled design for feature extraction and decoding methods.
|
https://arxiv.org/abs/1803.02758v5
|
https://arxiv.org/pdf/1803.02758v5.pdf
| null |
[
"Mennatullah Siam",
"Mostafa Gamal",
"Moemen Abdel-Razek",
"Senthil Yogamani",
"Martin Jagersand"
] |
[
"Autonomous Driving",
"Benchmarking",
"Real-Time Semantic Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2018-03-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
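As a quick illustration of the dimensionality-reduction use described in the entry above, here is a minimal PyTorch sketch; the shapes and variable names are illustrative and not taken from this dataset:

```python
import torch
import torch.nn as nn

# a 1x1 convolution maps 256 input channels to 64 output channels
# per pixel, leaving the spatial dimensions untouched
reduce = nn.Conv2d(in_channels=256, out_channels=64, kernel_size=1)
x = torch.randn(1, 256, 32, 32)
print(reduce(x).shape)  # torch.Size([1, 64, 32, 32])
```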
{
"code_snippet_url": "https://github.com/kwotsin/TensorFlow-Xception/blob/c42ad8cab40733f9150711be3537243278612b22/xception.py#L67",
"description": "While [standard convolution](https://paperswithcode.com/method/convolution) performs the channelwise and spatial-wise computation in one step, **Depthwise Separable Convolution** splits the computation into two steps: [depthwise convolution](https://paperswithcode.com/method/depthwise-convolution) applies a single convolutional filter per each input channel and [pointwise convolution](https://paperswithcode.com/method/pointwise-convolution) is used to create a linear combination of the output of the depthwise convolution. The comparison of standard convolution and depthwise separable convolution is shown to the right.\r\n\r\nCredit: [Depthwise Convolution Is All You Need for Learning Multiple Visual Domains](https://paperswithcode.com/paper/depthwise-convolution-is-all-you-need-for)",
"full_name": "Depthwise Separable Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Depthwise Separable Convolution",
"source_title": "Xception: Deep Learning With Depthwise Separable Convolutions",
"source_url": "http://openaccess.thecvf.com/content_cvpr_2017/html/Chollet_Xception_Deep_Learning_CVPR_2017_paper.html"
},
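The two-step split described above (depthwise, then pointwise) fits in a few lines of PyTorch; this is a minimal illustrative module, not the snippet linked in code_snippet_url:

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per input channel, via groups=in_ch)
    followed by a 1x1 pointwise conv that linearly mixes the channels."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```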
{
"code_snippet_url": "https://github.com/osmr/imgclsmob/blob/956b4ebab0bbf98de4e1548287df5197a3c7154e/pytorch/pytorchcv/models/mobilenet.py#L14",
"description": "**MobileNet** is a type of convolutional neural network designed for mobile and embedded vision applications. They are based on a streamlined architecture that uses depthwise separable convolutions to build lightweight deep neural networks that can have low latency for mobile and embedded devices.",
"full_name": "MobileNetV1",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "MobileNetV1",
"source_title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications",
"source_url": "http://arxiv.org/abs/1704.04861v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.",
"full_name": "Bottleneck Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Bottleneck Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
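The recast mapping $\mathcal{F}(x) + x$ above amounts to a single addition in the forward pass; a minimal PyTorch sketch of a two-convolution residual block (illustrative, assuming equal input and output channel counts):

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Stacked layers learn F(x) = H(x) - x; the block outputs F(x) + x."""
    def __init__(self, channels: int):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.f(x) + x)  # identity skip added before the final ReLU
```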
{
"code_snippet_url": "",
"description": "**Depthwise Convolution** is a type of convolution where we apply a single convolutional filter for each input channel. In the regular 2D [convolution](https://paperswithcode.com/method/convolution) performed over multiple input channels, the filter is as deep as the input and lets us freely mix channels to generate each element in the output. In contrast, depthwise convolutions keep each channel separate. To summarize the steps, we:\r\n\r\n1. Split the input and filter into channels.\r\n2. We convolve each input with the respective filter.\r\n3. We stack the convolved outputs together.\r\n\r\nImage Credit: [Chi-Feng Wang](https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728)",
"full_name": "Depthwise Convolution",
"introduced_year": 2016,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Depthwise Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Pointwise Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) that uses a 1x1 kernel: a kernel that iterates through every single point. This kernel has a depth of however many channels the input image has. It can be used in conjunction with [depthwise convolutions](https://paperswithcode.com/method/depthwise-convolution) to produce an efficient class of convolutions known as [depthwise-separable convolutions](https://paperswithcode.com/method/depthwise-separable-convolution).\r\n\r\nImage Credit: [Chi-Feng Wang](https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728)",
"full_name": "Pointwise Convolution",
"introduced_year": 2016,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Pointwise Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/osmr/imgclsmob/blob/c03fa67de3c9e454e9b6d35fe9cbb6b15c28fda7/pytorch/pytorchcv/models/common.py#L862",
"description": "**Channel Shuffle** is an operation to help information flow across feature channels in convolutional neural networks. It was used as part of the [ShuffleNet](https://paperswithcode.com/method/shufflenet) architecture. \r\n\r\nIf we allow a group [convolution](https://paperswithcode.com/method/convolution) to obtain input data from different groups, the input and output channels will be fully related. Specifically, for the feature map generated from the previous group layer, we can first divide the channels in each group into several subgroups, then feed each group in the next layer with different subgroups. \r\n\r\nThe above can be efficiently and elegantly implemented by a channel shuffle operation: suppose a convolutional layer with $g$ groups whose output has $g \\times n$ channels; we first reshape the output channel dimension into $\\left(g, n\\right)$, transposing and then flattening it back as the input of next layer. Channel shuffle is also differentiable, which means it can be embedded into network structures for end-to-end training.",
"full_name": "Channel Shuffle",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "The following is a list of miscellaneous components used in neural networks.",
"name": "Miscellaneous Components",
"parent": null
},
"name": "Channel Shuffle",
"source_title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices",
"source_url": "http://arxiv.org/abs/1707.01083v2"
},
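The reshape-transpose-flatten recipe in the description maps directly onto three tensor operations; a minimal PyTorch sketch (the function name and shapes are illustrative):

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Shuffle channels of x with shape (batch, channels, H, W);
    channels must be divisible by groups."""
    b, c, h, w = x.size()
    x = x.view(b, groups, c // groups, h, w)  # reshape channels into (g, n)
    x = x.transpose(1, 2).contiguous()        # transpose group and subgroup axes
    return x.view(b, c, h, w)                 # flatten back to (b, c, h, w)

# example: shuffle 8 channels arranged in 2 groups
out = channel_shuffle(torch.randn(1, 8, 4, 4), groups=2)
```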
{
"code_snippet_url": "https://github.com/osmr/imgclsmob/blob/c03fa67de3c9e454e9b6d35fe9cbb6b15c28fda7/pytorch/pytorchcv/models/shufflenet.py#L18",
"description": "A **ShuffleNet Block** is an image model block that utilises a [channel shuffle](https://paperswithcode.com/method/channel-shuffle) operation, along with depthwise convolutions, for an efficient architectural design. It was proposed as part of the [ShuffleNet](https://paperswithcode.com/method/shufflenet) architecture. The starting point is the [Residual Block](https://paperswithcode.com/method/residual-block) unit from [ResNets](https://paperswithcode.com/method/resnet), which is then modified with a pointwise group [convolution](https://paperswithcode.com/method/convolution) and a channel shuffle operation.",
"full_name": "ShuffleNet Block",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.",
"name": "Image Model Blocks",
"parent": null
},
"name": "ShuffleNet Block",
"source_title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices",
"source_url": "http://arxiv.org/abs/1707.01083v2"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
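Since global average pooling is just a mean over the spatial axes, the whole operation is a single reduction; a minimal PyTorch sketch with illustrative tensor shapes:

```python
import torch

feats = torch.randn(8, 512, 7, 7)  # (batch, channels, H, W) feature maps
pooled = feats.mean(dim=(2, 3))    # (8, 512): one value per feature map, fed to softmax
```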
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
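The scheme $w_l \sim \mathcal{N}(0, 2/n_l)$ above can be written out directly; a minimal illustrative helper for a linear weight, taking fan-in as the input dimension (assumption: 2-D weight of shape (out_features, in_features)):

```python
import math
import torch

def kaiming_init_(weight: torch.Tensor) -> None:
    """In-place init: zero-mean Gaussian with std sqrt(2 / fan_in)."""
    fan_in = weight.size(1)  # inputs feeding each output unit of a linear layer
    std = math.sqrt(2.0 / fan_in)
    with torch.no_grad():
        weight.normal_(mean=0.0, std=std)

w = torch.empty(256, 128)
kaiming_init_(w)  # biases would be initialized to zero separately
```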
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
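The four equations above translate line for line into a forward pass; a minimal NumPy sketch for a minibatch of shape (m, d), using training-mode statistics only (no running averages):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """x: (m, d) minibatch; gamma, beta: (d,) learnable scale and shift."""
    mu = x.mean(axis=0)                    # minibatch mean
    var = x.var(axis=0)                    # minibatch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize
    return gamma * x_hat + beta            # scale and shift
```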
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/yassouali/pytorch_segmentation/blob/8b8e3ee20a3aa733cb19fc158ad5d7773ed6da7f/models/segnet.py#L9",
"description": "**SegNet** is a semantic segmentation model. This core trainable segmentation architecture consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the\r\nVGG16 network. The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature maps. Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to\r\nperform non-linear upsampling.",
"full_name": "SegNet",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "SegNet",
"source_title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation",
"source_url": "http://arxiv.org/abs/1511.00561v3"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
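The formula above is usually implemented with a max-shift for numerical stability, which leaves the result unchanged; a minimal NumPy sketch over the logits $x^{T}w_{j}$:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis: exp(z) / sum(exp(z))."""
    z = logits - logits.max(axis=-1, keepdims=True)  # shift; output is unchanged
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

print(softmax(np.array([2.0, 1.0, 0.1])))  # probabilities summing to 1
```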
{
"code_snippet_url": "https://github.com/mindspore-ecosystem/mindcv/blob/main/mindcv/models/shufflenetv1.py",
"description": "**ShuffleNet** is a convolutional neural network designed specially for mobile devices with very limited computing power. The architecture utilizes two new operations, pointwise group [convolution](https://paperswithcode.com/method/convolution) and [channel shuffle](https://paperswithcode.com/method/channel-shuffle), to reduce computation cost while maintaining accuracy.",
"full_name": "ShuffleNet",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "ShuffleNet",
"source_title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices",
"source_url": "http://arxiv.org/abs/1707.01083v2"
}
] |
https://paperswithcode.com/paper/generic-coreset-for-scalable-learning-of
|
1802.07382
| null | null |
Generic Coreset for Scalable Learning of Monotonic Kernels: Logistic Regression, Sigmoid and more
|
A coreset (or core-set) is a small weighted \emph{subset} $Q$ of an input set $P$ with respect to a given \emph{monotonic} function $f:\mathbb{R}\to\mathbb{R}$ that \emph{provably} approximates its fitting loss $\sum_{p\in P}f(p\cdot x)$ for \emph{any} given $x\in\mathbb{R}^d$. Using $Q$ we can obtain an approximation of $x^*$ that minimizes this loss, by running \emph{existing} optimization algorithms on $Q$. In this work we provide: (i) A lower bound which proves that there are sets with no coresets smaller than $n=|P|$ for general monotonic loss functions. (ii) A proof that, under a natural assumption that holds e.g. for logistic regression and the sigmoid activation function, a small coreset exists for \emph{any} input $P$. (iii) A generic coreset construction algorithm that computes such a small coreset $Q$ in $O(nd+n\log n)$ time, and (iv) Experimental results which demonstrate that our coresets are effective and are much smaller in practice than predicted in theory.
| null |
https://arxiv.org/abs/1802.07382v3
|
https://arxiv.org/pdf/1802.07382v3.pdf
| null |
[
"Elad Tolochinsky",
"Ibrahim Jubran",
"Dan Feldman"
] |
[
"regression"
] | 2018-02-21T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Coresets",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Clustering** methods cluster a dataset so that similar datapoints are located in the same group. Below you can find a continuously updating list of clustering methods.",
"name": "Clustering",
"parent": null
},
"name": "Coresets",
"source_title": "Active Learning for Convolutional Neural Networks: A Core-Set Approach",
"source_url": "http://arxiv.org/abs/1708.00489v4"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/building-bayesian-neural-networks-with-blocks
|
1806.03563
| null | null |
Building Bayesian Neural Networks with Blocks: On Structure, Interpretability and Uncertainty
|
We provide simple schemes to build Bayesian Neural Networks (BNNs), block by
block, inspired by a recent idea of computation skeletons. We show how by
adjusting the types of blocks that are used within the computation skeleton, we
can identify interesting relationships with Deep Gaussian Processes (DGPs),
deep kernel learning (DKL), random features type approximation and other
topics. We give strategies to approximate the posterior via doubly stochastic
variational inference for such models which yield uncertainty estimates. We
give a detailed theoretical analysis and point out extensions that may be of
independent interest. As a special case, we instantiate our procedure to define
a Bayesian {\em additive} Neural network -- a promising strategy for identifying
statistical interactions that has direct benefits for obtaining interpretable
models.
| null |
http://arxiv.org/abs/1806.03563v1
|
http://arxiv.org/pdf/1806.03563v1.pdf
| null |
[
"Hao Henry Zhou",
"Yunyang Xiong",
"Vikas Singh"
] |
[
"Gaussian Processes",
"Variational Inference"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/what-knowledge-is-needed-to-solve-the-rte5
|
1806.03561
| null | null |
What Knowledge is Needed to Solve the RTE5 Textual Entailment Challenge?
|
This document gives a knowledge-oriented analysis of about 20 interesting
Recognizing Textual Entailment (RTE) examples, drawn from the 2005 RTE5
competition test set. The analysis ignores shallow statistical matching
techniques between T and H, and rather asks: What would it take to reasonably
infer that T implies H? What world knowledge would be needed for this task?
Although such knowledge-intensive techniques have not had much success in RTE
evaluations, ultimately an intelligent system should be expected to know and
deploy the kind of world knowledge required to perform this kind of reasoning.
The selected examples are typically ones which our RTE system (called BLUE)
got wrong and ones which require world knowledge to answer. In particular, the
analysis covers cases where there was near-perfect lexical overlap between T
and H, yet the entailment was NO, i.e., examples that most likely all current
RTE systems will have got wrong. A nice example is #341 (page 26), which
requires inferring from "a river floods" that "a river overflows its banks".
Seems it should be easy, right? Enjoy!
| null |
http://arxiv.org/abs/1806.03561v1
|
http://arxiv.org/pdf/1806.03561v1.pdf
| null |
[
"Peter Clark"
] |
[
"Natural Language Inference",
"RTE",
"World Knowledge"
] | 2018-06-10T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/paultsw/nice_pytorch/blob/15cfc543fc3dc81ee70398b8dfc37b67269ede95/nice/layers.py#L109",
"description": "**Affine Coupling** is a method for implementing a normalizing flow (where we stack a sequence of invertible bijective transformation functions). Affine coupling is one of these bijective transformation functions. Specifically, it is an example of a reversible transformation where the forward function, the reverse function and the log-determinant are computationally efficient. For the forward function, we split the input dimension into two parts:\r\n\r\n$$ \\mathbf{x}\\_{a}, \\mathbf{x}\\_{b} = \\text{split}\\left(\\mathbf{x}\\right) $$\r\n\r\nThe second part stays the same $\\mathbf{x}\\_{b} = \\mathbf{y}\\_{b}$, while the first part $\\mathbf{x}\\_{a}$ undergoes an affine transformation, where the parameters for this transformation are learnt using the second part $\\mathbf{x}\\_{b}$ being put through a neural network. Together we have:\r\n\r\n$$ \\left(\\log{\\mathbf{s}, \\mathbf{t}}\\right) = \\text{NN}\\left(\\mathbf{x}\\_{b}\\right) $$\r\n\r\n$$ \\mathbf{s} = \\exp\\left(\\log{\\mathbf{s}}\\right) $$\r\n\r\n$$ \\mathbf{y}\\_{a} = \\mathbf{s} \\odot \\mathbf{x}\\_{a} + \\mathbf{t} $$\r\n\r\n$$ \\mathbf{y}\\_{b} = \\mathbf{x}\\_{b} $$\r\n\r\n$$ \\mathbf{y} = \\text{concat}\\left(\\mathbf{y}\\_{a}, \\mathbf{y}\\_{b}\\right) $$\r\n\r\nImage: [GLOW](https://paperswithcode.com/method/glow)",
"full_name": "Affine Coupling",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Bijective Transformations** are transformations that are bijective, i.e. they can be reversed. They are used within the context of normalizing flow models. Below you can find a continuously updating list of bijective transformation methods.",
"name": "Bijective Transformation",
"parent": null
},
"name": "Affine Coupling",
"source_title": "NICE: Non-linear Independent Components Estimation",
"source_url": "http://arxiv.org/abs/1410.8516v6"
},
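The forward equations above (split the input, predict $(\log \mathbf{s}, \mathbf{t})$ from $\mathbf{x}_b$, transform $\mathbf{x}_a$) fit in a small module; a minimal PyTorch sketch assuming an even input dimension, with the conditioner network chosen arbitrarily:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """y_a = exp(log_s) * x_a + t, y_b = x_b, with (log_s, t) = NN(x_b)."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        assert dim % 2 == 0, "sketch assumes an even input dimension"
        half = dim // 2
        self.net = nn.Sequential(nn.Linear(half, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * half))

    def forward(self, x):
        xa, xb = x.chunk(2, dim=-1)               # split input into two halves
        log_s, t = self.net(xb).chunk(2, dim=-1)  # parameters from the untouched half
        ya = torch.exp(log_s) * xa + t            # affine-transform the first half
        log_det = log_s.sum(dim=-1)               # log|det Jacobian| = sum(log_s)
        return torch.cat([ya, xb], dim=-1), log_det
```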
{
"code_snippet_url": "https://github.com/ex4sperans/variational-inference-with-normalizing-flows/blob/922b569f851e02fa74700cd0754fe2ef5c1f3180/flow.py#L9",
"description": "**Normalizing Flows** are a method for constructing complex distributions by transforming a\r\nprobability density through a series of invertible mappings. By repeatedly applying the rule for change of variables, the initial density ‘flows’ through the sequence of invertible mappings. At the end of this sequence we obtain a valid probability distribution and hence this type of flow is referred to as a normalizing flow.\r\n\r\nIn the case of finite flows, the basic rule for the transformation of densities considers an invertible, smooth mapping $f : \\mathbb{R}^{d} \\rightarrow \\mathbb{R}^{d}$ with inverse $f^{-1} = g$, i.e. the composition $g \\cdot f\\left(z\\right) = z$. If we use this mapping to transform a random variable $z$ with distribution $q\\left(z\\right)$, the resulting random variable $z' = f\\left(z\\right)$ has a distribution:\r\n\r\n$$ q\\left(\\mathbf{z}'\\right) = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}^{-1}}{\\delta{\\mathbf{z'}}}\\bigr\\vert = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}}{\\delta{\\mathbf{z}}}\\bigr\\vert ^{-1} $$\r\n\f\r\nwhere the last equality can be seen by applying the chain rule (inverse function theorem) and is a property of Jacobians of invertible functions. We can construct arbitrarily complex densities by composing several simple maps and successively applying the above equation. The density $q\\_{K}\\left(\\mathbf{z}\\right)$ obtained by successively transforming a random variable $z\\_{0}$ with distribution $q\\_{0}$ through a chain of $K$ transformations $f\\_{k}$ is:\r\n\r\n$$ z\\_{K} = f\\_{K} \\cdot \\dots \\cdot f\\_{2} \\cdot f\\_{1}\\left(z\\_{0}\\right) $$\r\n\r\n$$ \\ln{q}\\_{K}\\left(z\\_{K}\\right) = \\ln{q}\\_{0}\\left(z\\_{0}\\right) − \\sum^{K}\\_{k=1}\\ln\\vert\\det\\frac{\\delta{f\\_{k}}}{\\delta{\\mathbf{z\\_{k-1}}}}\\vert $$\r\n\f\r\nThe path traversed by the random variables $z\\_{k} = f\\_{k}\\left(z\\_{k-1}\\right)$ with initial distribution $q\\_{0}\\left(z\\_{0}\\right)$ is called the flow and the path formed by the successive distributions $q\\_{k}$ is a normalizing flow.",
"full_name": "Normalizing Flows",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Distribution Approximation** methods aim to approximate a complex distribution. Below you can find a continuously updating list of distribution approximation methods.",
"name": "Distribution Approximation",
"parent": null
},
"name": "Normalizing Flows",
"source_title": "Variational Inference with Normalizing Flows",
"source_url": "http://arxiv.org/abs/1505.05770v6"
}
] |
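The affine-coupling equations quoted in the method entry above translate almost line-for-line into code. Below is a minimal NumPy sketch of the forward and inverse passes, assuming a stand-in callable `nn` that plays the role of the learned network returning `(log_s, t)`; it illustrates the description, not any particular library's implementation.

```python
import numpy as np

def affine_coupling_forward(x, nn):
    """Forward pass: y_a = s * x_a + t, y_b = x_b, with (log_s, t) = nn(x_b)."""
    d = x.shape[-1] // 2
    x_a, x_b = x[..., :d], x[..., d:]        # split(x)
    log_s, t = nn(x_b)                       # parameters come from x_b only
    y_a = np.exp(log_s) * x_a + t            # elementwise affine transform
    log_det = log_s.sum(axis=-1)             # log|det J| = sum(log_s)
    return np.concatenate([y_a, x_b], axis=-1), log_det

def affine_coupling_inverse(y, nn):
    """Inverse pass: y_b = x_b is untouched, so nn(y_b) recovers (log_s, t)."""
    d = y.shape[-1] // 2
    y_a, y_b = y[..., :d], y[..., d:]
    log_s, t = nn(y_b)
    x_a = (y_a - t) * np.exp(-log_s)
    return np.concatenate([x_a, y_b], axis=-1)

# Toy check with a hypothetical "network": log_s and t must match x_a's shape.
nn = lambda xb: (np.tanh(xb), xb)
x = np.random.randn(4, 6)
y, _ = affine_coupling_forward(x, nn)
assert np.allclose(affine_coupling_inverse(y, nn), x)
```

Because the second half passes through unchanged, the inverse can query the same network on $\mathbf{y}_b$ and undo the transform exactly, which is what makes both the reverse pass and the log-determinant cheap.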
https://paperswithcode.com/paper/semantic-correspondence-a-hierarchical
|
1806.03560
| null | null |
Semantic Correspondence: A Hierarchical Approach
|
Establishing semantic correspondence across images when the objects in the
images have undergone complex deformations remains a challenging task in the
field of computer vision. In this paper, we propose a hierarchical method to
tackle this problem by first semantically targeting the foreground objects to
localize the search space and then looking deeply into multiple levels of the
feature representation to search for point-level correspondence. In contrast to
existing approaches, which typically penalize large discrepancies, our approach
allows for significant displacements, with the aim to accommodate large
deformations of the objects in scene. Localizing the search space by
semantically matching object-level correspondence, our method robustly handles
large deformations of objects. Representing the target region by concatenated
hypercolumn features which take into account the hierarchical levels of the
surrounding context, helps to clear the ambiguity to further improve the
accuracy. By conducting multiple experiments across scenes with non-rigid
objects, we validate the proposed approach, and show that it outperforms the
state of the art methods for semantic correspondence establishment.
| null |
http://arxiv.org/abs/1806.03560v1
|
http://arxiv.org/pdf/1806.03560v1.pdf
| null |
[
"Akila Pemasiri",
"Kien Nguyen",
"Sridha Sridhara",
"and Clinton Fookes"
] |
[
"Semantic correspondence"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/genesis-of-basic-and-multi-layer-echo-state
|
1804.08996
| null | null |
Genesis of Basic and Multi-Layer Echo State Network Recurrent Autoencoders for Efficient Data Representations
|
It is a widely accepted fact that data representations intervene noticeably
in machine learning tools. The better they are defined, the better the
performance results are. Feature extraction-based methods such as autoencoders
are conceived for finding more accurate data representations from the original
ones. They perform efficiently on a specific task in terms of (1) high
accuracy, (2) large short-term memory and (3) low execution time. The Echo
State Network (ESN) is a recent specific kind of Recurrent Neural Network
which exhibits very rich dynamics thanks to its reservoir-based hidden layer.
It is widely used in dealing with complex non-linear problems and it has
outperformed classical approaches in a number of tasks including regression,
classification, etc. In this paper, the noticeable dynamism and the large
memory provided by the ESN and the strength of autoencoders in feature
extraction are gathered within an ESN Recurrent Autoencoder (ESN-RAE). In
order to provide a sturdier alternative to conventional reservoir-based
networks, not only the basic single-layer ESN is used as an autoencoder, but
also the Multi-Layer ESN (ML-ESN-RAE). The new features, once extracted from
the ESN's hidden layer, are applied to classification tasks. The
classification rates rise considerably compared to those obtained when
applying the original data features. An accuracy-based comparison is performed
between the proposed recurrent AEs and two variants of ELM feed-forward AEs
(basic and ML) in both noise-free and noisy environments. The empirical study
reveals the main contribution of recurrent connections in improving the
classification performance results.
| null |
http://arxiv.org/abs/1804.08996v2
|
http://arxiv.org/pdf/1804.08996v2.pdf
| null |
[
"Naima Chouikhi",
"Boudour Ammar",
"Adel M. ALIMI"
] |
[
"Classification",
"General Classification"
] | 2018-04-24T00:00:00 | null | null | null | null |
[] |
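As a concrete illustration of the reservoir dynamics this abstract relies on, here is a minimal NumPy sketch of a basic ESN used autoencoder-style: a fixed random reservoir produces states, and a ridge-regression readout maps the states back to the inputs. The sizes, the spectral-radius rescaling to 0.9, and the ridge constant are common conventions assumed for the sketch, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 3, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))     # fixed random input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))       # fixed random reservoir weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(U):
    """Collect reservoir states for an input sequence U of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)            # reservoir state update
        states.append(x.copy())
    return np.asarray(states)

# Autoencoder-style readout: ridge regression from states back to the inputs;
# the reservoir states then serve as the extracted feature representation.
U = rng.standard_normal((200, n_in))
X = run_reservoir(U)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ U)
reconstruction = X @ W_out                        # approximates U
```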
https://paperswithcode.com/paper/sparse-over-complete-patch-matching
|
1806.03556
| null | null |
Sparse Over-complete Patch Matching
|
Image patch matching, which is the process of identifying corresponding
patches across images, has been used as a subroutine for many computer vision
and image processing tasks. State-of-the-art patch matching techniques take
image patches as input to a convolutional neural network to extract the patch
features and evaluate their similarity. Our aim in this paper is to improve on
the state-of-the-art patch matching techniques by observing the fact that a
sparse-overcomplete representation of an image possesses statistical
properties of natural visual scenes which can be exploited for patch matching.
We propose a new paradigm which captures image patch details by sparsely
encoding the patch and subsequently using this sparse representation as input
to a neural network to compare the patches. As sparse coding is based on a
generative model of natural image patches, it can represent the patch in terms
of the fundamental visual components from which it is composed, leading to
similar sparse codes for patches which are built from similar components. Once
the sparse coded features are extracted, we employ a fully-connected neural
network, which captures the non-linear relationships between features, for
comparison. We have evaluated our approach using the Liberty and Notredame
subsets of the popular UBC patch dataset and set a new benchmark outperforming
all state-of-the-art patch matching techniques for these datasets.
| null |
http://arxiv.org/abs/1806.03556v2
|
http://arxiv.org/pdf/1806.03556v2.pdf
| null |
[
"Akila Pemasiri",
"Kien Nguyen",
"Sridha Sridharan",
"Clinton Fookes"
] |
[
"Patch Matching"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/consistent-position-bias-estimation-without
|
1806.03555
| null | null |
Consistent Position Bias Estimation without Online Interventions for Learning-to-Rank
|
Presentation bias is one of the key challenges when learning from implicit
feedback in search engines, as it confounds the relevance signal with
uninformative signals due to position in the ranking, saliency, and other
presentation factors. While it was recently shown how counterfactual
learning-to-rank (LTR) approaches \cite{Joachims/etal/17a} can provably
overcome presentation bias if observation propensities are known, it remains to
show how to accurately estimate these propensities. In this paper, we propose
the first method for producing consistent propensity estimates without manual
relevance judgments, disruptive interventions, or restrictive relevance
modeling assumptions. We merely require that we have implicit feedback data
from multiple different ranking functions. Furthermore, we argue that our
estimation technique applies to an extended class of Contextual Position-Based
Propensity Models, where propensities not only depend on position but also on
observable features of the query and document. Initial simulation studies
confirm that the approach is scalable, accurate, and robust.
| null |
http://arxiv.org/abs/1806.03555v1
|
http://arxiv.org/pdf/1806.03555v1.pdf
| null |
[
"Aman Agarwal",
"Ivan Zaitsev",
"Thorsten Joachims"
] |
[
"counterfactual",
"Learning-To-Rank",
"Position"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/an-estimation-and-analysis-framework-for-the
|
1806.03551
| null | null |
An Estimation and Analysis Framework for the Rasch Model
|
The Rasch model is widely used for item response analysis in applications
ranging from recommender systems to psychology, education, and finance. While a
number of estimators have been proposed for the Rasch model over the last
decades, the available analytical performance guarantees are mostly asymptotic.
This paper provides a framework that relies on a novel linear minimum
mean-squared error (L-MMSE) estimator which enables an exact, nonasymptotic,
and closed-form analysis of the parameter estimation error under the Rasch
model. The proposed framework provides guidelines on the number of items and
responses required to attain low estimation errors in tests or surveys. We
furthermore demonstrate its efficacy on a number of real-world collaborative
filtering datasets, which reveals that the proposed L-MMSE estimator performs
on par with state-of-the-art nonlinear estimators in terms of predictive
performance.
| null |
http://arxiv.org/abs/1806.03551v1
|
http://arxiv.org/pdf/1806.03551v1.pdf
|
ICML 2018 7
|
[
"Andrew S. Lan",
"Mung Chiang",
"Christoph Studer"
] |
[
"Collaborative Filtering",
"parameter estimation",
"Recommendation Systems"
] | 2018-06-09T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1977
|
http://proceedings.mlr.press/v80/lan18a/lan18a.pdf
|
an-estimation-and-analysis-framework-for-the-1
| null |
[] |
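For readers unfamiliar with the estimator family, the textbook linear MMSE form that this paper builds on is short enough to state in code. The sketch below is the generic L-MMSE estimate; the paper's contribution is deriving the required first and second moments in closed form under the Rasch model, which is not reproduced here.

```python
import numpy as np

def lmmse_estimate(y, mu_x, mu_y, C_xy, C_yy):
    """Generic linear MMSE estimator:
    x_hat = mu_x + C_xy @ C_yy^{-1} @ (y - mu_y).
    The estimate is linear in the observations y, which is what makes an
    exact, nonasymptotic analysis of the estimation error tractable.
    """
    return mu_x + C_xy @ np.linalg.solve(C_yy, y - mu_y)
```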
https://paperswithcode.com/paper/linear-spectral-estimators-and-an-application
|
1806.03547
| null | null |
Linear Spectral Estimators and an Application to Phase Retrieval
|
Phase retrieval refers to the problem of recovering real- or complex-valued
vectors from magnitude measurements. The best-known algorithms for this problem
are iterative in nature and rely on so-called spectral initializers that
provide accurate initialization vectors. We propose a novel class of estimators
suitable for general nonlinear measurement systems, called linear spectral
estimators (LSPEs), which can be used to compute accurate initialization
vectors for phase retrieval problems. The proposed LSPEs not only provide
accurate initialization vectors for noisy phase retrieval systems with
structured or random measurement matrices, but also enable the derivation of
sharp and nonasymptotic mean-squared error bounds. We demonstrate the efficacy
of LSPEs on synthetic and real-world phase retrieval problems, and show that
our estimators significantly outperform existing methods for structured
measurement systems that arise in practice.
| null |
http://arxiv.org/abs/1806.03547v1
|
http://arxiv.org/pdf/1806.03547v1.pdf
|
ICML 2018 7
|
[
"Ramina Ghods",
"Andrew S. Lan",
"Tom Goldstein",
"Christoph Studer"
] |
[
"Retrieval"
] | 2018-06-09T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2280
|
http://proceedings.mlr.press/v80/ghods18a/ghods18a.pdf
|
linear-spectral-estimators-and-an-application-1
| null |
[] |
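For context, the classical (nonlinear) spectral initializer that estimators like LSPEs are compared against can be sketched in a few lines: take the leading eigenvector of a measurement-weighted data matrix. This is the standard construction, not the paper's linear estimator, and the final scaling is one common convention.

```python
import numpy as np

def classical_spectral_init(A, y):
    """Spectral initialization for phase retrieval from magnitudes y = |A x|.
    Returns the leading eigenvector of D = (1/m) sum_i y_i^2 a_i a_i^T,
    rescaled to the average measurement energy.
    """
    m, _ = A.shape
    D = (A * (y ** 2)[:, None]).T @ A / m      # weighted outer-product sum
    _, eigvecs = np.linalg.eigh(D)             # eigenvalues in ascending order
    x0 = eigvecs[:, -1]                        # leading eigenvector
    return x0 * np.sqrt(np.mean(y ** 2))
```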
https://paperswithcode.com/paper/hierarchical-bi-level-multi-objective
|
1806.01016
| null | null |
Hierarchical Bi-level Multi-Objective Evolution of Single- and Multi-layer Echo State Network Autoencoders for Data Representations
|
The Echo State Network (ESN) is a distinguished kind of recurrent neural
network. It is built upon a sparse, random and large hidden infrastructure
called a reservoir. ESNs have succeeded in dealing with several non-linear
problems such as prediction, classification, etc. Thanks to its rich dynamics,
the ESN is used as an Autoencoder (AE) to extract features from original data
representations. The ESN is used not only in its basic single-layer form but
also in the recently proposed Multi-Layer (ML) architecture. Properly setting
the ESN (basic and ML) architecture and training parameters is a crucial and
laborious task. Generally, a number of parameters (hidden neurons, sparsity
rates, input scaling) is manually altered to achieve a minimum learning error.
However, this hand-crafted tuning, on the one hand, may not guarantee the best
training results and, on the other hand, can raise the network's complexity.
In this paper, a hierarchical bi-level evolutionary optimization is proposed
to deal with these issues. The first level includes a multi-objective
architecture optimization providing maximum learning accuracy while keeping
the complexity low. Multi-objective Particle Swarm Optimization (MOPSO) is
used to optimize the ESN structure in a way that trades off decreasing network
complexity against increasing accuracy. A pareto-front of optimal solutions is
generated by the end of the MOPSO process. These solutions represent the set
of candidates that succeeded in providing a compromise between the different
objectives (learning error and network complexity). At the second level, each
of the solutions already found undergoes a mono-objective weights optimization
to enhance the obtained pareto-front. Empirical results confirm the
effectiveness of the evolved ESN recurrent AEs (basic and ML) for noisy and
noise-free data.
| null |
http://arxiv.org/abs/1806.01016v2
|
http://arxiv.org/pdf/1806.01016v2.pdf
| null |
[
"Naima Chouikhi",
"Boudour Ammar",
"Adel M. ALIMI"
] |
[] | 2018-06-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/not-all-samples-are-created-equal-deep
|
1803.00942
| null | null |
Not All Samples Are Created Equal: Deep Learning with Importance Sampling
|
Deep neural network training spends most of the computation on examples that are properly handled, and could be ignored. We propose to mitigate this phenomenon with a principled importance sampling scheme that focuses computation on "informative" examples, and reduces the variance of the stochastic gradients during training. Our contribution is twofold: first, we derive a tractable upper bound to the per-sample gradient norm, and second we derive an estimator of the variance reduction achieved with importance sampling, which enables us to switch it on when it will result in an actual speedup. The resulting scheme can be used by changing a few lines of code in a standard SGD procedure, and we demonstrate experimentally, on image classification, CNN fine-tuning, and RNN training, that for a fixed wall-clock time budget, it provides a reduction of the train losses of up to an order of magnitude and a relative improvement of test errors between 5% and 17%.
|
Deep neural network training spends most of the computation on examples that are properly handled, and could be ignored.
|
https://arxiv.org/abs/1803.00942v3
|
https://arxiv.org/pdf/1803.00942v3.pdf
|
ICML 2018 7
|
[
"Angelos Katharopoulos",
"François Fleuret"
] |
[
"All",
"image-classification",
"Image Classification"
] | 2018-03-02T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2178
|
http://proceedings.mlr.press/v80/katharopoulos18a/katharopoulos18a.pdf
|
not-all-samples-are-created-equal-deep-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] |
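The sampling scheme this abstract describes can be sketched independently of any framework: draw minibatch indices with probability proportional to an importance score, then reweight each gradient by 1/(N p_i) so the stochastic gradient stays unbiased. The scores below are a stand-in (e.g. recent per-sample losses); the paper instead derives a tractable upper bound on the per-sample gradient norm.

```python
import numpy as np

def importance_sampled_batch(scores, batch_size, rng):
    """Draw a minibatch proportional to `scores`; return unbiasedness weights."""
    p = scores / scores.sum()
    idx = rng.choice(len(scores), size=batch_size, replace=True, p=p)
    weights = 1.0 / (len(scores) * p[idx])     # compensates the biased sampling
    return idx, weights

rng = np.random.default_rng(0)
scores = rng.random(1000) + 1e-3               # stand-in importance estimates
idx, w = importance_sampled_batch(scores, 32, rng)
# In the SGD step, each sampled example's gradient is scaled by its weight:
#   grad = mean_i( w[i] * grad_loss(data[idx[i]]) )
```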
https://paperswithcode.com/paper/representation-learning-on-graphs-with
|
1806.03536
| null | null |
Representation Learning on Graphs with Jumping Knowledge Networks
|
Recent deep learning approaches for representation learning on graphs follow
a neighborhood aggregation procedure. We analyze some important properties of
these models, and propose a strategy to overcome those. In particular, the
range of "neighboring" nodes that a node's representation draws from strongly
depends on the graph structure, analogous to the spread of a random walk. To
adapt to local neighborhood properties and tasks, we explore an architecture --
jumping knowledge (JK) networks -- that flexibly leverages, for each node,
different neighborhood ranges to enable better structure-aware representation.
In a number of experiments on social, bioinformatics and citation networks, we
demonstrate that our model achieves state-of-the-art performance. Furthermore,
combining the JK framework with models like Graph Convolutional Networks,
GraphSAGE and Graph Attention Networks consistently improves those models'
performance.
|
Furthermore, combining the JK framework with models like Graph Convolutional Networks, GraphSAGE and Graph Attention Networks consistently improves those models' performance.
|
http://arxiv.org/abs/1806.03536v2
|
http://arxiv.org/pdf/1806.03536v2.pdf
|
ICML 2018 7
|
[
"Keyulu Xu",
"Chengtao Li",
"Yonglong Tian",
"Tomohiro Sonobe",
"Ken-ichi Kawarabayashi",
"Stefanie Jegelka"
] |
[
"Graph Attention",
"Node Classification",
"Node Property Prediction",
"Representation Learning"
] | 2018-06-09T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2453
|
http://proceedings.mlr.press/v80/xu18c/xu18c.pdf
|
representation-learning-on-graphs-with-1
| null |
[
{
"code_snippet_url": "",
"description": "A Graph Convolutional Network, or GCN, is an approach for semi-supervised learning on graph-structured data. It is based on an efficient variant of convolutional neural networks which operate directly on graphs.\r\n\r\nImage source: [Semi-Supervised Classification with Graph Convolutional Networks](https://arxiv.org/pdf/1609.02907v4.pdf)",
"full_name": "Graph Convolutional Networks",
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "The Graph Methods include neural network architectures for learning on graphs with prior structure information, popularly called as Graph Neural Networks (GNNs).\r\n\r\nRecently, deep learning approaches are being extended to work on graph-structured data, giving rise to a series of graph neural networks addressing different challenges. Graph neural networks are particularly useful in applications where data are generated from non-Euclidean domains and represented as graphs with complex relationships. \r\n\r\nSome tasks where GNNs are widely used include [node classification](https://paperswithcode.com/task/node-classification), [graph classification](https://paperswithcode.com/task/graph-classification), [link prediction](https://paperswithcode.com/task/link-prediction), and much more. \r\n\r\nIn the taxonomy presented by [Wu et al. (2019)](https://paperswithcode.com/paper/a-comprehensive-survey-on-graph-neural), graph neural networks can be divided into four categories: **recurrent graph neural networks**, **convolutional graph neural networks**, **graph autoencoders**, and **spatial-temporal graph neural networks**.\r\n\r\nImage source: [A Comprehensive Survey on Graph NeuralNetworks](https://arxiv.org/pdf/1901.00596.pdf)",
"name": "Graph Models",
"parent": null
},
"name": "Graph Convolutional Networks",
"source_title": "Semi-Supervised Classification with Graph Convolutional Networks",
"source_url": "http://arxiv.org/abs/1609.02907v4"
}
] |
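To make the two ingredients of this record concrete, here is a minimal NumPy sketch of one GCN propagation step (the Kipf & Welling normalization referenced in the method entry) together with the concatenation variant of jumping-knowledge aggregation; the ReLU choice and dense matrices are illustrative simplifications.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

def jk_concat(layer_outputs):
    """Jumping-knowledge aggregation (concatenation variant): each node's
    final representation combines every intermediate layer's output, so
    different nodes can effectively draw on different neighborhood ranges."""
    return np.concatenate(layer_outputs, axis=-1)
```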
https://paperswithcode.com/paper/cell-detection-with-star-convex-polygons
|
1806.03535
| null | null |
Cell Detection with Star-convex Polygons
|
Automatic detection and segmentation of cells and nuclei in microscopy images
is important for many biological applications. Recent successful learning-based
approaches include per-pixel cell segmentation with subsequent pixel grouping,
or localization of bounding boxes with subsequent shape refinement. In
situations of crowded cells, these can be prone to segmentation errors, such as
falsely merging bordering cells or suppressing valid cell instances due to the
poor approximation with bounding boxes. To overcome these issues, we propose to
localize cell nuclei via star-convex polygons, which are a much better shape
representation as compared to bounding boxes and thus do not need shape
refinement. To that end, we train a convolutional neural network that predicts
for every pixel a polygon for the cell instance at that position. We
demonstrate the merits of our approach on two synthetic datasets and one
challenging dataset of diverse fluorescence microscopy images.
|
Automatic detection and segmentation of cells and nuclei in microscopy images is important for many biological applications.
|
http://arxiv.org/abs/1806.03535v2
|
http://arxiv.org/pdf/1806.03535v2.pdf
| null |
[
"Uwe Schmidt",
"Martin Weigert",
"Coleman Broaddus",
"Gene Myers"
] |
[
"Cell Detection",
"Cell Segmentation",
"Medical Image Segmentation",
"Segmentation",
"valid"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
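The star-convex representation in this abstract boils down to, for each pixel, a vector of distances to the object boundary along a fixed set of radial directions. Below is a rough re-implementation of that target computation (a ray-marching sketch on a binary mask; not the authors' code, and the unit-step tracing is a simplification).

```python
import numpy as np

def star_distances(mask, center, n_rays=32, max_steps=256):
    """Distances from `center` to the boundary of a binary instance `mask`
    along n_rays equally spaced directions -- the star-convex polygon
    target a network would regress for that pixel."""
    angles = 2 * np.pi * np.arange(n_rays) / n_rays
    dists = np.zeros(n_rays)
    for k, a in enumerate(angles):
        dy, dx = np.sin(a), np.cos(a)
        for step in range(1, max_steps):
            y = int(round(center[0] + step * dy))
            x = int(round(center[1] + step * dx))
            outside = not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1])
            if outside or not mask[y, x]:
                dists[k] = step - 1            # last step still inside
                break
        else:
            dists[k] = max_steps               # ray never left the mask
    return dists
```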
https://paperswithcode.com/paper/learning-to-search-in-long-documents-using
|
1806.03529
| null | null |
Learning to Search in Long Documents Using Document Structure
|
Reading comprehension models are based on recurrent neural networks that
sequentially process the document tokens. As interest turns to answering more
complex questions over longer documents, sequential reading of large portions
of text becomes a substantial bottleneck. Inspired by how humans use document
structure, we propose a novel framework for reading comprehension. We represent
documents as trees, and model an agent that learns to interleave quick
navigation through the document tree with more expensive answer extraction. To
encourage exploration of the document tree, we propose a new algorithm, based
on Deep Q-Network (DQN), which strategically samples tree nodes at training
time. Empirically we find our algorithm improves question answering performance
compared to DQN and a strong information-retrieval (IR) baseline, and that
ensembling our model with the IR baseline results in further gains in
performance.
|
Reading comprehension models are based on recurrent neural networks that sequentially process the document tokens.
|
http://arxiv.org/abs/1806.03529v2
|
http://arxiv.org/pdf/1806.03529v2.pdf
|
COLING 2018 8
|
[
"Mor Geva",
"Jonathan Berant"
] |
[
"Information Retrieval",
"Question Answering",
"Reading Comprehension",
"Retrieval"
] | 2018-06-09T00:00:00 |
https://aclanthology.org/C18-1014
|
https://aclanthology.org/C18-1014.pdf
|
learning-to-search-in-long-documents-using-2
| null |
[
{
"code_snippet_url": null,
"description": "**Q-Learning** is an off-policy temporal difference control algorithm:\r\n\r\n$$Q\\left(S\\_{t}, A\\_{t}\\right) \\leftarrow Q\\left(S\\_{t}, A\\_{t}\\right) + \\alpha\\left[R_{t+1} + \\gamma\\max\\_{a}Q\\left(S\\_{t+1}, a\\right) - Q\\left(S\\_{t}, A\\_{t}\\right)\\right] $$\r\n\r\nThe learned action-value function $Q$ directly approximates $q\\_{*}$, the optimal action-value function, independent of the policy being followed.\r\n\r\nSource: Sutton and Barto, Reinforcement Learning, 2nd Edition",
"full_name": "Q-Learning",
"introduced_year": 1984,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Off-Policy TD Control",
"parent": null
},
"name": "Q-Learning",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **DQN**, or Deep Q-Network, approximates a state-value function in a [Q-Learning](https://paperswithcode.com/method/q-learning) framework with a neural network. In the Atari Games case, they take in several frames of the game as an input and output state values for each action as an output. \r\n\r\nIt is usually used in conjunction with [Experience Replay](https://paperswithcode.com/method/experience-replay), for storing the episode steps in memory for off-policy learning, where samples are drawn from the replay memory at random. Additionally, the Q-Network is usually optimized towards a frozen target network that is periodically updated with the latest weights every $k$ steps (where $k$ is a hyperparameter). The latter makes training more stable by preventing short-term oscillations from a moving target. The former tackles autocorrelation that would occur from on-line learning, and having a replay memory makes the problem more like a supervised learning problem.\r\n\r\nImage Source: [here](https://www.researchgate.net/publication/319643003_Autonomous_Quadrotor_Landing_using_Deep_Reinforcement_Learning)",
"full_name": "Deep Q-Network",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Q-Learning Networks",
"parent": "Off-Policy TD Control"
},
"name": "DQN",
"source_title": "Playing Atari with Deep Reinforcement Learning",
"source_url": "http://arxiv.org/abs/1312.5602v1"
}
] |
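Since the Q-Learning and DQN entries above quote the same temporal-difference target, a single tabular update makes the connection explicit; a DQN replaces the table with a network trained by regression toward this same bootstrapped target. A minimal sketch:

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular step of
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])   # bootstrapped target
    Q[s, a] += alpha * (td_target - Q[s, a])    # move estimate toward target
    return Q
```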
https://paperswithcode.com/paper/second-language-acquisition-modeling-an
|
1806.04525
| null | null |
Second Language Acquisition Modeling: An Ensemble Approach
|
Accurate prediction of students' knowledge is a fundamental building block of
personalized learning systems. Here, we propose a novel ensemble model to
predict student knowledge gaps. Applying our approach to student trace data
from the online educational platform Duolingo, we achieved the highest score
on both evaluation metrics for all three datasets in the 2018 Shared Task on
Second Language Acquisition Modeling. We describe our model and discuss the
relevance of the task compared to how it would be set up in a production
environment for personalized education.
| null |
http://arxiv.org/abs/1806.04525v1
|
http://arxiv.org/pdf/1806.04525v1.pdf
|
WS 2018 6
|
[
"Anton Osika",
"Susanna Nilsson",
"Andrii Sydorchuk",
"Faruk Sahin",
"Anders Huss"
] |
[
"Language Acquisition"
] | 2018-06-09T00:00:00 |
https://aclanthology.org/W18-0525
|
https://aclanthology.org/W18-0525.pdf
|
second-language-acquisition-modeling-an-1
| null |
[] |
https://paperswithcode.com/paper/a-taxonomy-and-survey-of-intrusion-detection
|
1806.03517
| null | null |
A Taxonomy of Network Threats and the Effect of Current Datasets on Intrusion Detection Systems
|
As the world moves towards being increasingly dependent on computers and automation, building secure applications, systems and networks is one of the main challenges faced in the current decade. The number of threats that individuals and businesses face is rising exponentially due to the increasing complexity of modern networks and their services. To alleviate the impact of these threats, researchers have proposed numerous solutions for anomaly detection; however, current tools often fail to adapt to ever-changing architectures, associated threats and zero-day attacks. This manuscript aims to pinpoint research gaps and shortcomings of current datasets, their impact on building Network Intrusion Detection Systems (NIDS) and the growing number of sophisticated threats. To this end, this manuscript provides researchers with two key pieces of information: a survey of prominent datasets, analyzing their use and impact on the development of the past decade's Intrusion Detection Systems (IDS), and a taxonomy of network threats and associated tools to carry out these attacks. The manuscript highlights that current IDS research covers only 33.3% of our threat taxonomy. Current datasets demonstrate a clear lack of real-network threats and attack representation, and include a large number of deprecated threats, which together limit the detection accuracy of current machine learning IDS approaches. The unique combination of the taxonomy and the analysis of the datasets provided in this manuscript aims to improve the creation of datasets and the collection of real-world data. As a result, this will improve the efficiency of the next generation IDS and reflect network threats more accurately within new datasets.
|
This manuscript aims to pinpoint research gaps and shortcomings of current datasets, their impact on building Network Intrusion Detection Systems (NIDS) and the growing number of sophisticated threats.
|
https://arxiv.org/abs/1806.03517v2
|
https://arxiv.org/pdf/1806.03517v2.pdf
| null |
[
"Hanan Hindy",
"David Brosset",
"Ethan Bayne",
"Amar Seeam",
"Christos Tachtatzis",
"Robert Atkinson",
"Xavier Bellekens"
] |
[
"Anomaly Detection",
"Intrusion Detection",
"Network Intrusion Detection"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/field-weighted-factorization-machines-for
|
1806.03514
| null | null |
Field-weighted Factorization Machines for Click-Through Rate Prediction in Display Advertising
|
Click-through rate (CTR) prediction is a critical task in online display advertising. The data involved in CTR prediction are typically multi-field categorical data, i.e., every feature is categorical and belongs to one and only one field. One of the interesting characteristics of such data is that features from one field often interact differently with features from different other fields. Recently, Field-aware Factorization Machines (FFMs) have been among the best performing models for CTR prediction by explicitly modeling such differences. However, the number of parameters in FFMs is in the order of feature number times field number, which is unacceptable in real-world production systems. In this paper, we propose Field-weighted Factorization Machines (FwFMs) to model the different feature interactions between different fields in a much more memory-efficient way. Our experimental evaluations show that FwFMs can achieve competitive prediction performance with as few as 4% of the parameters of FFMs. When using the same number of parameters, FwFMs can bring 0.92% and 0.47% AUC lift over FFMs on two real CTR prediction data sets.
|
The data involved in CTR prediction are typically multi-field categorical data, i. e., every feature is categorical and belongs to one and only one field.
|
https://arxiv.org/abs/1806.03514v2
|
https://arxiv.org/pdf/1806.03514v2.pdf
| null |
[
"Junwei Pan",
"Jian Xu",
"Alfonso Lobos Ruiz",
"Wenliang Zhao",
"Shengjun Pan",
"Yu Sun",
"Quan Lu"
] |
[
"Click-Through Rate Prediction",
"Prediction"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
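The FwFM scoring rule described in this abstract adds one learned scalar per field pair on top of an FM's pairwise dot products. A sketch of the prediction for a set of active one-hot features follows; all names (`V`, `R`, `field_of`) are illustrative, following the model equation rather than any released code.

```python
import numpy as np

def fwfm_score(active, V, field_of, R, w, b):
    """FwFM score b + sum_i w_i + sum_{i<j} <v_i, v_j> * r[f(i), f(j)].

    active:   indices of the active categorical features (one per field)
    V:        (n_features, k) embedding per feature
    field_of: (n_features,) field index of each feature
    R:        (n_fields, n_fields) learned field-pair interaction weights
    """
    active = np.asarray(active)
    score = b + w[active].sum()
    for i in range(len(active)):
        for j in range(i + 1, len(active)):
            fi, fj = field_of[active[i]], field_of[active[j]]
            score += (V[active[i]] @ V[active[j]]) * R[fi, fj]
    return score                               # pass through a sigmoid for CTR
```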
https://paperswithcode.com/paper/feature-pyramid-network-for-multi-class-land
|
1806.03510
| null | null |
Feature Pyramid Network for Multi-Class Land Segmentation
|
Semantic segmentation is in demand in satellite imagery processing. Because
of the complex environment, automatic categorization and segmentation of land
cover is a challenging problem. Solving it can help to overcome many obstacles
in urban planning, environmental engineering or natural landscape monitoring.
In this paper, we propose an approach for automatic multi-class land
segmentation based on a fully convolutional neural network of the feature
pyramid network (FPN) family. The network consists of a ResNet50 encoder
pre-trained on ImageNet and a carefully developed decoder. Based on validation
results, the leaderboard score and our own experience, this network shows
reliable results for the DEEPGLOBE - CVPR 2018 land cover classification
sub-challenge. Moreover, the network's moderate memory use allows the whole
training to be performed on GTX 1080 or 1080 Ti video cards, and it makes
fast predictions.
|
Semantic segmentation is in demand in satellite imagery processing.
|
http://arxiv.org/abs/1806.03510v2
|
http://arxiv.org/pdf/1806.03510v2.pdf
| null |
[
"Selim S. Seferbekov",
"Vladimir I. Iglovikov",
"Alexander V. Buslaev",
"Alexey A. Shvets"
] |
[
"Decoder",
"General Classification",
"Land Cover Classification",
"Segmentation",
"Semantic Segmentation"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/on-the-universal-approximation-property-and
|
1803.05391
| null | null |
On the Universal Approximation Property and Equivalence of Stochastic Computing-based Neural Networks and Binary Neural Networks
|
Large-scale deep neural networks are both memory intensive and
computation-intensive, thereby posing stringent requirements on the computing
platforms. Hardware accelerations of deep neural networks have been extensively
investigated in both industry and academia. Specific forms of binary neural
networks (BNNs) and stochastic computing based neural networks (SCNNs) are
particularly appealing to hardware implementations since they can be
implemented almost entirely with binary operations. Despite the obvious
advantages in hardware implementation, these approximate computing techniques
are questioned by researchers in terms of accuracy and universal applicability.
It is also important to understand the relative pros and cons of SCNNs and
BNNs in theory and in actual hardware implementations. In order to address these
concerns, in this paper we prove that the "ideal" SCNNs and BNNs satisfy the
universal approximation property with probability 1 (due to the stochastic
behavior). The proof is conducted by first proving the property for SCNNs from
the strong law of large numbers, and then using SCNNs as a "bridge" to prove
for BNNs. Based on the universal approximation property, we further prove that
SCNNs and BNNs exhibit the same energy complexity. In other words, they have
the same asymptotic energy consumption as the network size grows. We
also provide a detailed analysis of the pros and cons of SCNNs and BNNs for
hardware implementations and conclude that SCNNs are more suitable for
hardware.
| null |
http://arxiv.org/abs/1803.05391v2
|
http://arxiv.org/pdf/1803.05391v2.pdf
| null |
[
"Yanzhi Wang",
"Zheng Zhan",
"Jiayu Li",
"Jian Tang",
"Bo Yuan",
"Liang Zhao",
"Wujie Wen",
"Siyue Wang",
"Xue Lin"
] |
[] | 2018-03-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-fast-and-scalable-joint-estimator-for-1
|
1806.00548
| null | null |
A Fast and Scalable Joint Estimator for Integrating Additional Knowledge in Learning Multiple Related Sparse Gaussian Graphical Models
|
We consider the problem of including additional knowledge in estimating
sparse Gaussian graphical models (sGGMs) from aggregated samples, arising often
in bioinformatics and neuroimaging applications. Previous joint sGGM estimators
either fail to use existing knowledge or cannot scale up to many tasks (large
$K$) under a high-dimensional (large $p$) situation. In this paper, we propose
a novel \underline{J}oint \underline{E}lementary \underline{E}stimator
incorporating additional \underline{K}nowledge (JEEK) to infer multiple related
sparse Gaussian Graphical models from large-scale heterogeneous data. Using
domain knowledge as weights, we design a novel hybrid norm as the minimization
objective to enforce the superposition of two weighted sparsity constraints,
one on the shared interactions and the other on the task-specific structural
patterns. This enables JEEK to elegantly consider various forms of existing
knowledge based on the domain at hand and avoid the need to design
knowledge-specific optimization. JEEK is solved through a fast and entry-wise
parallelizable solution that largely improves the computational efficiency of
the state-of-the-art $O(p^5K^4)$ to $O(p^2K^4)$. We conduct a rigorous
statistical analysis showing that JEEK achieves the same convergence rate
$O(\log(Kp)/n_{tot})$ as the state-of-the-art estimators that are much harder
to compute. Empirically, on multiple synthetic datasets and two real-world
datasets, JEEK significantly outperforms the state of the art in speed while
achieving the same level of prediction accuracy. Available as an R tool @
http://jointnets.org/
|
We consider the problem of including additional knowledge in estimating sparse Gaussian graphical models (sGGMs) from aggregated samples, arising often in bioinformatics and neuroimaging applications.
|
http://arxiv.org/abs/1806.00548v4
|
http://arxiv.org/pdf/1806.00548v4.pdf
|
ICML 2018 7
|
[
"Beilun Wang",
"Arshdeep Sekhon",
"Yanjun Qi"
] |
[
"2k",
"Computational Efficiency",
"Structured Prediction"
] | 2018-06-01T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2327
|
http://proceedings.mlr.press/v80/wang18f/wang18f.pdf
|
a-fast-and-scalable-joint-estimator-for-2
| null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/emulating-dynamic-non-linear-simulators-using
|
1802.07575
| null | null |
Emulating dynamic non-linear simulators using Gaussian processes
|
The dynamic emulation of non-linear deterministic computer codes where the
output is a time series, possibly multivariate, is examined. Such computer
models simulate the evolution of some real-world phenomenon over time, for
example models of the climate or the functioning of the human brain. The models
we are interested in are highly non-linear and exhibit tipping points,
bifurcations and chaotic behaviour. However, each simulation run could be too
time-consuming to perform analyses that require many runs, including
quantifying the variation in model output with respect to changes in the
inputs. Therefore, Gaussian process emulators are used to approximate the
output of the code. To do this, the flow map of the system under study is
emulated over a short time period. Then, it is used in an iterative way to
predict the whole time series. A number of ways are proposed to take into
account the uncertainty of inputs to the emulators, after fixed initial
conditions, and the correlation between them through the time series. The
methodology is illustrated with two examples: the highly non-linear dynamical
systems described by the Lorenz and Van der Pol equations. In both cases, the
predictive performance is relatively high and the measure of uncertainty
provided by the method reflects the extent of predictability in each system.
| null |
http://arxiv.org/abs/1802.07575v4
|
http://arxiv.org/pdf/1802.07575v4.pdf
| null |
[
"Hossein Mohammadi",
"Peter Challenor",
"Marc Goodfellow"
] |
[
"Gaussian Processes",
"Time Series",
"Time Series Analysis"
] | 2018-02-21T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty bounds are baked in with the model.\r\n\r\nImage Source: Gaussian Processes for Machine Learning, C. E. Rasmussen & C. K. I. Williams",
"full_name": "Gaussian Process",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "Gaussian Process",
"source_title": null,
"source_url": null
}
] |
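To ground the emulation strategy, here is the generic GP regression posterior with a squared-exponential kernel; the paper applies exactly this machinery to learn the simulator's one-step flow map and then iterates the one-step prediction to roll out the whole time series. The kernel hyperparameters and noise level below are placeholders.

```python
import numpy as np

def rbf(X1, X2, ell=1.0, sf=1.0):
    """Squared-exponential kernel k(x, x') = sf^2 exp(-||x - x'||^2 / (2 ell^2))."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sf ** 2 * np.exp(-0.5 * sq / ell ** 2)

def gp_posterior(X, y, X_star, noise=1e-4):
    """Posterior mean and variance at test points X_star given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    K_s = rbf(X, X_star)
    mean = K_s.T @ np.linalg.solve(K, y)
    var = np.diag(rbf(X_star, X_star) - K_s.T @ np.linalg.solve(K, K_s))
    return mean, var

# Iterative rollout as in the paper: emulate the one-step flow map
# z_{t+1} = f(z_t) with the GP, then feed each prediction back in as the
# next input (the paper's input-uncertainty propagation is omitted here).
```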
https://paperswithcode.com/paper/exploring-hidden-dimensions-in-parallelizing
|
1802.04924
| null | null |
Exploring Hidden Dimensions in Parallelizing Convolutional Neural Networks
|
The past few years have witnessed growth in the computational requirements
for training deep convolutional neural networks. Current approaches parallelize
training onto multiple devices by applying a single parallelization strategy
(e.g., data or model parallelism) to all layers in a network. Although easy to
reason about, these approaches result in suboptimal runtime performance in
large-scale distributed training, since different layers in a network may
prefer different parallelization strategies. In this paper, we propose
layer-wise parallelism that allows each layer in a network to use an individual
parallelization strategy. We jointly optimize how each layer is parallelized by
solving a graph search problem. Our evaluation shows that layer-wise
parallelism outperforms state-of-the-art approaches by increasing training
throughput, reducing communication costs, achieving better scalability to
multiple GPUs, while maintaining original network accuracy.
| null |
http://arxiv.org/abs/1802.04924v2
|
http://arxiv.org/pdf/1802.04924v2.pdf
| null |
[
"Zhihao Jia",
"Sina Lin",
"Charles R. Qi",
"Alex Aiken"
] |
[] | 2018-02-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/russe2018-a-shared-task-on-word-sense
|
1803.05795
| null | null |
RUSSE'2018: A Shared Task on Word Sense Induction for the Russian Language
|
The paper describes the results of the first shared task on word sense
induction (WSI) for the Russian language. While similar shared tasks were
conducted in the past for some Romance and Germanic languages, we explore the
performance of sense induction and disambiguation methods for a Slavic language
that shares many features with other Slavic languages, such as rich morphology
and virtually free word order. The participants were asked to group contexts of
a given word in accordance with its senses that were not provided beforehand.
For instance, given a word "bank" and a set of contexts for this word, e.g.
"bank is a financial institution that accepts deposits" and "river bank is a
slope beside a body of water", a participant was asked to cluster such contexts
into a number of clusters, not known in advance, corresponding to, in this case,
the "company" and the "area" senses of the word "bank". For the purpose of this
evaluation campaign, we developed three new evaluation datasets based on sense
inventories that have different sense granularity. The contexts in these
datasets were sampled from texts of Wikipedia, the academic corpus of Russian,
and an explanatory dictionary of Russian. Overall, 18 teams participated in the
competition submitting 383 models. Multiple teams managed to substantially
outperform competitive state-of-the-art baselines from the previous years based
on sense embeddings.
| null |
http://arxiv.org/abs/1803.05795v3
|
http://arxiv.org/pdf/1803.05795v3.pdf
| null |
[
"Alexander Panchenko",
"Anastasiya Lopukhina",
"Dmitry Ustalov",
"Konstantin Lopukhin",
"Nikolay Arefyev",
"Alexey Leontyev",
"Natalia Loukachevitch"
] |
[
"Word Sense Induction"
] | 2018-03-15T00:00:00 | null | null | null | null |
[] |
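A minimal sketch of the task format described above: cluster the contexts of an ambiguous word without fixing the number of senses in advance. TF-IDF vectors with threshold-based agglomerative clustering are a deliberately naive stand-in (the threshold 1.2 is an arbitrary choice); the participating systems used far stronger representations such as sense embeddings.

```python
# Minimal sketch: induce senses by clustering contexts of one target word.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

contexts = [
    "bank is a financial institution that accepts deposits",
    "the bank approved the loan application",
    "river bank is a slope beside a body of water",
    "we walked along the grassy bank of the river",
]
X = TfidfVectorizer().fit_transform(contexts).toarray()
labels = AgglomerativeClustering(n_clusters=None,
                                 distance_threshold=1.2).fit_predict(X)
for label, context in zip(labels, contexts):
    print(label, context)
```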
https://paperswithcode.com/paper/generalized-earley-parser-bridging-symbolic
|
1806.03497
| null | null |
Generalized Earley Parser: Bridging Symbolic Grammars and Sequence Data for Future Prediction
|
Future predictions on sequence data (e.g., videos or audios) require the
algorithms to capture non-Markovian and compositional properties of high-level
semantics. Context-free grammars are natural choices to capture such
properties, but traditional grammar parsers (e.g., Earley parser) only take
symbolic sentences as inputs. In this paper, we generalize the Earley parser to
parse sequence data which is neither segmented nor labeled. This generalized
Earley parser integrates a grammar parser with a classifier to find the optimal
segmentation and labels, and makes top-down future predictions. Experiments
show that our method significantly outperforms other approaches for future
human activity prediction.
| null |
http://arxiv.org/abs/1806.03497v1
|
http://arxiv.org/pdf/1806.03497v1.pdf
|
ICML 2018 7
|
[
"Siyuan Qi",
"Baoxiong Jia",
"Song-Chun Zhu"
] |
[
"Activity Prediction",
"Future prediction"
] | 2018-06-09T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1920
|
http://proceedings.mlr.press/v80/qi18a/qi18a.pdf
|
generalized-earley-parser-bridging-symbolic-1
| null |
[] |
https://paperswithcode.com/paper/bridging-the-gap-between-2d-and-3d-organ
|
1804.00392
| null | null |
Bridging the Gap Between 2D and 3D Organ Segmentation with Volumetric Fusion Net
|
There has been a debate on whether to use 2D or 3D deep neural networks for
volumetric organ segmentation. Both 2D and 3D models have their advantages and
disadvantages. In this paper, we present an alternative framework, which trains
2D networks on different viewpoints for segmentation, and builds a 3D
Volumetric Fusion Net (VFN) to fuse the 2D segmentation results. VFN is
relatively shallow and contains far fewer parameters than most 3D networks,
making our framework more efficient at integrating 3D information for
segmentation. We train and test the segmentation and fusion modules
individually, and propose a novel strategy, named cross-cross-augmentation, to
make full use of the limited training data. We evaluate our framework on
several challenging abdominal organs, and verify its superiority in
segmentation accuracy and stability over existing 2D and 3D approaches.
| null |
http://arxiv.org/abs/1804.00392v2
|
http://arxiv.org/pdf/1804.00392v2.pdf
| null |
[
"Yingda Xia",
"Lingxi Xie",
"Fengze Liu",
"Zhuotun Zhu",
"Elliot K. Fishman",
"Alan L. Yuille"
] |
[
"Organ Segmentation",
"Segmentation"
] | 2018-04-02T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/explainable-deterministic-mdps
|
1806.03492
| null | null |
Explainable Deterministic MDPs
|
We present a method for a certain class of Markov Decision Processes (MDPs)
that can relate the optimal policy back to one or more reward sources in the
environment. For a given initial state, without fully computing the value
function, q-value function, or the optimal policy, the algorithm can determine
which rewards will and will not be collected, whether a given reward will be
collected only once or continuously, and which local maximum within the value
function the initial state will ultimately lead to. We demonstrate that the
method can be used to map the state space to identify regions that are
dominated by one reward source and can fully analyze the state space to explain
all actions. We provide a mathematical framework to show how all of this is
possible without first computing the optimal policy or value function.
| null |
http://arxiv.org/abs/1806.03492v1
|
http://arxiv.org/pdf/1806.03492v1.pdf
| null |
[
"Josh Bertram",
"Peng Wei"
] |
[] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/robust-lexical-features-for-improved-neural
|
1806.03489
| null | null |
Robust Lexical Features for Improved Neural Network Named-Entity Recognition
|
Neural network approaches to Named-Entity Recognition reduce the need for
carefully hand-crafted features. While some features do remain in
state-of-the-art systems, lexical features have been mostly discarded, with the
exception of gazetteers. In this work, we show that this is unfair: lexical
features are actually quite useful. We propose to embed words and entity types
into a low-dimensional vector space we train from annotated data produced by
distant supervision thanks to Wikipedia. From this, we compute - offline - a
feature vector representing each word. When used with a vanilla recurrent
neural network model, this representation yields substantial improvements. We
establish a new state-of-the-art F1 score of 87.95 on ONTONOTES 5.0, while
matching state-of-the-art performance with a F1 score of 91.73 on the
over-studied CONLL-2003 dataset.
|
While some features do remain in state-of-the-art systems, lexical features have been mostly discarded, with the exception of gazetteers.
|
http://arxiv.org/abs/1806.03489v1
|
http://arxiv.org/pdf/1806.03489v1.pdf
|
COLING 2018 8
|
[
"Abbas Ghaddar",
"Philippe Langlais"
] |
[
"named-entity-recognition",
"Named Entity Recognition",
"Named Entity Recognition (NER)"
] | 2018-06-09T00:00:00 |
https://aclanthology.org/C18-1161
|
https://aclanthology.org/C18-1161.pdf
|
robust-lexical-features-for-improved-neural-1
| null |
[] |
https://paperswithcode.com/paper/learning-to-grasp-from-a-single-demonstration
|
1806.03486
| null | null |
Learning to Grasp from a Single Demonstration
|
Learning-based approaches for robotic grasping using visual sensors typically
require collecting a large size dataset, either manually labeled or by many
trial and errors of a robotic manipulator in the real or simulated world. We
propose a simpler learning-from-demonstration approach that is able to detect
the object to grasp from merely a single demonstration using a convolutional
neural network we call GraspNet. In order to increase robustness and decrease
the training time even further, we leverage data from previous demonstrations
to quickly fine-tune a GraspNet for each new demonstration. We present some
preliminary results on a grasping experiment with the Franka Panda cobot for
which we can train a GraspNet with only hundreds of train iterations.
| null |
http://arxiv.org/abs/1806.03486v1
|
http://arxiv.org/pdf/1806.03486v1.pdf
| null |
[
"Pieter Van Molle",
"Tim Verbelen",
"Elias De Coninck",
"Cedric De Boom",
"Pieter Simoens",
"Bart Dhoedt"
] |
[
"Robotic Grasping"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dir-st2-delineation-of-imprecise-regions
|
1806.03482
| null | null |
DIR-ST$^2$: Delineation of Imprecise Regions Using Spatio--Temporal--Textual Information
|
An imprecise region is referred to as a geographical area without a
clearly-defined boundary in the literature. Previous clustering-based
approaches exploit spatial information to find such regions. However, the prior
studies suffer from the following two problems: the subjectivity in selecting
clustering parameters and the inclusion of a large portion of the undesirable
region (i.e., a large number of noise points). To overcome these problems, we
present DIR-ST$^2$, a novel framework for delineating an imprecise region by
iteratively performing density-based clustering, namely DBSCAN, along with not
only spatio--textual information but also temporal information on social media.
Specifically, we aim at finding a proper radius of a circle used in the
iterative DBSCAN process by gradually reducing the radius for each iteration in
which the temporal information acquired from all resulting clusters are
leveraged. Then, we propose an efficient and automated algorithm delineating
the imprecise region via hierarchical clustering. Experiment results show that
by virtue of the significant noise reduction in the region, our DIR-ST$^2$
method outperforms the state-of-the-art approach employing one-class support
vector machine in terms of the $\mathcal{F}_1$ score from comparison with
precisely-defined regions regarded as a ground truth, and returns apparently
better delineation of imprecise regions. The computational complexity of
DIR-ST$^2$ is also analytically and numerically shown.
| null |
http://arxiv.org/abs/1806.03482v1
|
http://arxiv.org/pdf/1806.03482v1.pdf
| null |
[
"Cong Tran",
"Won-Yong Shin",
"Sang-Il Choi"
] |
[
"Clustering"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
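The iterative density-based step described above can be sketched as follows: run DBSCAN with a gradually shrinking radius, keeping only the points that survive as cluster members. The fixed radius schedule and synthetic points below are assumptions; the paper derives the schedule and stopping rule from temporal information attached to the social media posts.

```python
# Minimal sketch: iterative DBSCAN with a shrinking radius eps.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
region = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2))  # dense region
noise = rng.uniform(-3, 3, size=(100, 2))                      # background
pts = np.vstack([region, noise])

for eps in [1.0, 0.6, 0.3]:                 # gradually reduce the radius
    labels = DBSCAN(eps=eps, min_samples=10).fit_predict(pts)
    pts = pts[labels != -1]                 # keep only clustered points
    print(f"eps={eps}: {len(pts)} points survive")
```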
https://paperswithcode.com/paper/geometry-score-a-method-for-comparing
|
1802.02664
| null | null |
Geometry Score: A Method For Comparing Generative Adversarial Networks
|
One of the biggest challenges in the research of generative adversarial
networks (GANs) is assessing the quality of generated samples and detecting
various levels of mode collapse. In this work, we construct a novel measure of
performance of a GAN by comparing geometrical properties of the underlying data
manifold and the generated one, which provides both qualitative and
quantitative means for evaluation. Our algorithm can be applied to datasets of
an arbitrary nature and is not limited to visual data. We test the obtained
metric on various real-life models and datasets and demonstrate that our method
provides new insights into properties of GANs.
|
One of the biggest challenges in the research of generative adversarial networks (GANs) is assessing the quality of generated samples and detecting various levels of mode collapse.
|
http://arxiv.org/abs/1802.02664v3
|
http://arxiv.org/pdf/1802.02664v3.pdf
|
ICML 2018 7
|
[
"Valentin Khrulkov",
"Ivan Oseledets"
] |
[] | 2018-02-07T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2153
|
http://proceedings.mlr.press/v80/khrulkov18a/khrulkov18a.pdf
|
geometry-score-a-method-for-comparing-1
| null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/holographic-automata-for-ambient-immersive-a
|
1806.05108
| null | null |
Holographic Automata for Ambient Immersive A. I. via Reservoir Computing
|
We prove the existence of a semilinear representation of Cellular Automata
(CA) with the introduction of multiple convolution kernels. Examples of the
technique are presented for rules akin to the "edge-of-chaos" including the
Turing universal rule 110 for further utilization in the area of reservoir
computing. We also examine the significance of their dual representation on a
frequency or wavelength domain as a superposition of plane waves for
distributed computing applications including a new proposal for a "Hologrid"
that could be realized with present Wi-Fi, Li-Fi technologies.
| null |
http://arxiv.org/abs/1806.05108v2
|
http://arxiv.org/pdf/1806.05108v2.pdf
| null |
[
"Theophanes E. Raptis"
] |
[
"Distributed Computing"
] | 2018-06-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
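The semilinear (convolutional) representation of a cellular automaton mentioned above can be shown concretely: a single convolution kernel encodes each three-cell neighbourhood as an integer, and a lookup table applies elementary rule 110. The wrap-around boundary and the single-cell seed are illustrative choices.

```python
# Minimal sketch: because np.convolve flips its kernel, the kernel [1, 2, 4]
# scores each neighbourhood as 4*left + 2*centre + 1*right; a lookup table
# built from the rule number then gives the next state.
import numpy as np

rule = 110
table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)

state = np.zeros(64, dtype=np.uint8)
state[-1] = 1                                   # single-cell seed
kernel = np.array([1, 2, 4])

for _ in range(20):
    padded = np.concatenate([state[-1:], state, state[:1]])   # wrap edges
    codes = np.convolve(padded, kernel, mode="valid")         # 0..7 per cell
    state = table[codes]
    print("".join(".#"[c] for c in state))
```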
https://paperswithcode.com/paper/orthogonal-random-forest-for-causal-inference
|
1806.03467
| null | null |
Orthogonal Random Forest for Causal Inference
|
We propose the orthogonal random forest, an algorithm that combines Neyman-orthogonality to reduce sensitivity with respect to estimation error of nuisance parameters with generalized random forests (Athey et al., 2017)--a flexible non-parametric method for statistical estimation of conditional moment models using random forests. We provide a consistency rate and establish asymptotic normality for our estimator. We show that under mild assumptions on the consistency rate of the nuisance estimator, we can achieve the same error rate as an oracle with a priori knowledge of these nuisance parameters. We show that when the nuisance functions have a locally sparse parametrization, then a local $\ell_1$-penalized regression achieves the required rate. We apply our method to estimate heterogeneous treatment effects from observational data with discrete treatments or continuous treatments, and we show that, unlike prior work, our method provably allows one to control for a high-dimensional set of variables under standard sparsity conditions. We also provide a comprehensive empirical evaluation of our algorithm on both synthetic and real data.
|
We provide a consistency rate and establish asymptotic normality for our estimator.
|
https://arxiv.org/abs/1806.03467v4
|
https://arxiv.org/pdf/1806.03467v4.pdf
| null |
[
"Miruna Oprescu",
"Vasilis Syrgkanis",
"Zhiwei Steven Wu"
] |
[
"Causal Inference"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/robust-semantic-segmentation-with-ladder
|
1806.03465
| null | null |
Robust Semantic Segmentation with Ladder-DenseNet Models
|
We present semantic segmentation experiments with a model capable of performing
predictions on four benchmark datasets: Cityscapes, ScanNet, WildDash and
KITTI. We employ a ladder-style convolutional architecture featuring a modified
DenseNet-169 model in the downsampling datapath, and only one convolution in
each stage of the upsampling datapath. Due to limited computing resources, we
perform the training only on Cityscapes Fine train+val, ScanNet train, WildDash
val and KITTI train. We evaluate the trained model on the test subsets of the
four benchmarks in concordance with the guidelines of the Robust Vision
Challenge ROB 2018. The performed experiments reveal several interesting
findings which we describe and discuss.
| null |
http://arxiv.org/abs/1806.03465v1
|
http://arxiv.org/pdf/1806.03465v1.pdf
| null |
[
"Ivan Krešo",
"Marin Oršić",
"Petra Bevandić",
"Siniša Šegvić"
] |
[
"Semantic Segmentation"
] | 2018-06-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/tapas-tricks-to-accelerate-encrypted
|
1806.03461
| null | null |
TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service
|
Machine learning methods are widely used for a variety of prediction
problems. \emph{Prediction as a service} is a paradigm in which service
providers with technological expertise and computational resources may perform
predictions for clients. However, data privacy severely restricts the
applicability of such services, unless measures to keep client data private
(even from the service provider) are designed. Equally important is to minimize
the amount of computation and communication required between client and server.
Fully homomorphic encryption offers a possible way out, whereby clients may
encrypt their data, and on which the server may perform arithmetic
computations. The main drawback of using fully homomorphic encryption is the
amount of time required to evaluate large machine learning models on encrypted
data. We combine ideas from the machine learning literature, particularly work
on binarization and sparsification of neural networks, together with
algorithmic tools to speed up and parallelize computation using encrypted data.
|
The main drawback of using fully homomorphic encryption is the amount of time required to evaluate large machine learning models on encrypted data.
|
http://arxiv.org/abs/1806.03461v1
|
http://arxiv.org/pdf/1806.03461v1.pdf
|
ICML 2018 7
|
[
"Amartya Sanyal",
"Matt J. Kusner",
"Adrià Gascón",
"Varun Kanade"
] |
[
"BIG-bench Machine Learning",
"Binarization",
"Prediction"
] | 2018-06-09T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2140
|
http://proceedings.mlr.press/v80/sanyal18a/sanyal18a.pdf
|
tapas-tricks-to-accelerate-encrypted-1
| null |
[] |
https://paperswithcode.com/paper/a-preliminary-exploration-of-floating-point
|
1806.03455
| null | null |
A Preliminary Exploration of Floating Point Grammatical Evolution
|
Current GP frameworks are highly effective on a range of real and simulated
benchmarks. However, due to the high dimensionality of the genotypes for GP,
the task of visualising the fitness landscape for GP search can be difficult.
This paper describes a new framework: Floating Point Grammatical Evolution
(FP-GE) which uses a single floating point genotype to encode an individual
program. This encoding permits easier visualisation of the fitness landscape
for arbitrary problems by providing a way to map fitness against a single
dimension. The new framework also makes it trivially easy to apply continuous
search algorithms, such as Differential Evolution, to the search problem. In
this work, the FP-GE framework is tested against several regression problems,
visualising the search landscape for these and comparing different search
meta-heuristics.
| null |
http://arxiv.org/abs/1806.03455v1
|
http://arxiv.org/pdf/1806.03455v1.pdf
| null |
[
"Brad Alexander"
] |
[
"regression"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-learning-topological-invariants-of-band
|
1805.10503
| null | null |
Deep Learning Topological Invariants of Band Insulators
|
In this work we design and train deep neural networks to predict topological
invariants for one-dimensional four-band insulators in AIII class whose
topological invariant is the winding number, and two-dimensional two-band
insulators in A class whose topological invariant is the Chern number. Given
Hamiltonians in the momentum space as the input, neural networks can predict
topological invariants for both classes with accuracy close to or higher than
90%, even for Hamiltonians whose invariants are beyond the training data set.
Despite the complexity of the neural network, we find that the output of
certain intermediate hidden layers resembles either the winding angle for
models in AIII class or the solid angle (Berry curvature) for models in A
class, indicating that neural networks essentially capture the mathematical
formula of topological invariants. Our work demonstrates the ability of neural
networks to predict topological invariants for complicated models with local
Hamiltonians as the only input, and offers an example that even a deep neural
network is understandable.
| null |
http://arxiv.org/abs/1805.10503v2
|
http://arxiv.org/pdf/1805.10503v2.pdf
| null |
[
"Ning Sun",
"Jinmin Yi",
"Pengfei Zhang",
"Huitao Shen",
"Hui Zhai"
] |
[
"Deep Learning"
] | 2018-05-26T00:00:00 | null | null | null | null |
[] |
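The first target quantity above, the winding number of a 1D two-band Hamiltonian H(k) = h_x(k) σ_x + h_y(k) σ_y, can be computed directly as the accumulated angle of (h_x, h_y) around the Brillouin zone, which is what the intermediate layers are reported to approximate. The SSH-like h(k) below is an illustrative choice, not the paper's training distribution.

```python
# Minimal sketch: winding number as the total change of the winding angle.
import numpy as np

def winding_number(hx, hy):
    angles = np.unwrap(np.angle(hx + 1j * hy))
    return int(round((angles[-1] - angles[0]) / (2.0 * np.pi)))

k = np.linspace(0.0, 2.0 * np.pi, 1001)
for t in [0.5, 2.0]:                       # intra-cell hopping strength
    hx, hy = t + np.cos(k), np.sin(k)      # winds once iff t < 1
    print(t, winding_number(hx, hy))
```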
https://paperswithcode.com/paper/bounding-and-counting-linear-regions-of-deep
|
1711.02114
| null |
Sy-tszZRZ
|
Bounding and Counting Linear Regions of Deep Neural Networks
|
We investigate the complexity of deep neural networks (DNN) that represent
piecewise linear (PWL) functions. In particular, we study the number of linear
regions, i.e. pieces, that a PWL function represented by a DNN can attain, both
theoretically and empirically. We present (i) tighter upper and lower bounds
for the maximum number of linear regions on rectifier networks, which are exact
for inputs of dimension one; (ii) a first upper bound for multi-layer maxout
networks; and (iii) a first method to perform exact enumeration or counting of
the number of regions by modeling the DNN with a mixed-integer linear
formulation. These bounds come from leveraging the dimension of the space
defining each linear region. The results also indicate that a deep rectifier
network can only have more linear regions than every shallow counterpart with
the same number of neurons if that number exceeds the dimension of the input.
| null |
http://arxiv.org/abs/1711.02114v4
|
http://arxiv.org/pdf/1711.02114v4.pdf
| null |
[
"Thiago Serra",
"Christian Tjandraatmadja",
"Srikumar Ramalingam"
] |
[] | 2017-11-06T00:00:00 |
https://openreview.net/forum?id=Sy-tszZRZ
|
https://openreview.net/pdf?id=Sy-tszZRZ
|
bounding-and-counting-linear-regions-of-deep-2
| null |
[] |
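A quick empirical counterpart to the counting result above: sample inputs of a small random ReLU network and count distinct activation patterns, each of which indexes one linear region. Sampling only lower-bounds the count; the paper's exact enumeration uses a mixed-integer formulation.

```python
# Minimal sketch: empirical lower bound on the number of linear regions of a
# random two-hidden-layer ReLU network, via distinct activation patterns.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)

def pattern(x):
    h1 = W1 @ x + b1
    h2 = W2 @ np.maximum(h1, 0.0) + b2
    return tuple((h1 > 0).tolist() + (h2 > 0).tolist())

samples = rng.uniform(-5, 5, size=(100_000, 2))
regions = {pattern(x) for x in samples}
print("linear regions found (lower bound):", len(regions))
```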
https://paperswithcode.com/paper/a-hybrid-econometric-machine-learning
|
1806.04517
| null | null |
A hybrid econometric-machine learning approach for relative importance analysis: Prioritizing food policy
|
A measure of relative importance of variables is often desired by researchers when the explanatory aspects of econometric methods are of interest. To this end, the author briefly reviews the limitations of conventional econometrics in constructing a reliable measure of variable importance. The author highlights the relative stature of explanatory and predictive analysis in economics and the emergence of fruitful collaborations between econometrics and computer science. Learning lessons from both, the author proposes a hybrid approach based on conventional econometrics and advanced machine learning (ML) algorithms, which are otherwise used in predictive analytics. The purpose of this article is two-fold: to propose a hybrid approach to assess relative importance and demonstrate its applicability in addressing policy priority issues with an example of food inflation in India, followed by a broader aim to introduce the possibility of conflation of ML and conventional econometrics to an audience of researchers in economics and social sciences, in general.
| null |
https://arxiv.org/abs/1806.04517v3
|
https://arxiv.org/pdf/1806.04517v3.pdf
| null |
[
"Akash Malhotra"
] |
[
"BIG-bench Machine Learning",
"Econometrics"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
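The ML half of the hybrid approach above can be sketched with a random forest plus permutation importance to rank explanatory variables. The data below are synthetic stand-ins, not the Indian food-inflation series analyzed in the paper, and permutation importance is only one of several importance measures that could be plugged in.

```python
# Minimal sketch: rank explanatory variables by permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))               # e.g. fuel, rainfall, wages
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, mean in zip(["fuel", "rainfall", "wages"], result.importances_mean):
    print(f"{name}: {mean:.3f}")
```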
https://paperswithcode.com/paper/abstaining-classification-when-error-costs
|
1806.03445
| null | null |
Abstaining Classification When Error Costs are Unequal and Unknown
|
Abstaining classification aims to reject, rather than classify, the easily
misclassified examples, so it is an effective approach to increase
classification reliability and reduce the misclassification risk in
cost-sensitive applications. In such applications, different types of errors
(false positive or false negative) usually have unequal costs. And the error costs, which
depend on specific applications, are usually unknown. However, current
abstaining classification methods either do not distinguish the error types, or
they need the cost information of misclassification and rejection, which are
realized in the framework of cost-sensitive learning. In this paper, we propose
a bounded-abstention method with two constraints of reject rates (BA2), which
performs abstaining classification when error costs are unequal and unknown.
BA2 aims to obtain the optimal area under the ROC curve (AUC) by constraining
the reject rates of the positive and negative classes respectively.
Specifically, we construct the receiver operating characteristic (ROC) curve,
and stepwise search the optimal reject thresholds from both ends of the curve,
until the two constraints are satisfied. Experimental results show that BA2
obtains higher AUC and lower total cost than the state-of-the-art abstaining
classification methods. Meanwhile, BA2 achieves controllable reject rates of
the positive and negative classes.
| null |
http://arxiv.org/abs/1806.03445v2
|
http://arxiv.org/pdf/1806.03445v2.pdf
| null |
[
"Hongjiao Guan",
"Yingtao Zhang",
"H. D. Cheng",
"Xianglong Tang"
] |
[
"Classification",
"General Classification"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
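A simplified reading of the BA2 construction above: choose two score thresholds, one from each end of the score distribution, so that the per-class reject rates stay within their bounds, and abstain on scores between them. The quantile shortcut and synthetic scores below are assumptions; the paper searches stepwise along the ROC curve.

```python
# Minimal sketch: two-threshold abstention with per-class reject-rate bounds.
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=2000)
scores = rng.normal(loc=y.astype(float), scale=0.8)   # higher => class 1

max_reject_pos = max_reject_neg = 0.2
t_pos = np.quantile(scores[y == 1], max_reject_pos)        # risky positives
t_neg = np.quantile(scores[y == 0], 1.0 - max_reject_neg)  # risky negatives
lo, hi = sorted([t_pos, t_neg])

reject = (scores > lo) & (scores < hi)
pred = (scores >= hi).astype(int)
keep = ~reject
print(f"reject rate (pos): {reject[y == 1].mean():.1%}")
print(f"reject rate (neg): {reject[y == 0].mean():.1%}")
print(f"accuracy on accepted: {(pred[keep] == y[keep]).mean():.1%}")
```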
https://paperswithcode.com/paper/hierarchical-imitation-and-reinforcement
|
1803.00590
| null | null |
Hierarchical Imitation and Reinforcement Learning
|
We study how to effectively leverage expert feedback to learn sequential
decision-making policies. We focus on problems with sparse rewards and long
time horizons, which typically pose significant challenges in reinforcement
learning. We propose an algorithmic framework, called hierarchical guidance,
that leverages the hierarchical structure of the underlying problem to
integrate different modes of expert interaction. Our framework can incorporate
different combinations of imitation learning (IL) and reinforcement learning
(RL) at different levels, leading to dramatic reductions in both expert effort
and cost of exploration. Using long-horizon benchmarks, including Montezuma's
Revenge, we demonstrate that our approach can learn significantly faster than
hierarchical RL, and be significantly more label-efficient than standard IL. We
also theoretically analyze labeling cost for certain instantiations of our
framework.
| null |
http://arxiv.org/abs/1803.00590v2
|
http://arxiv.org/pdf/1803.00590v2.pdf
|
ICML 2018 7
|
[
"Hoang M. Le",
"Nan Jiang",
"Alekh Agarwal",
"Miroslav Dudík",
"Yisong Yue",
"Hal Daumé III"
] |
[
"Decision Making",
"Imitation Learning",
"Montezuma's Revenge",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Sequential Decision Making"
] | 2018-03-01T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2290
|
http://proceedings.mlr.press/v80/le18a/le18a.pdf
|
hierarchical-imitation-and-reinforcement-1
| null |
[] |
https://paperswithcode.com/paper/towards-multifocal-displays-with-dense-focal
|
1805.10664
| null | null |
Towards Multifocal Displays with Dense Focal Stacks
|
We present a virtual reality display that is capable of generating a dense
collection of depth/focal planes. This is achieved by driving a focus-tunable
lens to sweep a range of focal lengths at a high frequency and, subsequently,
tracking the focal length precisely at microsecond time resolutions using an
optical module. Precise tracking of the focal length, coupled with a high-speed
display, enables our lab prototype to generate 1600 focal planes per second.
This enables a novel first-of-its-kind virtual reality multifocal display that
is capable of resolving the vergence-accommodation conflict endemic to today's
displays.
| null |
http://arxiv.org/abs/1805.10664v3
|
http://arxiv.org/pdf/1805.10664v3.pdf
| null |
[
"Jen-Hao Rick Chang",
"B. V. K. Vijaya Kumar",
"Aswin C. Sankaranarayanan"
] |
[] | 2018-05-27T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/algorithmic-causal-deconvolution-of
|
1802.09904
| null | null |
Algorithmic Causal Deconvolution of Intertwined Programs and Networks by Generative Mechanism
|
Complex data usually results from the interaction of objects produced by different generating mechanisms. Here we introduce a universal, unsupervised and parameter-free model-oriented approach, based upon the seminal concept of algorithmic probability, that decomposes an observation into its most likely algorithmic generative sources. Our approach uses a causal calculus to infer model representations. We demonstrate its ability to deconvolve interacting mechanisms regardless of whether the resultant objects are strings, space-time evolution diagrams, images or networks. While this is mostly a conceptual contribution and a novel framework, we provide numerical evidence evaluating the ability of our methods to separate data from observations produced by discrete dynamical systems such as cellular automata and complex networks. We think that these separating techniques can contribute to tackling the challenge of causation, thus complementing other statistically oriented approaches.
| null |
https://arxiv.org/abs/1802.09904v8
|
https://arxiv.org/pdf/1802.09904v8.pdf
| null |
[
"Hector Zenil",
"Narsis A. Kiani",
"Allan A. Zea",
"Jesper Tegnér"
] |
[] | 2018-02-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/hierarchical-clustering-with-prior-knowledge
|
1806.03432
| null | null |
Hierarchical Clustering with Prior Knowledge
|
Hierarchical clustering is a class of algorithms that seeks to build a
hierarchy of clusters. It has been the dominant approach to constructing
embedded classification schemes since it outputs dendrograms, which capture the
hierarchical relationship among members at all levels of granularity,
simultaneously. Being greedy in the algorithmic sense, a hierarchical
clustering partitions data at every step solely based on a similarity /
dissimilarity measure. The clustering results oftentimes depend on not only the
distribution of the underlying data, but also the choice of dissimilarity
measure and the clustering algorithm. In this paper, we propose a method to
incorporate prior domain knowledge about entity relationship into the
hierarchical clustering. Specifically, we use a distance function in
ultrametric space to encode the external ontological information. We show that
popular linkage-based algorithms can faithfully recover the encoded structure.
Similar to some regularized machine learning techniques, we add this distance
as a penalty term to the original pairwise distance to regulate the final
structure of the dendrogram. As a case study, we applied this method on real
data in the building of a customer behavior based product taxonomy for an
Amazon service, leveraging the information from a larger Amazon-wide browse
structure. The method is useful when one wants to leverage the relational
information from external sources, or the data used to generate the distance
matrix is noisy and sparse. Our work falls in the category of semi-supervised
or constrained clustering.
| null |
http://arxiv.org/abs/1806.03432v3
|
http://arxiv.org/pdf/1806.03432v3.pdf
| null |
[
"Xiaofei Ma",
"Satya Dhavala"
] |
[
"Clustering",
"Constrained Clustering"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
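The regularization idea above reduces to a small recipe: add an ultrametric-style penalty, encoding prior group membership, to the pairwise distance matrix before running a standard linkage algorithm. The two-level 0/1 penalty and the weight lam below are illustrative assumptions about the external ontology.

```python
# Minimal sketch: penalize cross-group distances, then cluster as usual.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
prior_groups = np.array([0, 0, 0, 1, 1, 1])      # prior group membership

D = squareform(pdist(X))                          # data-driven distances
U = (prior_groups[:, None] != prior_groups[None, :]).astype(float)
lam = 5.0                                         # strength of the prior
Z = linkage(squareform(D + lam * U, checks=False), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))     # tends to follow the prior
```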
https://paperswithcode.com/paper/word-familiarity-and-frequency
|
1806.03431
| null | null |
Word Familiarity and Frequency
|
Word frequency is assumed to correlate with word familiarity, but the
strength of this correlation has not been thoroughly investigated. In this
paper, we report on our analysis of the correlation between a word familiarity
rating list obtained through a psycholinguistic experiment and the
log-frequency obtained from various corpora of different kinds and sizes (up to
the terabyte scale) for English and Japanese. Major findings are threefold:
First, for a given corpus, familiarity is necessary for a word to achieve high
frequency, but familiar words are not necessarily frequent. Second, correlation
increases with the corpus data size. Third, a corpus of spoken language
correlates better than one of written language. These findings suggest that
cognitive familiarity ratings are correlated to frequency, but more highly to
that of spoken rather than written language.
| null |
http://arxiv.org/abs/1806.03431v1
|
http://arxiv.org/pdf/1806.03431v1.pdf
| null |
[
"Kumiko Tanaka-Ishii",
"Hiroshi Terada"
] |
[] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
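The core analysis above is a correlation between log corpus frequency and familiarity ratings; a toy version with invented numbers standing in for the psycholinguistic rating list and the terabyte-scale corpus counts:

```python
# Minimal sketch: rank correlation of log-frequency with familiarity ratings.
import numpy as np
from scipy.stats import spearmanr

words = ["the", "house", "cat", "quark", "sesquipedalian"]
corpus_freq = np.array([1_000_000, 50_000, 20_000, 300, 5])
familiarity = np.array([7.0, 6.8, 6.9, 3.1, 1.5])   # e.g. on a 1-7 scale

rho, p = spearmanr(np.log(corpus_freq), familiarity)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```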
https://paperswithcode.com/paper/an-encoder-decoder-framework-translating
|
1711.06061
| null | null |
An Encoder-Decoder Framework Translating Natural Language to Database Queries
|
Machine translation is going through a radical revolution, driven by the
explosive development of deep learning techniques using Convolutional Neural
Network (CNN) and Recurrent Neural Network (RNN). In this paper, we consider a
special case in machine translation problems, targeting to convert natural
language into Structured Query Language (SQL) for data retrieval over
relational database. Although generic CNN and RNN learn the grammar structure
of SQL when trained with sufficient samples, the accuracy and training
efficiency of the model could be dramatically improved, when the translation
model is deeply integrated with the grammar rules of SQL. We present a new
encoder-decoder framework, with a suite of new approaches, including new
semantic features fed into the encoder, grammar-aware states injected into the
memory of decoder, as well as recursive state management for sub-queries. These
techniques help the neural network better focus on understanding semantics of
operations in natural language and save the efforts on SQL grammar learning.
The empirical evaluation on real-world databases and queries shows that our
approach outperforms the state-of-the-art solution by a significant margin.
| null |
http://arxiv.org/abs/1711.06061v2
|
http://arxiv.org/pdf/1711.06061v2.pdf
| null |
[
"Ruichu Cai",
"Boyan Xu",
"Xiaoyan Yang",
"Zhenjie Zhang",
"Zijian Li",
"Zhihao Liang"
] |
[
"Decoder",
"Machine Translation",
"Management",
"Retrieval",
"Translation"
] | 2017-11-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/speech2vec-a-sequence-to-sequence-framework
|
1803.08976
| null | null |
Speech2Vec: A Sequence-to-Sequence Framework for Learning Word Embeddings from Speech
|
In this paper, we propose a novel deep neural network architecture,
Speech2Vec, for learning fixed-length vector representations of audio segments
excised from a speech corpus, where the vectors contain semantic information
pertaining to the underlying spoken words, and are close to other vectors in
the embedding space if their corresponding underlying spoken words are
semantically similar. The proposed model can be viewed as a speech version of
Word2Vec. Its design is based on an RNN Encoder-Decoder framework, and borrows
the methodology of skipgrams or continuous bag-of-words for training. Learning
word embeddings directly from speech enables Speech2Vec to make use of the
semantic information carried by speech that does not exist in plain text. The
learned word embeddings are evaluated and analyzed on 13 widely used word
similarity benchmarks, and outperform word embeddings learned by Word2Vec from
the transcriptions.
|
In this paper, we propose a novel deep neural network architecture, Speech2Vec, for learning fixed-length vector representations of audio segments excised from a speech corpus, where the vectors contain semantic information pertaining to the underlying spoken words, and are close to other vectors in the embedding space if their corresponding underlying spoken words are semantically similar.
|
http://arxiv.org/abs/1803.08976v2
|
http://arxiv.org/pdf/1803.08976v2.pdf
| null |
[
"Yu-An Chung",
"James Glass"
] |
[
"Decoder",
"Learning Word Embeddings",
"Word Embeddings",
"Word Similarity"
] | 2018-03-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/efficient-optimization-algorithms-for-robust
|
1806.03430
| null | null |
Efficient Optimization Algorithms for Robust Principal Component Analysis and Its Variants
|
Robust PCA has drawn significant attention in the last decade due to its
success in numerous application domains, ranging from bio-informatics,
statistics, and machine learning to image and video processing in computer
vision. Robust PCA and its variants such as sparse PCA and stable PCA can be
formulated as optimization problems with exploitable special structures. Many
specialized efficient optimization methods have been proposed to solve robust
PCA and related problems. In this paper we review existing optimization methods
for solving convex and nonconvex relaxations/variants of robust PCA, discuss
their advantages and disadvantages, and elaborate on their convergence
behaviors. We also provide some insights for possible future research
directions including new algorithmic frameworks that might be suitable for
implementing on multi-processor setting to handle large-scale problems.
| null |
http://arxiv.org/abs/1806.03430v1
|
http://arxiv.org/pdf/1806.03430v1.pdf
| null |
[
"Shiqian Ma",
"Necdet Serhat Aybat"
] |
[] | 2018-06-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Principle Components Analysis (PCA)** is an unsupervised method primary used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decomposition on the covariance matrix. The results of PCA provide a low-dimensional picture of the structure of the data and the leading (uncorrelated) latent factors determining variation in the data.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg)",
"full_name": "Principal Components Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "PCA",
"source_title": null,
"source_url": null
}
] |
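One classic solver family reviewed above is principal component pursuit for robust PCA, alternating singular-value thresholding for the low-rank part with soft-thresholding for the sparse part. The sketch below is a plain ADMM with textbook parameter choices (lambda = 1/sqrt(max(m, n)), mu = m*n/(4*||M||_1)), not any specific algorithm from the survey.

```python
# Minimal sketch: robust PCA (L + S decomposition) via ADMM.
import numpy as np

def shrink(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, iters=300):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(M).sum())
    L, S, Y = (np.zeros_like(M) for _ in range(3))
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)         # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)      # sparse update
        Y = Y + mu * (M - L - S)                  # dual update
    return L, S

rng = np.random.default_rng(0)
L0 = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 50))     # rank 2
S0 = (rng.random((50, 50)) < 0.05) * rng.normal(scale=10, size=(50, 50))
L, S = rpca(L0 + S0)
print("relative recovery error:", np.linalg.norm(L - L0) / np.linalg.norm(L0))
```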
https://paperswithcode.com/paper/efficient-and-accurate-mri-super-resolution
|
1803.01417
| null | null |
Efficient and Accurate MRI Super-Resolution using a Generative Adversarial Network and 3D Multi-Level Densely Connected Network
|
High-resolution (HR) magnetic resonance images (MRI) provide detailed
anatomical information important for clinical application and quantitative
image analysis. However, HR MRI conventionally comes at the cost of longer scan
time, smaller spatial coverage, and lower signal-to-noise ratio (SNR). Recent
studies have shown that single image super-resolution (SISR), a technique to
recover HR details from one single low-resolution (LR) input image, could
provide high-quality image details with the help of advanced deep convolutional
neural networks (CNN). However, deep neural networks consume memory heavily and
run slowly, especially in 3D settings. In this paper, we propose a novel 3D
neural network design, namely a multi-level densely connected super-resolution
network (mDCSRN) with generative adversarial network (GAN)-guided training. The
mDCSRN is fast to train and to run at inference time, and the GAN promotes
realistic output that is hardly distinguishable from original HR images. Our results from experiments on
a dataset with 1,113 subjects show that our new architecture beats other
popular deep learning methods in recovering 4x resolution-downgraded images
and runs 6x faster.
|
High-resolution (HR) magnetic resonance images (MRI) provide detailed anatomical information important for clinical application and quantitative image analysis.
|
http://arxiv.org/abs/1803.01417v3
|
http://arxiv.org/pdf/1803.01417v3.pdf
| null |
[
"Yuhua Chen",
"Feng Shi",
"Anthony G. Christodoulou",
"Zhengwei Zhou",
"Yibin Xie",
"Debiao Li"
] |
[
"Generative Adversarial Network",
"Image Super-Resolution",
"Super-Resolution"
] | 2018-03-04T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/learning-continuous-hierarchies-in-the
|
1806.03417
| null | null |
Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic Geometry
|
We are concerned with the discovery of hierarchical relationships from
large-scale unstructured similarity scores. For this purpose, we study
different models of hyperbolic space and find that learning embeddings in the
Lorentz model is substantially more efficient than in the Poincaré-ball
model. We show that the proposed approach allows us to learn high-quality
embeddings of large taxonomies which yield improvements over Poincaré
embeddings, especially in low dimensions. Lastly, we apply our model to
discover hierarchies in two real-world datasets: we show that an embedding in
hyperbolic space can reveal important aspects of a company's organizational
structure as well as reveal historical relationships between language families.
|
We are concerned with the discovery of hierarchical relationships from large-scale unstructured similarity scores.
|
http://arxiv.org/abs/1806.03417v2
|
http://arxiv.org/pdf/1806.03417v2.pdf
|
ICML 2018 7
|
[
"Maximilian Nickel",
"Douwe Kiela"
] |
[] | 2018-06-09T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2370
|
http://proceedings.mlr.press/v80/nickel18a/nickel18a.pdf
|
learning-continuous-hierarchies-in-the-1
| null |
[] |
https://paperswithcode.com/paper/analysis-of-minimax-error-rate-for
|
1802.04551
| null | null |
Analysis of Minimax Error Rate for Crowdsourcing and Its Application to Worker Clustering Model
|
While crowdsourcing has become an important means to label data, there is
great interest in estimating the ground truth from unreliable labels produced
by crowdworkers. The Dawid and Skene (DS) model is one of the most well-known
models in the study of crowdsourcing. Despite its practical popularity,
theoretical error analysis for the DS model has been conducted only under
restrictive assumptions on class priors, confusion matrices, or the number of
labels each worker provides. In this paper, we derive a minimax error rate
under a more practical setting for a broader class of crowdsourcing models
including the DS model as a special case. We further propose the worker
clustering model, which is more practical than the DS model under real
crowdsourcing settings. The wide applicability of our theoretical analysis
allows us to immediately investigate the behavior of this proposed model, which
cannot be analyzed by existing studies. Experimental results showed that there
is a strong similarity between the lower bound of the minimax error rate
derived by our theoretical analysis and the empirical error of the estimated
value.
|
In this paper, we derive a minimax error rate under a more practical setting for a broader class of crowdsourcing models including the DS model as a special case.
|
http://arxiv.org/abs/1802.04551v2
|
http://arxiv.org/pdf/1802.04551v2.pdf
|
ICML 2018 7
|
[
"Hideaki Imamura",
"Issei Sato",
"Masashi Sugiyama"
] |
[
"Clustering"
] | 2018-02-13T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2211
|
http://proceedings.mlr.press/v80/imamura18a/imamura18a.pdf
|
analysis-of-minimax-error-rate-for-1
| null |
[] |
https://paperswithcode.com/paper/joint-stem-detection-and-crop-weed
|
1806.03413
| null | null |
Joint Stem Detection and Crop-Weed Classification for Plant-specific Treatment in Precision Farming
|
Applying agrochemicals is the default procedure for conventional weed control
in crop production, but has negative impacts on the environment. Robots have
the potential to treat every plant in the field individually and thus can
reduce the required use of such chemicals. To achieve that, robots need the
ability to identify crops and weeds in the field and must additionally select
effective treatments. While certain types of weed can be treated mechanically,
other types need to be treated by (selective) spraying. In this paper, we
present an approach that provides the necessary information for effective
plant-specific treatment. It outputs the stem location for weeds, which allows
for mechanical treatments, and the covered area of the weed for selective
spraying. Our approach uses an end-to-end trainable fully convolutional network
that simultaneously estimates stem positions as well as the covered area of
crops and weeds. It jointly learns the class-wise stem detection and the
pixel-wise semantic segmentation. Experimental evaluations on different
real-world datasets show that our approach is able to reliably solve this
problem. Compared to state-of-the-art approaches, our approach not only
substantially improves the stem detection accuracy, i.e., distinguishing crop
and weed stems, but also provides an improvement in the semantic segmentation
performance.
| null |
http://arxiv.org/abs/1806.03413v1
|
http://arxiv.org/pdf/1806.03413v1.pdf
| null |
[
"Philipp Lottes",
"Jens Behley",
"Nived Chebrolu",
"Andres Milioto",
"Cyrill Stachniss"
] |
[
"General Classification",
"Semantic Segmentation"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fully-convolutional-networks-with-sequential
|
1806.03412
| null | null |
Fully Convolutional Networks with Sequential Information for Robust Crop and Weed Detection in Precision Farming
|
Reducing the use of agrochemicals is an important component towards
sustainable agriculture. Robots that can perform targeted weed control offer
the potential to contribute to this goal, for example, through specialized
weeding actions such as selective spraying or mechanical weed removal. A
prerequisite of such systems is a reliable and robust plant classification
system that is able to distinguish crop and weed in the field. A major
challenge in this context is the fact that different fields show a large
variability. Thus, classification systems have to robustly cope with
substantial environmental changes with respect to weed pressure and weed types,
growth stages of the crop, visual appearance, and soil conditions. In this
paper, we propose a novel crop-weed classification system that relies on a
fully convolutional network with an encoder-decoder structure and incorporates
spatial information by considering image sequences. Exploiting the crop
arrangement information that is observable from the image sequences enables our
system to robustly estimate a pixel-wise labeling of the images into crop and
weed, i.e., a semantic segmentation. We provide a thorough experimental
evaluation, which shows that our system generalizes well to previously unseen
fields under varying environmental conditions --- a key capability to actually
use such systems in precision farming. We provide comparisons to other
state-of-the-art approaches and show that our system substantially improves the
accuracy of crop-weed classification without requiring a retraining of the
model.
| null |
http://arxiv.org/abs/1806.03412v1
|
http://arxiv.org/pdf/1806.03412v1.pdf
| null |
[
"Philipp Lottes",
"Jens Behley",
"Andres Milioto",
"Cyrill Stachniss"
] |
[
"Classification",
"General Classification",
"Semantic Segmentation"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-scene-gist-with-convolutional-neural
|
1803.01967
| null | null |
Learning Scene Gist with Convolutional Neural Networks to Improve Object Recognition
|
Advancements in convolutional neural networks (CNNs) have made significant
strides toward achieving high performance levels on multiple object recognition
tasks. While some approaches utilize information from the entire scene to
propose regions of interest, the task of interpreting a particular region or
object is still performed independently of other objects and features in the
image. Here we demonstrate that a scene's 'gist' can significantly contribute
to how well humans can recognize objects. These findings are consistent with
the notion that humans foveate on an object and incorporate information from
the periphery to aid in recognition. We use a biologically inspired two-part
convolutional neural network ('GistNet') that models the fovea and periphery to
provide a proof-of-principle demonstration that computational object
recognition can significantly benefit from the gist of the scene as contextual
information. Our model yields accuracy improvements of up to 50% in certain
object categories when incorporating contextual gist, while only increasing the
original model size by 5%. This proposed model mirrors our intuition about how
the human visual system recognizes objects, suggesting specific biologically
plausible constraints to improve machine vision and taking initial steps
towards the challenge of scene understanding.
| null |
http://arxiv.org/abs/1803.01967v2
|
http://arxiv.org/pdf/1803.01967v2.pdf
| null |
[
"Kevin Wu",
"Eric Wu",
"Gabriel Kreiman"
] |
[
"Object",
"Object Recognition",
"Scene Understanding"
] | 2018-03-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deterministic-stretchy-regression
|
1806.03404
| null | null |
Deterministic Stretchy Regression
|
An extension of the regularized least-squares in which the estimation
parameters are stretchable is introduced and studied in this paper. The
solution of this ridge regression with stretchable parameters is given in
primal and dual spaces and in closed-form. Essentially, the proposed solution
stretches the covariance computation by a power term, thereby compressing or
amplifying the estimation parameters. To maintain the computation of power root
terms within the real space, an input transformation is proposed. The results
of an empirical evaluation in both synthetic and real-world data illustrate
that the proposed method is effective for compressive learning with
high-dimensional data.
| null |
http://arxiv.org/abs/1806.03404v1
|
http://arxiv.org/pdf/1806.03404v1.pdf
| null |
[
"Kar-Ann Toh",
"Lei Sun",
"Zhiping Lin"
] |
[
"regression"
] | 2018-06-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-probabilistic-framework-for-multi-view
|
1802.04630
| null | null |
A probabilistic framework for multi-view feature learning with many-to-many associations via neural networks
|
A simple framework Probabilistic Multi-view Graph Embedding (PMvGE) is
proposed for multi-view feature learning with many-to-many associations so that
it generalizes various existing multi-view methods. PMvGE is a probabilistic
model for predicting new associations via graph embedding of the nodes of data
vectors with links of their associations. Multi-view data vectors with
many-to-many associations are transformed by neural networks to feature vectors
in a shared space, and the probability of new association between two data
vectors is modeled by the inner product of their feature vectors. While
existing multi-view feature learning techniques can handle only one of
many-to-many associations or non-linear transformations, PMvGE can treat both
simultaneously. By combining Mercer's theorem and the universal approximation
theorem, we prove that PMvGE learns a wide class of similarity measures across
views. Our likelihood-based estimator enables efficient computation of
non-linear transformations of data vectors in large-scale datasets by minibatch
SGD, and numerical experiments illustrate that PMvGE outperforms existing
multi-view methods.
| null |
http://arxiv.org/abs/1802.04630v2
|
http://arxiv.org/pdf/1802.04630v2.pdf
|
ICML 2018 7
|
[
"Akifumi Okuno",
"Tetsuya Hada",
"Hidetoshi Shimodaira"
] |
[
"Graph Embedding"
] | 2018-02-13T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2020
|
http://proceedings.mlr.press/v80/okuno18a/okuno18a.pdf
|
a-probabilistic-framework-for-multi-view-1
| null |
[] |
https://paperswithcode.com/paper/unsupervised-learning-of-depth-and-ego-motion-2
|
1802.05522
| null | null |
Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints
|
We present a novel approach for unsupervised learning of depth and ego-motion
from monocular video. Unsupervised learning removes the need for separate
supervisory signals (depth or ego-motion ground truth, or multi-view video).
Prior work in unsupervised depth learning uses pixel-wise or gradient-based
losses, which only consider pixels in small local neighborhoods. Our main
contribution is to explicitly consider the inferred 3D geometry of the scene,
enforcing consistency of the estimated 3D point clouds and ego-motion across
consecutive frames. This is a challenging task and is solved by a novel
(approximate) backpropagation algorithm for aligning 3D structures.
We combine this novel 3D-based loss with 2D losses based on photometric
quality of frame reconstructions using estimated depth and ego-motion from
adjacent frames. We also incorporate validity masks to avoid penalizing areas
in which no useful information exists.
We test our algorithm on the KITTI dataset and on a video dataset captured on
an uncalibrated mobile phone camera. Our proposed approach consistently
improves depth estimates on both datasets, and outperforms the state-of-the-art
for both depth and ego-motion. Because we only require a simple video, learning
depth and ego-motion on large and varied datasets becomes possible. We
demonstrate this by training on the low quality uncalibrated video dataset and
evaluating on KITTI, ranking among top performing prior methods which are
trained on KITTI itself.
|
We present a novel approach for unsupervised learning of depth and ego-motion from monocular video.
|
http://arxiv.org/abs/1802.05522v2
|
http://arxiv.org/pdf/1802.05522v2.pdf
|
CVPR 2018 6
|
[
"Reza Mahjourian",
"Martin Wicke",
"Anelia Angelova"
] |
[
"3D geometry",
"Depth And Camera Motion",
"Depth Estimation",
"Monocular Depth Estimation",
"Simultaneous Localization and Mapping"
] | 2018-02-15T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Mahjourian_Unsupervised_Learning_of_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Mahjourian_Unsupervised_Learning_of_CVPR_2018_paper.pdf
|
unsupervised-learning-of-depth-and-ego-motion-3
| null |
[] |
https://paperswithcode.com/paper/going-deeper-in-spiking-neural-networks-vgg
|
1802.02627
| null | null |
Going Deeper in Spiking Neural Networks: VGG and Residual Architectures
|
Over the past few years, Spiking Neural Networks (SNNs) have become popular
as a possible pathway to enable low-power event-driven neuromorphic hardware.
However, their application in machine learning has largely been limited to
very shallow neural network architectures for simple problems. In this paper,
we propose a novel algorithmic technique for generating an SNN with a deep
architecture, and demonstrate its effectiveness on complex visual recognition
problems such as CIFAR-10 and ImageNet. Our technique applies to both VGG and
Residual network architectures, with significantly better accuracy than the
state-of-the-art. Finally, we present analysis of the sparse event-driven
computations to demonstrate reduced hardware overhead when operating in the
spiking domain.
|
Over the past few years, Spiking Neural Networks (SNNs) have become popular as a possible pathway to enable low-power event-driven neuromorphic hardware.
|
http://arxiv.org/abs/1802.02627v4
|
http://arxiv.org/pdf/1802.02627v4.pdf
| null |
[
"Abhronil Sengupta",
"Yuting Ye",
"Robert Wang",
"Chiao Liu",
"Kaushik Roy"
] |
[] | 2018-02-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Ethereum has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Ethereum transaction not confirmed, your Ethereum wallet not showing balance, or you're trying to recover a lost Ethereum wallet, knowing where to get help is essential. That’s why the Ethereum customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Ethereum Customer Support Number +1-833-534-1729\r\nEthereum operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Ethereum Transaction Not Confirmed\r\nOne of the most common concerns is when a Ethereum transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Ethereum Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Ethereum wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Ethereum Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Ethereum wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Ethereum Deposit Not Received\r\nIf someone has sent you Ethereum but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Ethereum deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Ethereum Transaction Stuck or Pending\r\nSometimes your Ethereum transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Ethereum Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Ethereum wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Ethereum Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Ethereum tech.\r\n\r\n24/7 Availability: Ethereum doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Ethereum Support and Wallet Issues\r\nQ1: Can Ethereum support help me recover stolen BTC?\r\nA: While Ethereum transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Ethereum transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Ethereum’s official number (Ethereum is decentralized), it connects you to trained professionals experienced in resolving all major Ethereum issues.\r\n\r\nFinal Thoughts\r\nEthereum is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Ethereum transaction not confirmed, your Ethereum wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Ethereum customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Ethereum Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "Ethereum Customer Service Number +1-833-534-1729",
"source_title": "Very Deep Convolutional Networks for Large-Scale Image Recognition",
"source_url": "http://arxiv.org/abs/1409.1556v6"
}
] |
https://paperswithcode.com/paper/cs-vqa-visual-question-answering-with
|
1806.03379
| null | null |
CS-VQA: Visual Question Answering with Compressively Sensed Images
|
Visual Question Answering (VQA) is a complex semantic task requiring both
natural language processing and visual recognition. In this paper, we explore
whether VQA is solvable when images are captured in a sub-Nyquist compressive
paradigm. We develop a series of deep-network architectures that exploit
available compressive data to increasing degrees of accuracy, and show that VQA
is indeed solvable in the compressed domain. Our results show that there is
nominal degradation in VQA performance when using compressive measurements, but
that accuracy can be recovered when VQA pipelines are used in conjunction with
state-of-the-art deep neural networks for CS reconstruction. The results
presented yield important implications for resource-constrained VQA
applications.
| null |
http://arxiv.org/abs/1806.03379v1
|
http://arxiv.org/pdf/1806.03379v1.pdf
| null |
[
"Li-Chi Huang",
"Kuldeep Kulkarni",
"Anik Jha",
"Suhas Lohit",
"Suren Jayasuriya",
"Pavan Turaga"
] |
[
"Question Answering",
"Visual Question Answering",
"Visual Question Answering (VQA)"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/self-supervisory-signals-for-object-discovery
|
1806.03370
| null | null |
Self-supervisory Signals for Object Discovery and Detection
|
In robotic applications, we often face the challenge of discovering new
objects while having very little or no labelled training data. In this paper we
explore the use of self-supervision provided by a robot traversing an
environment to learn representations of encountered objects. Knowledge of
ego-motion and depth perception enables the agent to effectively associate
multiple object proposals, which serve as training data for learning object
representations from unlabelled images. We demonstrate the utility of this
representation in two ways. First, we can automatically discover objects by
performing clustering in the learned embedding space. Each resulting cluster
contains examples of one instance seen from various viewpoints and scales.
Second, given a small number of labeled images, we can efficiently learn
detectors for these labels. In the few-shot regime, these detectors have a
substantially higher mAP of 0.22 compared to 0.12 of off-the-shelf standard
detectors trained on this limited data. Thus, the proposed self-supervision
results in effective environment specific object discovery and detection at no
or very small human labeling cost.
| null |
http://arxiv.org/abs/1806.03370v1
|
http://arxiv.org/pdf/1806.03370v1.pdf
| null |
[
"Etienne Pot",
"Alexander Toshev",
"Jana Kosecka"
] |
[
"Clustering",
"Object",
"Object Discovery"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sarcasmdetection-is-soooo-general-towards-a
|
1806.03369
| null | null |
#SarcasmDetection is soooo general! Towards a Domain-Independent Approach for Detecting Sarcasm
|
Automatic sarcasm detection methods have traditionally been designed for
maximum performance on a specific domain. This poses challenges for those
wishing to transfer those approaches to other existing or novel domains, which
may be typified by very different language characteristics. We develop a
general set of features and evaluate it under different training scenarios
utilizing in-domain and/or out-of-domain training data. The best-performing
scenario, training on both while employing a domain adaptation step, achieves
an F1 of 0.780, which is well above baseline F1-measures of 0.515 and 0.345. We
also show that the approach outperforms the best results from prior work on the
same target domain.
| null |
http://arxiv.org/abs/1806.03369v1
|
http://arxiv.org/pdf/1806.03369v1.pdf
| null |
[
"Natalie Parde",
"Rodney D. Nielsen"
] |
[
"Domain Adaptation",
"Sarcasm Detection"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-models-of-interactions-across-sets
|
1803.02879
| null | null |
Deep Models of Interactions Across Sets
|
We use deep learning to model interactions across two or more sets of
objects, such as user-movie ratings, protein-drug bindings, or ternary
user-item-tag interactions. The canonical representation of such interactions
is a matrix (or a higher-dimensional tensor) with an exchangeability property:
the encoding's meaning is not changed by permuting rows or columns. We argue
that models should hence be Permutation Equivariant (PE): constrained to make
the same predictions across such permutations. We present a parameter-sharing
scheme and prove that it could not be made any more expressive without
violating PE. This scheme yields three benefits. First, we demonstrate
state-of-the-art performance on multiple matrix completion benchmarks. Second,
our models require a number of parameters independent of the numbers of
objects, and thus scale well to large datasets. Third, models can be queried
about new objects that were not available at training time, but for which
interactions have since been observed. In experiments, our models achieved
surprisingly good generalization performance on this matrix extrapolation task,
both within domains (e.g., new users and new movies drawn from the same
distribution used for training) and even across domains (e.g., predicting music
ratings after training on movies).
|
In experiments, our models achieved surprisingly good generalization performance on this matrix extrapolation task, both within domains (e. g., new users and new movies drawn from the same distribution used for training) and even across domains (e. g., predicting music ratings after training on movies).
|
http://arxiv.org/abs/1803.02879v2
|
http://arxiv.org/pdf/1803.02879v2.pdf
|
ICML 2018 7
|
[
"Jason Hartford",
"Devon R Graham",
"Kevin Leyton-Brown",
"Siamak Ravanbakhsh"
] |
[
"Collaborative Filtering",
"Matrix Completion",
"Recommendation Systems",
"TAG"
] | 2018-03-07T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2428
|
http://proceedings.mlr.press/v80/hartford18a/hartford18a.pdf
|
deep-models-of-interactions-across-sets-1
| null |
[] |
https://paperswithcode.com/paper/dynamically-hierarchy-revolution-dirnet-for
|
1806.01248
| null | null |
Dynamically Hierarchy Revolution: DirNet for Compressing Recurrent Neural Network on Mobile Devices
|
Recurrent neural networks (RNNs) achieve cutting-edge performance on a
variety of problems. However, due to their high computational and memory
demands, deploying RNNs on resource constrained mobile devices is a challenging
task. To guarantee minimum accuracy loss with higher compression rate and
driven by the mobile resource requirement, we introduce a novel model
compression approach DirNet based on an optimized fast dictionary learning
algorithm, which 1) dynamically mines the dictionary atoms of the projection
dictionary matrix within each layer to adjust the compression rate and 2) adaptively
changes the sparsity of the sparse codes across the hierarchical layers.
Experimental results on a language model and an ASR model trained with a 1000h
speech dataset demonstrate that our method significantly outperforms prior
approaches. Evaluated on off-the-shelf mobile devices, we are able to reduce
the size of the original model by eight times with real-time model inference and
negligible accuracy loss.
| null |
http://arxiv.org/abs/1806.01248v2
|
http://arxiv.org/pdf/1806.01248v2.pdf
| null |
[
"Jie Zhang",
"Xiaolong Wang",
"Dawei Li",
"Yalin Wang"
] |
[
"Dictionary Learning",
"Language Modeling",
"Language Modelling",
"Model Compression"
] | 2018-06-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-content-based-late-fusion-approach-applied
|
1806.03361
| null | null |
A Content-Based Late Fusion Approach Applied to Pedestrian Detection
|
The variety of pedestrian detectors proposed in recent years has encouraged
some works to fuse pedestrian detectors to achieve a more accurate detection.
The intuition behind this is to combine the detectors based on their spatial
consensus. We propose a novel method called Content-Based Spatial Consensus
(CSBC), which, in addition to relying on spatial consensus, considers the
content of the detection windows to learn a weighted-fusion of pedestrian
detectors. The result is a reduction in false alarms and an enhancement in the
detection. In this work, we also demonstrate that there is only a small influence of
the feature used to learn the contents of the windows of each detector, which
enables our method to be efficient even when employing simple features. The CSBC
outperforms state-of-the-art fusion methods on the ETH dataset and on the Caltech
dataset. Particularly, our method is more efficient since fewer detectors are
necessary to achieve expressive results.
| null |
http://arxiv.org/abs/1806.03361v1
|
http://arxiv.org/pdf/1806.03361v1.pdf
| null |
[
"Jessica Sena",
"Artur Jordao",
"William Robson Schwartz"
] |
[
"Pedestrian Detection"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/situated-mapping-of-sequential-instructions
|
1805.10209
| null | null |
Situated Mapping of Sequential Instructions to Actions with Single-step Reward Observation
|
We propose a learning approach for mapping context-dependent sequential
instructions to actions. We address the problem of discourse and state
dependencies with an attention-based model that considers both the history of
the interaction and the state of the world. To train from start and goal states
without access to demonstrations, we propose SESTRA, a learning algorithm that
takes advantage of single-step reward observations and immediate expected
reward maximization. We evaluate on the SCONE domains, and show absolute
accuracy improvements of 9.8%-25.3% across the domains over approaches that use
high-level logical representations.
|
We propose a learning approach for mapping context-dependent sequential instructions to actions.
|
http://arxiv.org/abs/1805.10209v2
|
http://arxiv.org/pdf/1805.10209v2.pdf
|
ACL 2018 7
|
[
"Alane Suhr",
"Yoav Artzi"
] |
[] | 2018-05-25T00:00:00 |
https://aclanthology.org/P18-1193
|
https://aclanthology.org/P18-1193.pdf
|
situated-mapping-of-sequential-instructions-1
| null |
[] |
https://paperswithcode.com/paper/measuring-conversational-productivity-in
|
1806.03357
| null | null |
Measuring Conversational Productivity in Child Forensic Interviews
|
Child Forensic Interviewing (FI) presents a challenge for effective
information retrieval and decision making. The high stakes associated with the
process demand that expert legal interviewers are able to effectively establish
a channel of communication and elicit substantive knowledge from the
child-client while minimizing potential for experiencing trauma. As a first
step toward computationally modeling and producing quality spoken interviewing
strategies and a generalized understanding of interview dynamics, we propose a
novel methodology to computationally model effectiveness criteria, by applying
summarization and topic modeling techniques to objectively measure and rank the
responsiveness and conversational productivity of a child during FI. We score
information retrieval by constructing an agenda to represent general topics of
interest and measuring alignment with a given response and leveraging lexical
entrainment for responsiveness. For comparison, we present our methods along
with traditional metrics of evaluation and discuss the use of prior information
for generating situational awareness.
| null |
http://arxiv.org/abs/1806.03357v1
|
http://arxiv.org/pdf/1806.03357v1.pdf
| null |
[
"Victor Ardulov",
"Manoj Kumar",
"Shanna Williams",
"Thomas Lyon",
"Shrikanth Narayanan"
] |
[
"Decision Making",
"Information Retrieval",
"Retrieval"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/semi-amortized-variational-autoencoders
|
1802.02550
| null | null |
Semi-Amortized Variational Autoencoders
|
Amortized variational inference (AVI) replaces instance-specific local
inference with a global inference network. While AVI has enabled efficient
training of deep generative models such as variational autoencoders (VAE),
recent empirical work suggests that inference networks can produce suboptimal
variational parameters. We propose a hybrid approach, to use AVI to initialize
the variational parameters and run stochastic variational inference (SVI) to
refine them. Crucially, the local SVI procedure is itself differentiable, so
the inference network and generative model can be trained end-to-end with
gradient-based optimization. This semi-amortized approach enables the use of
rich generative models without experiencing the posterior-collapse phenomenon
common in training VAEs for problems like text generation. Experiments show
this approach outperforms strong autoregressive and variational baselines on
standard text and image datasets.
|
Amortized variational inference (AVI) replaces instance-specific local inference with a global inference network.
|
http://arxiv.org/abs/1802.02550v7
|
http://arxiv.org/pdf/1802.02550v7.pdf
|
ICML 2018 7
|
[
"Yoon Kim",
"Sam Wiseman",
"Andrew C. Miller",
"David Sontag",
"Alexander M. Rush"
] |
[
"Text Generation",
"Variational Inference"
] | 2018-02-07T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1981
|
http://proceedings.mlr.press/v80/kim18e/kim18e.pdf
|
semi-amortized-variational-autoencoders-1
| null |
[] |
https://paperswithcode.com/paper/the-effect-of-planning-shape-on-dyna-style
|
1806.01825
| null | null |
The Effect of Planning Shape on Dyna-style Planning in High-dimensional State Spaces
|
Dyna is a fundamental approach to model-based reinforcement learning (MBRL)
that interleaves planning, acting, and learning in an online setting. In the
most typical application of Dyna, the dynamics model is used to generate
one-step transitions from selected start states from the agent's history, which
are used to update the agent's value function or policy as if they were real
experiences. In this work, one-step Dyna was applied to several games from the
Arcade Learning Environment (ALE). We found that the model-based updates
offered surprisingly little benefit over simply performing more updates with
the agent's existing experience, even when using a perfect model. We
hypothesize that to get the most from planning, the model must be used to
generate unfamiliar experience. To test this, we experimented with the "shape"
of planning in multiple different concrete instantiations of Dyna, performing
fewer, longer rollouts, rather than many short rollouts. We found that planning
shape has a profound impact on the efficacy of Dyna for both perfect and
learned models. In addition to these findings regarding Dyna in general, our
results represent, to our knowledge, the first time that a learned dynamics
model has been successfully used for planning in the ALE, suggesting that Dyna
may be a viable approach to MBRL in the ALE and other high-dimensional
problems.
| null |
http://arxiv.org/abs/1806.01825v3
|
http://arxiv.org/pdf/1806.01825v3.pdf
| null |
[
"G. Zacharias Holland",
"Erin J. Talvitie",
"Michael Bowling"
] |
[
"Atari Games",
"Model-based Reinforcement Learning",
"Reinforcement Learning"
] | 2018-06-05T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/an-optimal-algorithm-for-online-unconstrained
|
1806.03349
| null | null |
An Optimal Algorithm for Online Unconstrained Submodular Maximization
|
We consider a basic problem at the interface of two fundamental fields:
submodular optimization and online learning. In the online unconstrained
submodular maximization (online USM) problem, there is a universe
$[n]=\{1,2,...,n\}$ and a sequence of $T$ nonnegative (not necessarily
monotone) submodular functions arrive over time. The goal is to design a
computationally efficient online algorithm, which chooses a subset of $[n]$ at
each time step as a function only of the past, such that the accumulated value
of the chosen subsets is as close as possible to the maximum total value of a
fixed subset in hindsight. Our main result is a polynomial-time no-$1/2$-regret
algorithm for this problem, meaning that for every sequence of nonnegative
submodular functions, the algorithm's expected total value is at least $1/2$
times that of the best subset in hindsight, up to an error term sublinear in
$T$. The factor of $1/2$ cannot be improved upon by any polynomial-time online
algorithm when the submodular functions are presented as value oracles.
Previous work on the offline problem implies that picking a subset uniformly at
random in each time step achieves zero $1/4$-regret.
A byproduct of our techniques is an explicit subroutine for the two-experts
problem that has an unusually strong regret guarantee: the total value of its
choices is comparable to twice the total value of either expert on rounds it
did not pick that expert. This subroutine may be of independent interest.
| null |
http://arxiv.org/abs/1806.03349v1
|
http://arxiv.org/pdf/1806.03349v1.pdf
| null |
[
"Tim Roughgarden",
"Joshua R. Wang"
] |
[] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dsslic-deep-semantic-segmentation-based
|
1806.03348
| null | null |
DSSLIC: Deep Semantic Segmentation-based Layered Image Compression
|
Deep learning has revolutionized many computer vision fields in the last few
years, including learning-based image compression. In this paper, we propose a
deep semantic segmentation-based layered image compression (DSSLIC) framework
in which the semantic segmentation map of the input image is obtained and
encoded as the base layer of the bit-stream. A compact representation of the
input image is also generated and encoded as the first enhancement layer. The
segmentation map and the compact version of the image are then employed to
obtain a coarse reconstruction of the image. The residual between the input and
the coarse reconstruction is additionally encoded as another enhancement layer.
Experimental results show that the proposed framework outperforms the
H.265/HEVC-based BPG and other codecs in both PSNR and MS-SSIM metrics across a
wide range of bit rates in the RGB domain. Moreover, since the semantic segmentation map
is included in the bit-stream, the proposed scheme can facilitate many other
tasks such as image search and object-based adaptive image compression.
|
A compact representation of the input image is also generated and encoded as the first enhancement layer.
|
http://arxiv.org/abs/1806.03348v3
|
http://arxiv.org/pdf/1806.03348v3.pdf
| null |
[
"Mohammad Akbari",
"Jie Liang",
"Jingning Han"
] |
[
"Image Compression",
"Image Retrieval",
"MS-SSIM",
"Segmentation",
"Semantic Segmentation",
"SSIM"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/asp-learning-to-forget-with-adaptive-synaptic
|
1703.07655
| null | null |
ASP: Learning to Forget with Adaptive Synaptic Plasticity in Spiking Neural Networks
|
A fundamental feature of learning in animals is the "ability to forget" that
allows an organism to perceive, model and make decisions from disparate streams
of information and adapt to changing environments. Against this backdrop, we
present a novel unsupervised learning mechanism ASP (Adaptive Synaptic
Plasticity) for improved recognition with Spiking Neural Networks (SNNs) for
real time on-line learning in a dynamic environment. We incorporate an adaptive
weight decay mechanism with the traditional Spike Timing Dependent Plasticity
(STDP) learning to model adaptivity in SNNs. The leak rate of the synaptic
weights is modulated based on the temporal correlation between the spiking
patterns of the pre- and post-synaptic neurons. This mechanism helps in gradual
forgetting of insignificant data while retaining significant, yet old,
information. ASP, thus, maintains a balance between forgetting and immediate
learning to construct a stable-plastic self-adaptive SNN for continuously
changing inputs. We demonstrate that the proposed learning methodology
addresses catastrophic forgetting while yielding significantly improved
accuracy over the conventional STDP learning method for digit recognition
applications. Additionally, we observe that the proposed learning model
automatically encodes selective attention towards relevant features in the
input data while eliminating the influence of background noise (or denoising)
further improving the robustness of the ASP learning.
| null |
http://arxiv.org/abs/1703.07655v2
|
http://arxiv.org/pdf/1703.07655v2.pdf
| null |
[
"Priyadarshini Panda",
"Jason M. Allred",
"Shriram Ramanathan",
"Kaushik Roy"
] |
[
"Denoising"
] | 2017-03-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/discovering-signals-from-web-sources-to
|
1806.03342
| null | null |
Discovering Signals from Web Sources to Predict Cyber Attacks
|
Cyber attacks are growing in frequency and severity. Over the past year alone
we have witnessed massive data breaches that stole personal information of
millions of people and wide-scale ransomware attacks that paralyzed critical
infrastructure of several countries. Combating the rising cyber threat calls
for a multi-pronged strategy, which includes predicting when these attacks will
occur. The intuition driving our approach is this: during the planning and
preparation stages, hackers leave digital traces of their activities on both
the surface web and dark web in the form of discussions on platforms like
hacker forums, social media, blogs and the like. These data provide predictive
signals that allow anticipating cyber attacks. In this paper, we describe
machine learning techniques based on deep neural networks and autoregressive
time series models that leverage external signals from publicly available Web
sources to forecast cyber attacks. Performance of our framework across ground
truth data over real-world forecasting tasks shows that our methods yield a
significant lift or increase of F1 for the top signals on predicted cyber
attacks. Our results suggest that, when deployed, our system will be able to
provide an effective line of defense against various types of targeted cyber
attacks.
| null |
http://arxiv.org/abs/1806.03342v1
|
http://arxiv.org/pdf/1806.03342v1.pdf
| null |
[
"Palash Goyal",
"KSM Tozammel Hossain",
"Ashok Deb",
"Nazgol Tavabi",
"Nathan Bartley",
"Andr'es Abeliuk",
"Emilio Ferrara",
"Kristina Lerman"
] |
[
"Time Series",
"Time Series Analysis"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-the-reward-function-for-a
|
1801.09624
| null | null |
Learning the Reward Function for a Misspecified Model
|
In model-based reinforcement learning it is typical to decouple the problems
of learning the dynamics model and learning the reward function. However, when
the dynamics model is flawed, it may generate erroneous states that would never
occur in the true environment. It is not clear a priori what value the reward
function should assign to such states. This paper presents a novel error bound
that accounts for the reward model's behavior in states sampled from the model.
This bound is used to extend the existing Hallucinated DAgger-MC algorithm,
which offers theoretical performance guarantees in deterministic MDPs that do
not assume a perfect model can be learned. Empirically, this approach to reward
learning can yield dramatic improvements in control performance when the
dynamics model is flawed.
|
Empirically, this approach to reward learning can yield dramatic improvements in control performance when the dynamics model is flawed.
|
http://arxiv.org/abs/1801.09624v3
|
http://arxiv.org/pdf/1801.09624v3.pdf
|
ICML 2018 7
|
[
"Erik Talvitie"
] |
[
"model",
"Model-based Reinforcement Learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-01-29T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1913
|
http://proceedings.mlr.press/v80/talvitie18a/talvitie18a.pdf
|
learning-the-reward-function-for-a-1
| null |
[] |
https://paperswithcode.com/paper/randomized-prior-functions-for-deep
|
1806.03335
| null | null |
Randomized Prior Functions for Deep Reinforcement Learning
|
Dealing with uncertainty is essential for efficient reinforcement learning.
There is a growing literature on uncertainty estimation for deep learning from
fixed datasets, but many of the most popular approaches are poorly-suited to
sequential decision problems. Other methods, such as bootstrap sampling, have
no mechanism for uncertainty that does not come from the observed data. We
highlight why this can be a crucial shortcoming and propose a simple remedy
through addition of a randomized untrainable `prior' network to each ensemble
member. We prove that this approach is efficient with linear representations,
provide simple illustrations of its efficacy with nonlinear representations and
show that this approach scales to large-scale problems far better than previous
attempts.
|
Dealing with uncertainty is essential for efficient reinforcement learning.
|
http://arxiv.org/abs/1806.03335v2
|
http://arxiv.org/pdf/1806.03335v2.pdf
|
NeurIPS 2018 12
|
[
"Ian Osband",
"John Aslanides",
"Albin Cassirer"
] |
[
"Deep Reinforcement Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-08T00:00:00 |
http://papers.nips.cc/paper/8080-randomized-prior-functions-for-deep-reinforcement-learning
|
http://papers.nips.cc/paper/8080-randomized-prior-functions-for-deep-reinforcement-learning.pdf
|
randomized-prior-functions-for-deep-1
| null |
[] |
https://paperswithcode.com/paper/securing-distributed-machine-learning-in-high
|
1804.10140
| null | null |
Securing Distributed Gradient Descent in High Dimensional Statistical Learning
|
We consider unreliable distributed learning systems wherein the training data is kept confidential by external workers, and the learner has to interact closely with those workers to train a model. In particular, we assume that there exists a system adversary that can adaptively compromise some workers; the compromised workers deviate from their local designed specifications by sending out arbitrarily malicious messages. We assume in each communication round, up to $q$ out of the $m$ workers suffer Byzantine faults. Each worker keeps a local sample of size $n$ and the total sample size is $N=nm$. We propose a secured variant of the gradient descent method that can tolerate up to a constant fraction of Byzantine workers, i.e., $q/m = O(1)$. Moreover, we show the statistical estimation error of the iterates converges in $O(\log N)$ rounds to $O(\sqrt{q/N} + \sqrt{d/N})$, where $d$ is the model dimension. As long as $q=O(d)$, our proposed algorithm achieves the optimal error rate $O(\sqrt{d/N})$. Our results are obtained under some technical assumptions. Specifically, we assume strongly-convex population risk. Nevertheless, the empirical risk (sample version) is allowed to be non-convex. The core of our method is to robustly aggregate the gradients computed by the workers based on the filtering procedure proposed by Steinhardt et al. On the technical front, deviating from the existing literature on robustly estimating a finite-dimensional mean vector, we establish a {\em uniform} concentration of the sample covariance matrix of gradients, and show that the aggregated gradient, as a function of model parameter, converges uniformly to the true gradient function. To get a near-optimal uniform concentration bound, we develop a new matrix concentration inequality, which might be of independent interest.
| null |
https://arxiv.org/abs/1804.10140v3
|
https://arxiv.org/pdf/1804.10140v3.pdf
| null |
[
"Lili Su",
"Jiaming Xu"
] |
[
"Vocal Bursts Intensity Prediction"
] | 2018-04-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/provable-defenses-against-adversarial
|
1711.00851
| null | null |
Provable defenses against adversarial examples via the convex outer adversarial polytope
|
We propose a method to learn deep ReLU-based classifiers that are provably
robust against norm-bounded adversarial perturbations on the training data. For
previously unseen examples, the approach is guaranteed to detect all
adversarial examples, though it may flag some non-adversarial examples as well.
The basic idea is to consider a convex outer approximation of the set of
activations reachable through a norm-bounded perturbation, and we develop a
robust optimization procedure that minimizes the worst case loss over this
outer region (via a linear program). Crucially, we show that the dual problem
to this linear program can be represented itself as a deep network similar to
the backpropagation network, leading to very efficient optimization approaches
that produce guaranteed bounds on the robust loss. The end result is that by
executing a few more forward and backward passes through a slightly modified
version of the original network (though possibly with much larger batch sizes),
we can learn a classifier that is provably robust to any norm-bounded
adversarial attack. We illustrate the approach on a number of tasks to train
classifiers with robust adversarial guarantees (e.g. for MNIST, we produce a
convolutional classifier that provably has less than 5.8% test error for any
adversarial attack with bounded $\ell_\infty$ norm less than $\epsilon = 0.1$),
and code for all experiments in the paper is available at
https://github.com/locuslab/convex_adversarial.
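
For illustration, the sketch below propagates interval bounds through a ReLU network; this is a looser outer approximation than the paper's LP-based convex polytope, but shows how the set of activations reachable under a norm-bounded perturbation can be bounded layer by layer.

```python
import numpy as np

def interval_bounds(layers, x, eps):
    """Bound all activations reachable from ||delta||_inf <= eps.
    'layers' is a list of (W, b) numpy pairs; ReLU on hidden layers."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
        mid, rad = W @ mid + b, np.abs(W) @ rad   # affine image of a box
        lo, hi = mid - rad, mid + rad
        if i < len(layers) - 1:                   # ReLU except at output
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi                                  # bounds on the logits
```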
|
We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations on the training data.
|
http://arxiv.org/abs/1711.00851v3
|
http://arxiv.org/pdf/1711.00851v3.pdf
|
ICML 2018 7
|
[
"Eric Wong",
"J. Zico Kolter"
] |
[
"Adversarial Attack"
] | 2017-11-02T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2136
|
http://proceedings.mlr.press/v80/wong18a/wong18a.pdf
|
provable-defenses-against-adversarial-1
| null |
[] |
https://paperswithcode.com/paper/learning-to-rank-for-censored-survival-data
|
1806.01984
| null | null |
Learning to rank for censored survival data
|
Survival analysis is a type of semi-supervised ranking task where the target
output (the survival time) is often right-censored. Utilizing this information
is a challenge because it is not obvious how to correctly incorporate these
censored examples into a model. We study how three categories of loss
functions, namely partial likelihood methods, rank methods, and our
classification method based on a Wasserstein metric (WM) and the non-parametric
Kaplan-Meier estimate of the probability density to impute the labels of
censored examples, can take advantage of this information. The proposed method
allows us to have a model that predicts the probability distribution of an
event. If a clinician had access to the detailed probability of an event over
time this would help in treatment planning. For example, determining if the
risk of kidney graft rejection is constant or peaked after some time. Also, we
demonstrate that this approach directly optimizes the expected C-index, which is
the most common evaluation metric for ranking survival models.
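
A minimal sketch of the non-parametric Kaplan-Meier estimate used to impute labels of censored examples; the input encoding (1 = event observed, 0 = censored) is an assumption.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate S(t) from right-censored data."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    surv, curve = 1.0, []
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                   # still under observation
        deaths = np.sum((times == t) & (events == 1))
        surv *= 1.0 - deaths / at_risk
        curve.append((t, surv))
    return curve
```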
|
Survival analysis is a type of semi-supervised ranking task where the target output (the survival time) is often right-censored.
|
http://arxiv.org/abs/1806.01984v2
|
http://arxiv.org/pdf/1806.01984v2.pdf
| null |
[
"Margaux Luck",
"Tristan Sylvain",
"Joseph Paul Cohen",
"Heloise Cardinal",
"Andrea Lodi",
"Yoshua Bengio"
] |
[
"Learning-To-Rank",
"Survival Analysis"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/unsupervised-learning-for-surgical-motion-by
|
1806.03318
| null | null |
Unsupervised Learning for Surgical Motion by Learning to Predict the Future
|
We show that it is possible to learn meaningful representations of surgical
motion, without supervision, by learning to predict the future. An architecture
that combines an RNN encoder-decoder and mixture density networks (MDNs) is
developed to model the conditional distribution over future motion given past
motion. We show that the learned encodings naturally cluster according to
high-level activities, and we demonstrate the usefulness of these learned
encodings in the context of information retrieval, where a database of surgical
motion is searched for suturing activity using a motion-based query. Future
prediction with MDNs is found to significantly outperform simpler baselines as
well as the best previously-published result for this task, advancing
state-of-the-art performance from an F1 score of 0.60 +- 0.14 to 0.77 +- 0.05.
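
A minimal PyTorch sketch of the MDN output head, assuming diagonal Gaussian components; the hidden size and number of components K are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    """Map an RNN hidden state to a K-component Gaussian mixture
    over the next motion vector (diagonal covariances assumed)."""
    def __init__(self, hidden, dim, K):
        super().__init__()
        self.K, self.dim = K, dim
        self.pi = nn.Linear(hidden, K)               # mixture logits
        self.mu = nn.Linear(hidden, K * dim)         # component means
        self.log_sigma = nn.Linear(hidden, K * dim)  # per-dim log stds

    def forward(self, h):
        log_pi = torch.log_softmax(self.pi(h), dim=-1)
        mu = self.mu(h).view(-1, self.K, self.dim)
        sigma = self.log_sigma(h).view(-1, self.K, self.dim).exp()
        return log_pi, mu, sigma   # feed into a mixture NLL loss
```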
| null |
http://arxiv.org/abs/1806.03318v1
|
http://arxiv.org/pdf/1806.03318v1.pdf
| null |
[
"Robert DiPietro",
"Gregory D. Hager"
] |
[
"Decoder",
"Future prediction",
"Information Retrieval",
"Retrieval"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adversarial-meta-learning
|
1806.03316
| null |
Z_3x5eFk1l-
|
Adversarial Meta-Learning
|
Meta-learning enables a model to learn from very limited data to undertake a new task. In this paper, we study the general meta-learning with adversarial samples. We present a meta-learning algorithm, ADML (ADversarial Meta-Learner), which leverages clean and adversarial samples to optimize the initialization of a learning model in an adversarial manner. ADML leads to the following desirable properties: 1) it turns out to be very effective even in the cases with only clean samples; 2) it is robust to adversarial samples, i.e., unlike other meta-learning algorithms, it only leads to a minor performance degradation when there are adversarial samples; 3) it sheds light on tackling the cases with limited and even contaminated samples. It has been shown by extensive experimental results that ADML consistently outperforms three representative meta-learning algorithms in the cases involving adversarial samples, on two widely-used image datasets, MiniImageNet and CIFAR100, in terms of both accuracy and robustness.
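
The adversarial samples mixed with clean ones during meta-training can be generated with any standard attack; below is a one-step FGSM sketch in PyTorch (the paper's exact attack configuration is not reproduced here).

```python
import torch

def fgsm(model, loss_fn, x, y, eps):
    """One-step fast gradient sign attack: perturb x in the direction
    that increases the loss, with an l_inf budget of eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()
```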
| null |
https://arxiv.org/abs/1806.03316v3
|
https://arxiv.org/pdf/1806.03316v3.pdf
| null |
[
"Chengxiang Yin",
"Jian Tang",
"Zhiyuan Xu",
"Yanzhi Wang"
] |
[
"Meta-Learning"
] | 2018-06-08T00:00:00 |
https://openreview.net/forum?id=Z_3x5eFk1l-
|
https://openreview.net/pdf?id=Z_3x5eFk1l-
| null | null |
[] |
https://paperswithcode.com/paper/stein-points
|
1803.10161
| null | null |
Stein Points
|
An important task in computational statistics and machine learning is to
approximate a posterior distribution $p(x)$ with an empirical measure supported
on a set of representative points $\{x_i\}_{i=1}^n$. This paper focuses on
methods where the selection of points is essentially deterministic, with an
emphasis on achieving accurate approximation when $n$ is small. To this end, we
present `Stein Points'. The idea is to exploit either a greedy or a conditional
gradient method to iteratively minimise a kernel Stein discrepancy between the
empirical measure and $p(x)$. Our empirical results demonstrate that Stein
Points enable accurate approximation of the posterior at modest computational
cost. In addition, theoretical results are provided to establish convergence of
the method.
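
A minimal 1-D sketch of the greedy variant, assuming an RBF base kernel and access to the score function s(x) = d/dx log p(x); the paper also covers conditional-gradient updates and other kernels.

```python
import numpy as np

def stein_kernel(x, y, score, h=1.0):
    """Langevin Stein kernel k0 built from a 1-D RBF base kernel."""
    d = x - y
    k = np.exp(-d * d / (2 * h * h))
    dkx, dky = -d / h**2 * k, d / h**2 * k
    dkxy = (1.0 / h**2 - d * d / h**4) * k
    return dkxy + score(x) * dky + score(y) * dkx + score(x) * score(y) * k

def greedy_stein_points(score, candidates, n):
    """Greedily add the candidate that most reduces the squared kernel
    Stein discrepancy of the current point set."""
    pts = []
    for _ in range(n):
        gain = lambda c: (stein_kernel(c, c, score)
                          + 2.0 * sum(stein_kernel(c, p, score) for p in pts))
        pts.append(min(candidates, key=gain))
    return pts
```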
|
An important task in computational statistics and machine learning is to approximate a posterior distribution $p(x)$ with an empirical measure supported on a set of representative points $\{x_i\}_{i=1}^n$.
|
http://arxiv.org/abs/1803.10161v4
|
http://arxiv.org/pdf/1803.10161v4.pdf
|
ICML 2018 7
|
[
"Wilson Ye Chen",
"Lester Mackey",
"Jackson Gorham",
"François-Xavier Briol",
"Chris. J. Oates"
] |
[] | 2018-03-27T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2275
|
http://proceedings.mlr.press/v80/chen18f/chen18f.pdf
|
stein-points-1
| null |
[] |
https://paperswithcode.com/paper/discriminability-objective-for-training
|
1803.04376
| null | null |
Discriminability objective for training descriptive captions
|
One property that remains lacking in image captions generated by contemporary
methods is discriminability: being able to tell two images apart given the
caption for one of them. We propose a way to improve this aspect of caption
generation. By incorporating into the captioning training objective a loss
component directly related to ability (by a machine) to disambiguate
image/caption matches, we obtain systems that produce much more discriminative
captions, according to human evaluation. Remarkably, our approach leads to
improvement in other aspects of generated captions, reflected by a battery of
standard scores such as BLEU, SPICE etc. Our approach is modular and can be
applied to a variety of model/loss combinations commonly proposed for image
captioning.
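
A minimal sketch of a discriminability term, assuming a matcher that yields a score matrix sim[i, j] between image i and generated caption j with true pairs on the diagonal; the weight against the base captioning loss is a tunable choice.

```python
import torch
import torch.nn.functional as F

def discriminability_loss(sim):
    """Retrieval-style term: a generated caption is discriminative if
    its own image outranks all others (and vice versa)."""
    targets = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, targets) + F.cross_entropy(sim.t(), targets)

# Total objective (sketch): base_captioning_loss + lam * discriminability_loss(sim)
```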
|
One property that remains lacking in image captions generated by contemporary methods is discriminability: being able to tell two images apart given the caption for one of them.
|
http://arxiv.org/abs/1803.04376v2
|
http://arxiv.org/pdf/1803.04376v2.pdf
|
CVPR 2018 6
|
[
"Ruotian Luo",
"Brian Price",
"Scott Cohen",
"Gregory Shakhnarovich"
] |
[
"Caption Generation",
"Descriptive",
"Image Captioning"
] | 2018-03-12T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Luo_Discriminability_Objective_for_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Luo_Discriminability_Objective_for_CVPR_2018_paper.pdf
|
discriminability-objective-for-training-1
| null |
[] |
https://paperswithcode.com/paper/curriculum-learning-by-transfer-learning
|
1802.03796
| null | null |
Curriculum Learning by Transfer Learning: Theory and Experiments with Deep Networks
|
We provide a theoretical investigation of curriculum learning in the context of
stochastic gradient descent when optimizing the convex linear regression loss.
We prove that the rate of convergence of an ideal curriculum learning method is
monotonically increasing with the difficulty of the examples. Moreover, among
all equally difficult points, convergence is faster when using points which
incur higher loss with respect to the current hypothesis. We then analyze
curriculum learning in the context of training a CNN. We describe a method
which infers the curriculum by way of transfer learning from another network,
pre-trained on a different task. While this approach can only approximate the
ideal curriculum, we observe empirically similar behavior to the one predicted
by the theory, namely, a significant boost in convergence speed at the
beginning of training. When the task is made more difficult, improvement in
generalization performance is also observed. Finally, curriculum learning
exhibits robustness against unfavorable conditions such as excessive
regularization.
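
A minimal sketch of inferring the curriculum by transfer, assuming a `confidence` function that returns the pre-trained network's probability of the correct class for each example:

```python
import numpy as np

def transfer_curriculum(examples, labels, confidence):
    """Order training examples from easy to hard: examples the
    pre-trained network is most confident about are scheduled first."""
    scores = np.array([confidence(x, y) for x, y in zip(examples, labels)])
    return np.argsort(-scores)   # indices, easiest first
```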
| null |
http://arxiv.org/abs/1802.03796v4
|
http://arxiv.org/pdf/1802.03796v4.pdf
|
ICML 2018 7
|
[
"Daphna Weinshall",
"Gad Cohen",
"Dan Amir"
] |
[
"Learning Theory",
"Transfer Learning"
] | 2018-02-11T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2021
|
http://proceedings.mlr.press/v80/weinshall18a/weinshall18a.pdf
|
curriculum-learning-by-transfer-learning-1
| null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the mean square error between the predicted values $\\hat{y} = \\textbf{X}\\hat{\\beta}$ and actual values $y$: $\\left(y-\\textbf{X}\\beta\\right)^{2}$.\r\n\r\nWe can also define the problem in probabilistic terms as a generalized linear model (GLM) where the pdf is a Gaussian distribution, and then perform maximum likelihood estimation to estimate $\\hat{\\beta}$.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Linear_regression)",
"full_name": "Linear Regression",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.",
"name": "Generalized Linear Models",
"parent": null
},
"name": "Linear Regression",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/policy-gradient-as-a-proxy-for-dynamic
|
1806.03290
| null | null |
Policy Gradient as a Proxy for Dynamic Oracles in Constituency Parsing
|
Dynamic oracles provide strong supervision for training constituency parsers
with exploration, but must be custom defined for a given parser's transition
system. We explore using a policy gradient method as a parser-agnostic
alternative. In addition to directly optimizing for a tree-level metric such as
F1, policy gradient has the potential to reduce exposure bias by allowing
exploration during training; moreover, it does not require a dynamic oracle for
supervision. On four constituency parsers in three languages, the method
substantially outperforms static oracle likelihood training in almost all
settings. For parsers where a dynamic oracle is available (including a novel
oracle which we define for the transition system of Dyer et al. 2016), policy
gradient typically recaptures a substantial fraction of the performance gain
afforded by the dynamic oracle.
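
A minimal sketch of the parser-agnostic surrogate, assuming trees sampled from the parser's policy together with their summed transition log-probabilities and tree-level F1 scores:

```python
def policy_gradient_loss(samples, baseline):
    """REINFORCE surrogate: 'samples' is a list of (logprob, f1) pairs
    for trees drawn from the current policy; minimizing this ascends
    the expected F1 without needing a dynamic oracle."""
    return -sum((f1 - baseline) * lp for lp, f1 in samples) / len(samples)
```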
| null |
http://arxiv.org/abs/1806.03290v1
|
http://arxiv.org/pdf/1806.03290v1.pdf
|
ACL 2018 7
|
[
"Daniel Fried",
"Dan Klein"
] |
[
"Constituency Parsing"
] | 2018-06-08T00:00:00 |
https://aclanthology.org/P18-2075
|
https://aclanthology.org/P18-2075.pdf
|
policy-gradient-as-a-proxy-for-dynamic-1
| null |
[] |
https://paperswithcode.com/paper/stabiliser-states-are-efficiently-pac
|
1705.00345
| null | null |
Stabiliser states are efficiently PAC-learnable
|
The exponential scaling of the wave function is a fundamental property of
quantum systems, with far-reaching implications for our ability to process
quantum information. A problem where these are particularly relevant is quantum
state tomography. State tomography, whose objective is to obtain a full
description of a quantum system, can be analysed in the framework of
computational learning theory. In this model, quantum states have been shown to
be Probably Approximately Correct (PAC)-learnable with sample complexity linear
in the number of qubits. However, it is conjectured that in general quantum
states require an exponential amount of computation to be learned. Here, using
results from the literature on the efficient classical simulation of quantum
systems, we show that stabiliser states are efficiently PAC-learnable. Our
results solve an open problem formulated by Aaronson [Proc. R. Soc. A, 2088,
(2007)] and propose learning theory as a tool for exploring the power of
quantum computation.
| null |
http://arxiv.org/abs/1705.00345v2
|
http://arxiv.org/pdf/1705.00345v2.pdf
| null |
[
"Andrea Rocchetto"
] |
[
"Learning Theory",
"Quantum State Tomography"
] | 2017-04-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/slalom-fast-verifiable-and-private-execution
|
1806.03287
| null |
rJVorjCcKQ
|
Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware
|
As Machine Learning (ML) gets applied to security-critical or sensitive
domains, there is a growing need for integrity and privacy for outsourced ML
computations. A pragmatic solution comes from Trusted Execution Environments
(TEEs), which use hardware and software protections to isolate sensitive
computations from the untrusted software stack. However, these isolation
guarantees come at a price in performance, compared to untrusted alternatives.
This paper initiates the study of high performance execution of Deep Neural
Networks (DNNs) in TEEs by efficiently partitioning DNN computations between
trusted and untrusted devices. Building upon an efficient outsourcing scheme
for matrix multiplication, we propose Slalom, a framework that securely
delegates execution of all linear layers in a DNN from a TEE (e.g., Intel SGX
or Sanctum) to a faster, yet untrusted, co-located processor. We evaluate
Slalom by running DNNs in an Intel SGX enclave, which selectively delegates
work to an untrusted GPU. For canonical DNNs (VGG16, MobileNet and ResNet
variants) we obtain 6x to 20x increases in throughput for verifiable inference,
and 4x to 11x for verifiable and private inference.
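
The cheap verification behind outsourced linear layers can be illustrated with a Freivalds-style check (a sketch; Slalom's full protocol additionally handles privacy via precomputed blinding factors):

```python
import numpy as np

def freivalds_check(X, W, Y, trials=8):
    """Probabilistically verify Y == X @ W without redoing the matmul:
    each trial costs two matrix-vector products, so a TEE can check
    work delegated to an untrusted GPU at a fraction of the cost."""
    for _ in range(trials):
        r = np.random.randint(0, 2, size=W.shape[1]).astype(W.dtype)
        if not np.allclose(X @ (W @ r), Y @ r):
            return False          # caught an incorrect result
    return True                   # a wrong Y survives w.p. <= 2**-trials
```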
|
As Machine Learning (ML) gets applied to security-critical or sensitive domains, there is a growing need for integrity and privacy for outsourced ML computations.
|
http://arxiv.org/abs/1806.03287v2
|
http://arxiv.org/pdf/1806.03287v2.pdf
|
ICLR 2019 5
|
[
"Florian Tramèr",
"Dan Boneh"
] |
[
"GPU"
] | 2018-06-08T00:00:00 |
https://openreview.net/forum?id=rJVorjCcKQ
|
https://openreview.net/pdf?id=rJVorjCcKQ
|
slalom-fast-verifiable-and-private-execution-1
| null |
[] |
https://paperswithcode.com/paper/nonparametric-regression-with-comparisons
|
1806.03286
| null | null |
Regression with Comparisons: Escaping the Curse of Dimensionality with Ordinal Information
|
In supervised learning, we typically leverage a fully labeled dataset to design methods for function estimation or prediction. In many practical situations, we are able to obtain alternative feedback, possibly at a low cost. A broad goal is to understand the usefulness of, and to design algorithms to exploit, this alternative feedback. In this paper, we consider a semi-supervised regression setting, where we obtain additional ordinal (or comparison) information for the unlabeled samples. We consider ordinal feedback of varying qualities where we have either a perfect ordering of the samples, a noisy ordering of the samples or noisy pairwise comparisons between the samples. We provide a precise quantification of the usefulness of these types of ordinal feedback in both nonparametric and linear regression, showing that in many cases it is possible to accurately estimate an underlying function with a very small labeled set, effectively \emph{escaping the curse of dimensionality}. We also present lower bounds, that establish fundamental limits for the task and show that our algorithms are optimal in a variety of settings. Finally, we present extensive experiments on new datasets that demonstrate the efficacy and practicality of our algorithms and investigate their robustness to various sources of noise and model misspecification.
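
A minimal sketch of exploiting a (noiseless) ranking of the samples, assuming the labeled samples' positions in that ranking are known; scikit-learn's isotonic regression stands in for the monotone fitting step.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def rank_based_estimate(labeled_ranks, y, query_ranks):
    """Fit a monotone curve through the few labeled ranks, then read
    off predictions for unlabeled samples by their rank alone; the
    ordinal information replaces the high-dimensional covariates."""
    iso = IsotonicRegression(out_of_bounds="clip")
    iso.fit(np.asarray(labeled_ranks, float), np.asarray(y, float))
    return iso.predict(np.asarray(query_ranks, float))
```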
| null |
https://arxiv.org/abs/1806.03286v2
|
https://arxiv.org/pdf/1806.03286v2.pdf
|
ICML 2018 7
|
[
"Yichong Xu",
"Sivaraman Balakrishnan",
"Aarti Singh",
"Artur Dubrawski"
] |
[
"regression"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/blind-justice-fairness-with-encrypted
|
1806.03281
| null | null |
Blind Justice: Fairness with Encrypted Sensitive Attributes
|
Recent work has explored how to train machine learning models which do not
discriminate against any subgroup of the population as determined by sensitive
attributes such as gender or race. To avoid disparate treatment, sensitive
attributes should not be considered. On the other hand, in order to avoid
disparate impact, sensitive attributes must be examined, e.g., in order to
learn a fair model, or to check if a given model is fair. We introduce methods
from secure multi-party computation which allow us to avoid both. By encrypting
sensitive attributes, we show how an outcome-based fair model may be learned,
checked, or have its outputs verified and held to account, without users
revealing their sensitive attributes.
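
A minimal sketch of one MPC building block, additive secret sharing over a public prime field: each party holds one share of the integer-encoded sensitive attribute, and no strict subset of shares reveals anything about it.

```python
import secrets

P = 2**61 - 1   # public prime modulus

def share(value, n=3):
    """Split an integer-encoded attribute into n additive shares."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

def reconstruct(parts):
    """Only all n shares together recover the attribute."""
    return sum(parts) % P
```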
|
Recent work has explored how to train machine learning models which do not discriminate against any subgroup of the population as determined by sensitive attributes such as gender or race.
|
http://arxiv.org/abs/1806.03281v1
|
http://arxiv.org/pdf/1806.03281v1.pdf
|
ICML 2018 7
|
[
"Niki Kilbertus",
"Adrià Gascón",
"Matt J. Kusner",
"Michael Veale",
"Krishna P. Gummadi",
"Adrian Weller"
] |
[
"Fairness"
] | 2018-06-08T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1906
|
http://proceedings.mlr.press/v80/kilbertus18a/kilbertus18a.pdf
|
blind-justice-fairness-with-encrypted-1
| null |
[] |
https://paperswithcode.com/paper/multilingual-neural-machine-translation-with
|
1806.03280
| null | null |
Multilingual Neural Machine Translation with Task-Specific Attention
|
Multilingual machine translation addresses the task of translating between
multiple source and target languages. We propose task-specific attention
models, a simple but effective technique for improving the quality of
sequence-to-sequence neural multilingual translation. Our approach seeks to
retain as much of the parameter sharing generalization of NMT models as
possible, while still allowing for language-specific specialization of the
attention model to a particular language-pair or task. Our experiments on four
languages of the Europarl corpus show that using a target-specific model of
attention provides consistent gains in translation quality for all possible
translation directions, compared to a model in which all parameters are shared.
We observe improved translation quality even in the (extreme) low-resource
zero-shot translation directions for which the model never saw explicitly
paired parallel data.
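
A minimal PyTorch sketch of target-specific attention with otherwise shared parameters; `nn.MultiheadAttention` stands in here for the paper's recurrent attention mechanism.

```python
import torch.nn as nn

class TaskSpecificAttention(nn.Module):
    """One attention module per target language; encoder/decoder
    weights outside this module remain shared across all pairs."""
    def __init__(self, dim, languages):
        super().__init__()
        self.attn = nn.ModuleDict(
            {lang: nn.MultiheadAttention(dim, num_heads=1)
             for lang in languages})

    def forward(self, lang, query, keys, values):
        out, _ = self.attn[lang](query, keys, values)
        return out
```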
| null |
http://arxiv.org/abs/1806.03280v1
|
http://arxiv.org/pdf/1806.03280v1.pdf
|
COLING 2018 8
|
[
"Graeme Blackwood",
"Miguel Ballesteros",
"Todd Ward"
] |
[
"Machine Translation",
"NMT",
"Translation"
] | 2018-06-08T00:00:00 |
https://aclanthology.org/C18-1263
|
https://aclanthology.org/C18-1263.pdf
|
multilingual-neural-machine-translation-with-7
| null |
[] |
https://paperswithcode.com/paper/towards-dependability-metrics-for-neural
|
1806.02338
| null | null |
Towards Dependability Metrics for Neural Networks
|
Artificial neural networks (NN) are instrumental in realizing
highly-automated driving functionality. An overarching challenge is to identify
best safety engineering practices for NN and other learning-enabled components.
In particular, there is an urgent need for an adequate set of metrics for
measuring all-important NN dependability attributes. We address this challenge
by proposing a number of NN-specific and efficiently computable metrics for
measuring NN dependability attributes including robustness, interpretability,
completeness, and correctness.
| null |
http://arxiv.org/abs/1806.02338v2
|
http://arxiv.org/pdf/1806.02338v2.pdf
| null |
[
"Chih-Hong Cheng",
"Georg Nührenberg",
"Chung-Hao Huang",
"Harald Ruess",
"Hirotoshi Yasuoka"
] |
[] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/cuisinenet-food-attributes-classification
|
1805.12081
| null | null |
CuisineNet: Food Attributes Classification using Multi-scale Convolution Network
|
Diversity of food and its attributes represents the culinary habits of
peoples from different countries. Thus, this paper addresses the problem of
identifying the food culture of people around the world and its flavor by
classifying two main food attributes, cuisine and flavor. A deep learning model
based on multi-scale convolutional networks is proposed for extracting more
accurate features from input images. The aggregation of multi-scale convolution
layers with different kernel sizes is also used for weighting the features
extracted at different scales. In addition, a joint loss function based on
Negative Log Likelihood (NLL) is used to fit the model probabilities to the
multi-labeled classes in this multi-modal classification task. Furthermore, this work
provides a new dataset for food attributes, so-called Yummly48K, extracted from
the popular food website, Yummly. Our model is assessed on the constructed
Yummly48K dataset. The experimental results show that our proposed method
yields 65% and 62% average F1 scores on the validation and test sets,
outperforming the state-of-the-art models.
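
A minimal sketch of the joint loss over the two attribute heads; the task weight `w` is an illustrative choice, not the paper's setting.

```python
import torch.nn.functional as F

def joint_nll(cuisine_logits, flavor_logits, cuisine_y, flavor_y, w=0.5):
    """Weighted sum of per-attribute negative log-likelihoods."""
    return (w * F.cross_entropy(cuisine_logits, cuisine_y)
            + (1.0 - w) * F.cross_entropy(flavor_logits, flavor_y))
```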
| null |
http://arxiv.org/abs/1805.12081v2
|
http://arxiv.org/pdf/1805.12081v2.pdf
| null |
[
"Md. Mostafa Kamal Sarker",
"Mohammed Jabreel",
"Hatem A. Rashwan",
"Syeda Furruka Banu",
"Antonio Moreno",
"Petia Radeva",
"Domenec Puig"
] |
[
"Classification",
"Cultural Vocal Bursts Intensity Prediction",
"Diversity",
"General Classification",
"Multi-modal Classification"
] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/orbital-petri-nets-a-novel-petri-net-approach
|
1806.03267
| null | null |
Orbital Petri Nets: A Novel Petri Net Approach
|
Petri Nets are a very interesting tool for studying and simulating different
behaviors of information systems. They can be used in different applications
based on the appropriate class of Petri Nets, whether classical, colored
or timed Petri Nets. In this paper we introduce a new approach to Petri Nets
called orbital Petri Nets (OPN) for studying orbital rotating systems
within a specific domain. The study investigates and analyzes OPN,
highlighting the space debris collision problem as a case study. The
mathematical investigation of two OPN models proved that the space debris
collision problem can be prevented based on the new method of firing sequences
in OPN. As future work, new smart algorithms can be implemented and simulated
with orbital Petri Nets to mitigate the space debris collision problem.
| null |
http://arxiv.org/abs/1806.03267v1
|
http://arxiv.org/pdf/1806.03267v1.pdf
| null |
[
"Mohamed Yorky",
"Aboul Ella Hassanien"
] |
[] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sheep-identity-recognition-age-and-weight
|
1806.04017
| null | null |
Sheep identity recognition, age and weight estimation datasets
|
Increased interest of scientists, producers and consumers in sheep
identification has been stimulated by the dramatic increase in population and
the urge to increase productivity. The world population is expected to exceed
9.6 billion by 2050. For this reason, awareness is raised towards the necessity
of effective livestock production. Sheep are considered one of the main
food resources. Most current research is directed towards developing real-time
applications that facilitate sheep identification for breed management and
gathering related information like weight and age. Weight and age are key
metrics in assessing the effectiveness of production. For this reason, visual
analysis has recently proven significantly more successful than other approaches.
Visual analysis techniques need enough images for testing and study completion.
For this reason, collecting a sheep image database is a vital step towards
fulfilling such an objective. We provide here datasets for testing and comparing
such algorithms, which are under development. Our collected dataset consists of
416 color images of different features of sheep in different postures. Images
were collected from fifty-two sheep ranging in age from three months to six
years. For each sheep, two images were captured for both sides of the body, two
images for both sides of the face, one image from the top view, one image of
the hip and one image of the teeth. The collected images cover different
illumination conditions, quality levels and angles of rotation. The allocated
dataset can be used to test sheep identification, weight estimation, and age
detection algorithms. Such algorithms are crucial for disease management,
animal assessment and ownership.
| null |
http://arxiv.org/abs/1806.04017v1
|
http://arxiv.org/pdf/1806.04017v1.pdf
| null |
[
"Aya Salama Abdelhady",
"Aboul Ella Hassanenin",
"Aly Fahmy"
] |
[
"Management"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/patchfcn-for-intracranial-hemorrhage
|
1806.03265
| null | null |
PatchFCN for Intracranial Hemorrhage Detection
|
This paper studies the problem of detecting and segmenting acute intracranial
hemorrhage on head computed tomography (CT) scans. We propose to solve both
tasks as a semantic segmentation problem using a patch-based fully
convolutional network (PatchFCN). This formulation allows us to accurately
localize hemorrhages while bypassing the complexity of object detection. Our
system demonstrates competitive performance with a human expert and the
state-of-the-art on classification tasks (0.976, 0.966 AUC of ROC on
retrospective and prospective test sets) and on segmentation tasks (0.785 pixel
AP, 0.766 Dice score), while using much less data and a simpler system. In
addition, we conduct a series of controlled experiments to understand "why"
PatchFCN outperforms standard FCN. Our studies show that PatchFCN finds a good
trade-off between batch diversity and the amount of context during training.
These findings may also apply to other medical segmentation tasks.
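
A minimal sketch of the patch-based training input, assuming 2-D slices; the patch size and uniform sampling policy are illustrative, and the trade-off involved is exactly the batch-diversity-versus-context balance discussed above.

```python
import numpy as np

def sample_patches(image, mask, patch, n):
    """Crop n random patches from an image/mask pair so each batch
    mixes many locations while capping per-example context."""
    H, W = image.shape[-2:]
    crops = []
    for _ in range(n):
        i = np.random.randint(0, H - patch + 1)
        j = np.random.randint(0, W - patch + 1)
        crops.append((image[..., i:i + patch, j:j + patch],
                      mask[..., i:i + patch, j:j + patch]))
    return crops
```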
| null |
http://arxiv.org/abs/1806.03265v2
|
http://arxiv.org/pdf/1806.03265v2.pdf
| null |
[
"Wei-cheng Kuo",
"Christian Häne",
"Esther Yuh",
"Pratik Mukherjee",
"Jitendra Malik"
] |
[
"Computed Tomography (CT)",
"Diversity",
"object-detection",
"Object Detection",
"Segmentation",
"Semantic Segmentation"
] | 2018-06-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/Jackey9797/FCN",
"description": "**Fully Convolutional Networks**, or **FCNs**, are an architecture used mainly for semantic segmentation. They employ solely locally connected layers, such as [convolution](https://paperswithcode.com/method/convolution), pooling and upsampling. Avoiding the use of dense layers means less parameters (making the networks faster to train). It also means an FCN can work for variable image sizes given all connections are local.\r\n\r\nThe network consists of a downsampling path, used to extract and interpret the context, and an upsampling path, which allows for localization. \r\n\r\nFCNs also employ skip connections to recover the fine-grained spatial information lost in the downsampling path.",
"full_name": "Fully Convolutional Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "FCN",
"source_title": "Fully Convolutional Networks for Semantic Segmentation",
"source_url": "http://arxiv.org/abs/1605.06211v1"
}
] |
https://paperswithcode.com/paper/information-based-inference-for-singular
|
1506.05855
| null | null |
Information-based inference for singular models and finite sample sizes: A frequentist information criterion
|
In the information-based paradigm of inference, model selection is performed
by selecting the candidate model with the best estimated predictive
performance. The success of this approach depends on the accuracy of the
estimate of the predictive complexity. In the large-sample-size limit of a
regular model, the predictive performance is well estimated by the Akaike
Information Criterion (AIC). However, this approximation can either
significantly under or over-estimating the complexity in a wide range of
important applications where models are either non-regular or
finite-sample-size corrections are significant. We introduce an improved
approximation for the complexity that is used to define a new information
criterion: the Frequentist Information Criterion (QIC). QIC extends the
applicability of information-based inference to the finite-sample-size regime
of regular models and to singular models. We demonstrate the power and the
comparative advantage of QIC in a number of example analyses.
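
For reference, a short sketch of the AIC baseline that QIC refines; QIC replaces the fixed 2k penalty with an estimated predictive complexity, whose construction is the paper's contribution and is not reproduced here.

```python
def aic(log_likelihood, k):
    """Akaike Information Criterion: large-sample, regular-model
    estimate of predictive performance with parameter count k."""
    return -2.0 * log_likelihood + 2.0 * k
```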
| null |
http://arxiv.org/abs/1506.05855v5
|
http://arxiv.org/pdf/1506.05855v5.pdf
| null |
[
"Colin H. LaMont",
"Paul A. Wiggins"
] |
[
"Model Selection"
] | 2015-06-19T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-learning-with-convolutional-neural
|
1703.05051
| null | null |
Deep learning with convolutional neural networks for EEG decoding and visualization
|
PLEASE READ AND CITE THE REVISED VERSION at Human Brain Mapping:
http://onlinelibrary.wiley.com/doi/10.1002/hbm.23730/full
Code available here: https://github.com/robintibor/braindecode
|
PLEASE READ AND CITE THE REVISED VERSION at Human Brain Mapping: http://onlinelibrary.wiley.com/doi/10.1002/hbm.23730/full Code available here: https://github.com/robintibor/braindecode
|
http://arxiv.org/abs/1703.05051v5
|
http://arxiv.org/pdf/1703.05051v5.pdf
| null |
[
"Robin Tibor Schirrmeister",
"Jost Tobias Springenberg",
"Lukas Dominique Josef Fiederer",
"Martin Glasstetter",
"Katharina Eggensperger",
"Michael Tangermann",
"Frank Hutter",
"Wolfram Burgard",
"Tonio Ball"
] |
[
"EEG",
"Eeg Decoding",
"Electroencephalogram (EEG)"
] | 2017-03-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/automatic-view-planning-with-multi-scale-deep
|
1806.03228
| null | null |
Automatic View Planning with Multi-scale Deep Reinforcement Learning Agents
|
We propose a fully automatic method to find standardized view planes in 3D
image acquisitions. Standard view images are important in clinical practice as
they provide a means to perform biometric measurements from similar anatomical
regions. These views are often constrained to the native orientation of a 3D
image acquisition. Navigating through target anatomy to find the required view
plane is tedious and operator-dependent. For this task, we employ a multi-scale
reinforcement learning (RL) agent framework and extensively evaluate several
Deep Q-Network (DQN) based strategies. RL enables a natural learning paradigm
by interaction with the environment, which can be used to mimic experienced
operators. We evaluate our results using the distance between the anatomical
landmarks and detected planes, and the angles between their normal vector and
target. The proposed algorithm is assessed on the mid-sagittal and
anterior-posterior commissure planes of brain MRI, and the 4-chamber long-axis
plane commonly used in cardiac MRI, achieving accuracy of 1.53mm, 1.98mm and
4.84mm, respectively.
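
A minimal sketch of an action space for the plane search, assuming a plane parameterized as ax + by + cz = d with discrete increment/decrement actions and a step size that shrinks across scales; the exact parameterization is an assumption.

```python
import numpy as np

def apply_action(plane, action, step):
    """Nudge one plane coefficient up or down; the multi-scale agent
    re-runs the search with a smaller 'step' once it converges."""
    plane = np.asarray(plane, float).copy()   # (a, b, c, d)
    idx, sign = action // 2, 1.0 if action % 2 == 0 else -1.0
    plane[idx] += sign * step
    return plane
```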
| null |
http://arxiv.org/abs/1806.03228v1
|
http://arxiv.org/pdf/1806.03228v1.pdf
| null |
[
"Amir Alansary",
"Loic Le Folgoc",
"Ghislain Vaillant",
"Ozan Oktay",
"Yuanwei Li",
"Wenjia Bai",
"Jonathan Passerat-Palmbach",
"Ricardo Guerrero",
"Konstantinos Kamnitsas",
"Benjamin Hou",
"Steven McDonagh",
"Ben Glocker",
"Bernhard Kainz",
"Daniel Rueckert"
] |
[
"Anatomy",
"Deep Reinforcement Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |