Dataset columns, in table order (⌀ marks a nullable column):

- pdf: string, 49 to 199 chars, ⌀
- aff: string, 1 to 1.36k chars, ⌀
- year: string, 19 classes
- technical_novelty_avg: float64, 0 to 4, ⌀
- video: string, 21 to 47 chars, ⌀
- doi: string, 31 to 63 chars, ⌀
- presentation_avg: float64, 0 to 4, ⌀
- proceeding: string, 43 to 129 chars, ⌀
- presentation: string, 796 classes
- sess: string, 576 classes
- technical_novelty: string, 700 classes
- arxiv: string, 10 to 16 chars, ⌀
- author: string, 1 to 1.96k chars, ⌀
- site: string, 37 to 191 chars, ⌀
- keywords: string, 2 to 582 chars, ⌀
- oa: string, 86 to 198 chars, ⌀
- empirical_novelty_avg: float64, 0 to 4, ⌀
- poster: string, 57 to 95 chars, ⌀
- openreview: string, 41 to 45 chars, ⌀
- conference: string, 11 classes
- corr_rating_confidence: float64, -1 to 1, ⌀
- corr_rating_correctness: float64, -1 to 1, ⌀
- project: string, 1 to 162 chars, ⌀
- track: string, 3 classes
- rating_avg: float64, 0 to 10, ⌀
- rating: string, 1 to 17 chars, ⌀
- correctness: string, 809 classes
- slides: string, 32 to 41 chars, ⌀
- title: string, 2 to 192 chars, ⌀
- github: string, 3 to 165 chars, ⌀
- authors: string, 7 to 161 chars, ⌀
- correctness_avg: float64, 0 to 5, ⌀
- confidence_avg: float64, 0 to 5, ⌀
- status: string, 22 classes
- confidence: string, 1 to 17 chars, ⌀
- empirical_novelty: string, 763 classes

Data preview (each record lists its 36 fields separated by "|", in the column order above):

pdf | aff | year | technical_novelty_avg | video | doi | presentation_avg | proceeding | presentation | sess | technical_novelty | arxiv | author | site | keywords | oa | empirical_novelty_avg | poster | openreview | conference | corr_rating_confidence | corr_rating_correctness | project | track | rating_avg | rating | correctness | slides | title | github | authors | correctness_avg | confidence_avg | status | confidence | empirical_novelty |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
multi-entity sequential data;hidden markov models
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 3 |
3;3;3
| null | null |
SpaMHMM: Sparse Mixture of Hidden Markov Models for Graph Connected Entities
| null | null | 0 | 4 |
Withdraw
|
4;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Statistical Relational Learning;Knowledge Graphs;Knowledge Extraction;Latent Feature Models;Variational Inference.
| null | 0 | null | null |
iclr
| 0.5 | 0 | null |
main
| 4.666667 |
4;5;5
| null | null |
Neural Variational Inference For Embedding Knowledge Graphs
|
https://github.com/anonymousauthors/neural_variational_inference_knowledge_graphs
| null | 0 | 3.666667 |
Reject
|
3;5;3
| null |
null |
Microsoft Research, Redmond, WA 98052; Department of Computer Science, Stanford University, Stanford, CA 94305
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Daniel Selsam, Matthew Lamm, Benedikt Bünz, Percy Liang, Leonardo Moura, David L Dill
|
https://iclr.cc/virtual/2019/poster/726
|
sat;search;graph neural network;theorem proving;proof
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 7 |
7;7;7
| null | null |
Learning a SAT Solver from Single-Bit Supervision
| null | null | 0 | 3.333333 |
Poster
|
3;4;3
| null |
null |
Department of Electronics and Computer Science, University of Southampton
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Yan Zhang, Jonathon Hare, Adam Prugel-Bennett
|
https://iclr.cc/virtual/2019/poster/692
|
sets;representation learning;permutation invariance
| null | 0 | null | null |
iclr
| 1 | 0 | null |
main
| 5 |
3;6;6
| null | null |
Learning Representations of Sets through Optimized Permutations
| null | null | 0 | 3.333333 |
Poster
|
2;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Time series;feature engineering;period detection;machine learning
| null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 3.666667 |
3;3;5
| null | null |
A fully automated periodicity detection in time series
| null | null | 0 | 2.333333 |
Reject
|
2;3;2
| null |
null |
IBM T.J. Watson Research Center, Yorktown Heights, NY 10598
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Chun-Fu (Richard) Chen, Quanfu Fan, Neil Mallinar, Tom Sercu, Rogerio Feris
|
https://iclr.cc/virtual/2019/poster/856
|
CNN;multi-scale;efficiency;object recognition;speech recognition
| null | 0 | null | null |
iclr
| -1 | 0 | null |
main
| 6.666667 |
6;7;7
| null | null |
Big-Little Net: An Efficient Multi-Scale Feature Representation for Visual and Speech Recognition
| null | null | 0 | 4.333333 |
Poster
|
5;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Nesterov's method;convex optimization;first-order methods;stochastic gradient descent;differential equations;Liapunov's method
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
4;5;6
| null | null |
Nesterov's method is the discretization of a differential equation with Hessian damping
| null | null | 0 | 4.666667 |
Withdraw
|
5;4;5
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
reinforcement learning;dynamic pricing;e-commerce;revenue management;field experiment
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4 |
4;4;4
| null | null |
Dynamic Pricing on E-commerce Platform with Deep Reinforcement Learning
| null | null | 0 | 4 |
Reject
|
3;5;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Bayesian Optimization;Generative Models
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4 |
3;4;5
| null | null |
Constrained Bayesian Optimization for Automatic Chemical Design
| null | null | 0 | 3.666667 |
Reject
|
4;3;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
sparse convolutional neural networks;regularized dual averaging
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 3 |
3;3;3
| null | null |
iRDA Method for Sparse Convolutional Neural Networks
| null | null | 0 | 4.666667 |
Reject
|
5;4;5
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
model comparison;semantic similarity;STS;von Mises-Fisher;Information Theoretic Criteria
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
5;5;5
| null | null |
Model Comparison for Semantic Grouping
| null | null | 0 | 2.333333 |
Reject
|
1;3;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Imitation Learning;Deep Learning
| null | 0 | null | null |
iclr
| -1 | 0 | null |
main
| 4.5 |
4;4;5;5
| null | null |
One-Shot High-Fidelity Imitation: Training Large-Scale Deep Nets with RL
| null | null | 0 | 3.5 |
Reject
|
4;4;3;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
intuitive physics;probabilistic programming;computational cognitive science;probabilistic models
| null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 3 |
2;3;4
| null | null |
Probabilistic Program Induction for Intuitive Physics Game Play
| null | null | 0 | 3.333333 |
Reject
|
4;4;2
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
image;captioning;captions;vision;language
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
5;5;5
| null | null |
Engaging Image Captioning Via Personality
| null | null | 0 | 5 |
Withdraw
|
5;5;5
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
4;5;6
| null | null |
Convolutional Neural Networks combined with Runge-Kutta Methods
| null | null | 0 | 3.333333 |
Reject
|
3;4;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
joint distribution matching;image-to-image translation;video-to-video synthesis;Wasserstein distance
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
4;5;6
| null | null |
Learning Joint Wasserstein Auto-Encoders for Joint Distribution Matching
| null | null | 0 | 4 |
Reject
|
4;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null |
Jean Alaux-Lorain, Edouard Grave, marco cuturi, Armand Joulin
|
https://iclr.cc/virtual/2019/poster/999
| null | null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 6 |
5;6;7
| null | null |
Unsupervised Hyper-alignment for Multilingual Word Embeddings
| null | null | 0 | 3.333333 |
Poster
|
3;4;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 0.5 | 0 | null |
main
| 5.333333 |
5;5;6
| null | null |
Generative Adversarial Self-Imitation Learning
| null | null | 0 | 4.666667 |
Reject
|
4;5;5
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Memorization;Generalization;ReLU;Non-negative matrix factorization
| null | 0 | null | null |
iclr
| 0.970725 | 0 | null |
main
| 6.666667 |
5;6;9
| null | null |
Detecting Memorization in ReLU Networks
| null | null | 0 | 4.333333 |
Reject
|
4;4;5
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
recurrent neural networks;partial differential equation;domain decomposition;consistency constraints;advection;diffusion
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 0 | null | null | null |
Scaling up Deep Learning for PDE-based Models
| null | null | 0 | 0 |
Withdraw
| null | null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Network Embedding;Graph Convolutional Networks;Deep Learning
| null | 0 | null | null |
iclr
| -0.327327 | 0 | null |
main
| 5.666667 |
4;6;7
| null | null |
MILE: A Multi-Level Framework for Scalable Graph Embedding
| null | null | 0 | 4 |
Reject
|
4;5;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 0.5 | 0 | null |
main
| 3.666667 |
3;4;4
| null | null |
Image Score: how to select useful samples
| null | null | 0 | 3.333333 |
Reject
|
3;4;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Knowledge-guided learning;Human advice;Column Networks;Knowledge-based relational deep model;Collective classification
| null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 5 |
4;5;6
| null | null |
Human-Guided Column Networks: Augmenting Deep Learning with Advice
| null | null | 0 | 4 |
Reject
|
4;5;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
graph;pooling;unpooling;U-Net
| null | 0 | null | null |
iclr
| 0.5 | 0 | null |
main
| 6 |
4;7;7
| null | null |
Graph U-Net
| null | null | 0 | 4.333333 |
Reject
|
4;4;5
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Language Modeling;Self-Attention
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5.333333 |
4;6;6
| null | null |
Transformer-XL: Language Modeling with Longer-Term Dependency
| null | null | 0 | 4 |
Reject
|
4;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
SGD;learning rate;step size schedules;stochastic approximation;stochastic optimization;deep learning;non-convex optimization;stochastic gradient descent
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5.333333 |
4;6;6
| null | null |
Rethinking learning rate schedules for stochastic optimization
| null | null | 0 | 4 |
Reject
|
4;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
representation learning;information theory;information bottleneck;thermodynamics;predictive information
| null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 5 |
3;5;7
| null | null |
TherML: The Thermodynamics of Machine Learning
| null | null | 0 | 3.333333 |
Reject
|
4;3;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
information theory;deep learning;generalization;information bottleneck;variational inference;approximate inference
| null | 0 | null | null |
iclr
| -1 | 0 | null |
main
| 5 |
3;5;7
| null | null |
Noisy Information Bottlenecks for Generalization
| null | null | 0 | 3 |
Reject
|
4;3;2
| null |
null |
University of Washington; Carnegie Mellon University; Allen Institute for AI; The Chinese University of Hong Kong
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Wei Yang, Xiaolong Wang, Ali Farhadi, Abhinav Gupta, Roozbeh Mottaghi
|
https://iclr.cc/virtual/2019/poster/820
|
Visual Navigation;Scene Prior;Knowledge Graph;Graph Convolution Networks;Deep Reinforcement Learning
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 7 |
7;7;7
| null | null |
Visual Semantic Navigation using Scene Priors
| null | null | 0 | 2.666667 |
Poster
|
4;3;1
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
generative models;GAN;VAE;Real NVP
| null | 0 | null | null |
iclr
| 0.5 | 0 |
Available at: coming soon.
|
main
| 5.333333 |
5;5;6
| null | null |
GenEval: A Benchmark Suite for Evaluating Generative Models
| null | null | 0 | 3.666667 |
Reject
|
4;3;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Distributional Semantics;word embeddings;cnns;interpretability
| null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 3.666667 |
3;4;4
| null | null |
Using Word Embeddings to Explore the Learned Representations of Convolutional Neural Networks
| null | null | 0 | 3.333333 |
Reject
|
4;2;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 4.666667 |
4;5;5
| null | null |
Inference of unobserved event streams with neural Hawkes particle smoothing
| null | null | 0 | 4 |
Reject
|
5;3;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Graph neural networks;transformer;attention
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 6 |
6;6;6
| null | null |
Graph Transformer
| null | null | 0 | 4.333333 |
Reject
|
3;5;5
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
theory;length map;initialization
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4.333333 |
4;4;5
| null | null |
On the effect of the activation function on the distribution of hidden nodes in a deep network
| null | null | 0 | 3 |
Reject
|
3;3;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
meta-learning;few-shot learning;visual segmentation
| null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 4.333333 |
3;3;7
| null | null |
Meta-Learning to Guide Segmentation
| null | null | 0 | 4.333333 |
Reject
|
5;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
easy examples;hard example;CNN
| null | 0 | null | null |
iclr
| 1 | 0 | null |
main
| 3.333333 |
3;3;4
| null | null |
Empirical Study of Easy and Hard Examples in CNN Training
| null | null | 0 | 4.333333 |
Reject
|
4;4;5
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
latent-tree-learning;unsupervised-parsing
| null | 0 | null | null |
iclr
| -0.693375 | 0 | null |
main
| 4.333333 |
2;5;6
| null | null |
Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders
| null | null | 0 | 3.666667 |
Withdraw
|
4;4;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Optimization;Multiple constraints;Manifold
| null | 0 | null | null |
iclr
| -0.981981 | 0 | null |
main
| 3.666667 |
1;3;7
| null | null |
Optimization on Multiple Manifolds
| null | null | 0 | 4 |
Reject
|
5;4;3
| null |
null |
Dept. of Computer Science, Princeton University, Princeton, NJ, USA; Dept. of Electrical and Computer Engineering, University of Minnesota – Twin Cities, USA
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Sirisha Rambhatla, Xingguo Li, Jarvis Haupt
|
https://iclr.cc/virtual/2019/poster/847
|
dictionary learning;provable dictionary learning;online dictionary learning;sparse coding;support recovery;iterative hard thresholding;matrix factorization;neural architectures;neural networks;noodl
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 6.666667 |
6;7;7
| null | null |
NOODL: Provable Online Dictionary Learning and Sparse Coding
| null | null | 0 | 2 |
Poster
|
2;2;2
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Convolutional Neural Networks;Deformation Stability;Pooling;Transformation Invariance
| null | 0 | null | null |
iclr
| -0.333333 | 0 | null |
main
| 4.75 |
4;5;5;5
| null | null |
Pooling Is Neither Necessary nor Sufficient for Appropriate Deformation Stability in CNNs
| null | null | 0 | 3.25 |
Reject
|
4;2;2;5
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Anomaly Detection;Active Learning;Unsupervised Learning
| null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 4 |
3;4;5
| null | null |
UaiNets: From Unsupervised to Active Deep Anomaly Detection
| null | null | 0 | 3.333333 |
Reject
|
4;4;2
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
recurrent network;finite state machines;state-regularized;interpretability and explainability
| null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 5.666667 |
5;6;6
| null | null |
State-Regularized Recurrent Networks
| null | null | 0 | 4.666667 |
Reject
|
5;5;4
| null |
null |
California Institute of Technology, Pasadena, CA 91125
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Navid Azizan, Babak Hassibi
|
https://iclr.cc/virtual/2019/poster/688
|
optimization;stochastic gradient descent;mirror descent;implicit regularization;deep learning theory
| null | 0 | null | null |
iclr
| 1 | 0 | null |
main
| 5.666667 |
5;5;7
| null | null |
Stochastic Gradient/Mirror Descent: Minimax Optimality and Implicit Regularization
| null | null | 0 | 3.333333 |
Poster
|
3;3;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
learning dynamics;gradient descent;classification;optimization;cross-entropy;hinge loss;implicit regularization;gradient starvation
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
5;5;5
| null | null |
Convergence Properties of Deep Neural Networks on Separable Data
| null | null | 0 | 3.666667 |
Reject
|
3;4;4
| null |
null |
California Institute of Technology and Amazon AI; Carnegie Mellon University; Carnegie Mellon University and Amazon AI
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Peiyun Hu, Zachary Lipton, Anima Anandkumar, Deva Ramanan
|
https://iclr.cc/virtual/2019/poster/794
|
Active Learning;Learning from Partial Feedback
| null | 0 | null | null |
iclr
| 1 | 0 | null |
main
| 6.666667 |
6;7;7
| null | null |
Active Learning with Partial Feedback
| null | null | 0 | 3.666667 |
Poster
|
3;4;4
| null |
null |
Department of Computer Science, University of Illinois at Urbana-Champaign
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Ziwei Ji, Matus Telgarsky
|
https://iclr.cc/virtual/2019/poster/951
|
implicit regularization;alignment of layers;deep linear networks;gradient descent;separable data
| null | 0 | null | null |
iclr
| -0.755929 | 0 | null |
main
| 7.333333 |
6;7;9
| null | null |
Gradient descent aligns the layers of deep linear networks
| null | null | 0 | 4.333333 |
Poster
|
5;4;4
| null |
null |
Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology; Robotics and Big Data Laboratory, University of Haifa
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus
|
https://iclr.cc/virtual/2019/poster/868
|
coresets;neural network compression;generalization bounds;matrix sparsification
| null | 0 | null | null |
iclr
| 0.5 | 0 | null |
main
| 6.333333 |
6;6;7
| null | null |
Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds
| null | null | 0 | 3.666667 |
Poster
|
4;3;4
| null |
null |
Paper under double-blind review
|
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Reinforcement Learning;Imitation Learning;Atari;A3C;GA3C
| null | 0 | null | null |
iclr
| 0.866025 | 0 | null |
main
| 5.333333 |
5;5;6
| null | null |
Mimicking actions is a good strategy for beginners: Fast Reinforcement Learning with Expert Action Sequences
| null | null | 0 | 3 |
Reject
|
2;3;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Generative Adversarial Network;Divergence
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
4;5;6
| null | null |
Spread Divergences
| null | null | 0 | 4 |
Reject
|
4;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
data poisoning;backdoor attacks;clean labels;adversarial examples;generative adversarial networks
| null | 0 | null | null |
iclr
| -0.944911 | 0 | null |
main
| 5.666667 |
4;6;7
| null | null |
Clean-Label Backdoor Attacks
| null | null | 0 | 2.666667 |
Reject
|
4;2;2
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| -1 | 0 | null |
main
| 3.666667 |
3;4;4
| null | null |
NA
| null | null | 0 | 4.333333 |
Withdraw
|
5;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Negative Transfer;Adversarial Learning
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4 |
2;4;6
| null | null |
REVISTING NEGATIVE TRANSFER USING ADVERSARIAL LEARNING
| null | null | 0 | 4 |
Reject
|
4;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
stochastic optimization;neural network;preconditioned accelerated stochastic gradient descent
| null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 4.333333 |
4;4;5
| null | null |
A preconditioned accelerated stochastic gradient descent algorithm
| null | null | 0 | 3.666667 |
Reject
|
5;3;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Musical Timbre;Instrument Translation;Domain Translation;Style Transfer;Sound Synthesis;Musical Information;Deep Learning;Variational Auto-Encoder;Generative Models;Network Conditioning
| null | 0 | null | null |
iclr
| -1 | 0 | null |
main
| 4.333333 |
3;5;5
| null | null |
Modulated Variational Auto-Encoders for Many-to-Many Musical Timbre Transfer
|
https://github.com/anonymous124/iclr2019MoVE
| null | 0 | 3.333333 |
Reject
|
4;3;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
image to image translation;image translation;exemplar;mutlimodal
| null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 5.333333 |
5;5;6
| null | null |
Local Image-to-Image Translation via Pixel-wise Highway Adaptive Instance Normalization
|
https://github.com/AnonymousIclrAuthor/Highway-Adaptive-Instance-Normalization
| null | 0 | 4.333333 |
Reject
|
5;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Genetic Evolutionary Network;Deep Learning;Genetic Algorithm;Ensemble Learning;Representation Learning
| null | 0 | null | null |
iclr
| -0.188982 | 0 | null |
main
| 4.666667 |
4;5;5
| null | null |
SEGEN: SAMPLE-ENSEMBLE GENETIC EVOLUTIONARY NETWORK MODEL
| null | null | 0 | 3.666667 |
Reject
|
4;5;2
| null |
null |
Saarland University, Germany; University of Tübingen, Germany
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Quynh Nguyen, Mahesh Chandra Mukkamala, Matthias Hein
|
https://iclr.cc/virtual/2019/poster/766
|
loss landscape;local minima;deep neural networks
| null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 7 |
6;7;8
| null | null |
On the loss landscape of a class of deep neural networks with no bad local valleys
| null | null | 0 | 4.333333 |
Poster
|
5;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 0 | 0 |
https://youtu.be/zBmpf3Yz8tc
|
main
| 4.5 |
4;5
| null | null |
Improving On-policy Learning with Statistical Reward Accumulation
| null | null | 0 | 3 |
Reject
|
3;3
| null |
null |
Google Brain; University of Toronto, Vector Institute
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Sheng Jia, Jamie Kiros, Jimmy Ba
|
https://iclr.cc/virtual/2019/poster/844
|
Reinforcement Learning;Web Navigation;Graph Neural Networks
| null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 6.666667 |
6;7;7
| null | null |
DOM-Q-NET: Grounded RL on Structured Language
| null | null | 0 | 2.333333 |
Poster
|
3;1;3
| null |
null |
Department of Computer Science, ETH Zurich, Switzerland
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev
|
https://iclr.cc/virtual/2019/poster/818
|
Robustness certification;Adversarial Attacks;Abstract Interpretation;MILP Solvers;Verification of Neural Networks
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
4;5;6
| null | null |
Boosting Robustness Certification of Neural Networks
| null | null | 0 | 3.333333 |
Poster
|
3;4;3
| null |
null |
Department of Computer Science, Boston University; NEC Laboratories America; NEC Laboratories America, UC San Diego
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Nataniel Ruiz, Samuel Schulter, Manmohan Chandraker
|
https://iclr.cc/virtual/2019/poster/924
|
Simulation in machine learning;reinforcement learning;policy gradients;image rendering
| null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 6.333333 |
6;6;7
| null | null |
Learning To Simulate
| null | null | 0 | 4.333333 |
Poster
|
5;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
noisy samples;deep learning;generative adversarial network
| null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 5 |
3;6;6
| null | null |
Iteratively Learning from the Best
| null | null | 0 | 4 |
Reject
|
5;4;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
adversarial examples;adversarial robustness;visualisation;ensembles
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
4;5;6
| null | null |
Strength in Numbers: Trading-off Robustness and Computation via Adversarially-Trained Ensembles
| null | null | 0 | 3.333333 |
Reject
|
3;4;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
recurrent nets;attractor nets;denoising;sequence processing
| null | 0 | null | null |
iclr
| 0.5 | 0 | null |
main
| 5.333333 |
5;5;6
| null | null |
State-Denoised Recurrent Neural Networks
| null | null | 0 | 3.666667 |
Reject
|
4;3;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
active variable selection;missing data;amortized inference
| null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 5.666667 |
5;6;6
| null | null |
EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE
| null | null | 0 | 3.333333 |
Reject
|
4;4;2
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Anomaly detection;one-class model;GAN
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4 |
3;4;5
| null | null |
A Multi-modal one-class generative adversarial network for anomaly detection in manufacturing
| null | null | 0 | 4.333333 |
Reject
|
4;5;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
convex;GAN;autoencoder;interpolation;stimuli generation;adversarial;latent distribution
| null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 4.333333 |
4;4;5
| null | null |
Generative adversarial interpolative autoencoding: adversarial training on latent space interpolations encourages convex latent distributions
| null | null | 0 | 4.333333 |
Reject
|
5;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
adversarial images;Boltzmann machine;mean field approximation
| null | 0 | null | null |
iclr
| 0.5 | 0 | null |
main
| 4.666667 |
4;4;6
| null | null |
Improved resistance of neural networks to adversarial images through generative pre-training
| null | null | 0 | 3.666667 |
Reject
|
3;4;4
| null |
null |
SenseTime Research; The Chinese University of Hong Kong; The Chinese University of Hong Kong, The University of Hong Kong
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Ping Luo, Xinjiang Wang, wenqi shao, Zhanglin Peng
|
https://iclr.cc/virtual/2019/poster/835
|
batch normalization;regularization;deep learning
| null | 0 | null | null |
iclr
| 0.188982 | 0 | null |
main
| 5.666667 |
5;6;6
| null | null |
Towards Understanding Regularization in Batch Normalization
| null | null | 0 | 3.333333 |
Poster
|
3;2;5
| null |
null |
Carnegie Mellon University; Google Brain
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Yifan Wu, George Tucker, Ofir Nachum
|
https://iclr.cc/virtual/2019/poster/1003
|
Laplacian;reinforcement learning;representation
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 7 |
7;7;7
| null | null |
The Laplacian in RL: Learning Representations with Efficient Approximations
| null | null | 0 | 3.333333 |
Poster
|
4;3;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Deep learning
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 0 | null | null | null |
ISONETRY : GEOMETRY OF CRITICAL INITIALIZATIONS AND TRAINING
| null | null | 0 | 0 |
Withdraw
| null | null |
null |
Google AI
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
YiDing Jiang, Dilip Krishnan, Hossein Mobahi, Samy Bengio
|
https://iclr.cc/virtual/2019/poster/897
|
Deep learning;large margin;generalization bounds;generalization gap.
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 6.666667 |
5;6;9
| null | null |
Predicting the Generalization Gap in Deep Networks with Margin Distributions
|
https://github.com/google-research/google-research/tree/master/demogen
| null | 0 | 4 |
Poster
|
4;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
exploration;deep reinforcement learning;intrinsic motivation;unsupervised learning
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4.333333 |
4;4;5
| null | null |
Learning to Control Visual Abstractions for Structured Exploration in Deep Reinforcement Learning
| null | null | 0 | 3 |
Reject
|
3;3;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 4.333333 |
3;5;5
| null | null |
Improving Sample-based Evaluation for Generative Adversarial Networks
| null | null | 0 | 4 |
Reject
|
5;4;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Graph Neural Network;Language Modeling;Convolution
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4 |
4;4;4
| null | null |
Language Modeling with Graph Temporal Convolutional Networks
| null | null | 0 | 4.333333 |
Reject
|
5;3;5
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
model-based reinforcement learning;deep learning;generative agents;policy gradient;imitation learning
| null | 0 | null | null |
iclr
| -0.654654 | 0 | null |
main
| 3.333333 |
2;3;5
| null | null |
Learning powerful policies and better dynamics models by encouraging consistency
| null | null | 0 | 4 |
Reject
|
4;5;3
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
cross entropy;neural networks;parameter recovery
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4 |
3;4;5
| null | null |
Guaranteed Recovery of One-Hidden-Layer Neural Networks via Cross Entropy
| null | null | 0 | 4 |
Reject
|
4;4;4
| null |
null |
College of Computing, Georgia Institute of Technology, Atlanta, GA 30332, USA; Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Ahmed Qureshi, Byron Boots, Michael C Yip
|
https://iclr.cc/virtual/2019/poster/1137
|
Inverse Reinforcement Learning;Imitation learning;Variational lnference;Learning from demonstrations
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 6 |
6;6;6
| null | null |
Adversarial Imitation via Variational Inverse Reinforcement Learning
| null | null | 0 | 3.666667 |
Poster
|
4;3;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Energy based model;Generative models;MCMC;GANs
| null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 5.333333 |
5;5;6
| null | null |
EnGAN: Latent Space MCMC and Maximum Entropy Generators for Energy-based Models
| null | null | 0 | 4.333333 |
Reject
|
4;5;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Adversarial examples;Feature smoothing;Data augmentation;Decision boundary
| null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 4.666667 |
4;5;5
| null | null |
Theoretical and Empirical Study of Adversarial Examples
| null | null | 0 | 3.333333 |
Reject
|
4;4;2
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
domain generalization;adversarial learning;invariant feature learning
| null | 0 | null | null |
iclr
| 0.188982 | 0 | null |
main
| 5.333333 |
4;5;7
| null | null |
Domain Generalization via Invariant Representation under Domain-Class Dependency
| null | null | 0 | 4.666667 |
Reject
|
5;4;5
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Markov chain Monte Carlo;variational inference;deep generative models
| null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 4.666667 |
4;5;5
| null | null |
Ergodic Measure Preserving Flows
| null | null | 0 | 4 |
Reject
|
5;3;4
| null |
null |
Cognitive Computing Lab (CCL), Baidu Research, Bellevue, WA 98004, USA; Stanford University, Stanford, CA 94305, USA. Work performed at Baidu Research as Summer Intern in 2018.
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Ping Li, Phan-Minh Nguyen
|
https://iclr.cc/virtual/2019/poster/949
|
Random Deep Autoencoders;Exact Asymptotic Analysis;Phase Transitions
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 8.333333 |
8;8;9
| null | null |
On Random Deep Weight-Tied Autoencoders: Exact Asymptotic Analysis, Phase Transitions, and Implications to Training
| null | null | 0 | 4 |
Oral
|
4;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
multi-agent;reinforcement learning;attention;actor-critic
| null | 0 | null | null |
iclr
| -0.944911 | 0 | null |
main
| 5.666667 |
4;6;7
| null | null |
Actor-Attention-Critic for Multi-Agent Reinforcement Learning
| null | null | 0 | 3.333333 |
Reject
|
4;3;3
| null |
null |
Massachusetts Institute of Technology; University of California, Berkeley
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Michael Janner, Sergey Levine, William Freeman, Joshua B Tenenbaum, Chelsea Finn, Jiajun Wu
|
https://iclr.cc/virtual/2019/poster/727
|
structured scene representation;predictive models;intuitive physics;self-supervised learning
| null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 7 |
5;7;9
| null | null |
Reasoning About Physical Interactions with Object-Oriented Prediction and Planning
| null | null | 0 | 4.333333 |
Poster
|
5;4;4
| null |
null |
Beijing Institute of Technology; Adobe Research
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Jianan Li, Jimei Yang, Aaron Hertzmann, Jianming Zhang, Tingfa Xu
|
https://iclr.cc/virtual/2019/poster/701
| null | null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 6.666667 |
6;7;7
| null | null |
LayoutGAN: Generating Graphic Layouts with Wireframe Discriminators
| null | null | 0 | 3.666667 |
Poster
|
4;3;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
adaptive moment estimation;SGD;AMSGrad
| null | 0 | null | null |
iclr
| 0.27735 | 0 | null |
main
| 4.666667 |
3;4;7
| null | null |
GENERALIZED ADAPTIVE MOMENT ESTIMATION
| null | null | 0 | 3.666667 |
Reject
|
4;3;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Binarization;Convolutional Neural Networks;Deep Learning;Deep Neural Networks
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
5;5;5
| null | null |
Self-Binarizing Networks
| null | null | 0 | 4 |
Withdraw
|
4;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
explainable AI;interpretability;deep learning;decision tree;zero-shot learning
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4 |
4;4;4
| null | null |
Iterative Binary Decisions
| null | null | 0 | 3.666667 |
Withdraw
|
3;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 0.5 | 0 | null |
main
| 4.666667 |
4;4;6
| null | null |
Expressiveness in Deep Reinforcement Learning
| null | null | 0 | 3.666667 |
Reject
|
4;3;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Convolutional neural networks;The sampling theorem;Sensitivity to small image transformations;Dataset bias;Shiftability
| null | 0 | null | null |
iclr
| 0.5 | 0 | null |
main
| 6.333333 |
5;7;7
| null | null |
Why do deep convolutional networks generalize so poorly to small image transformations?
| null | null | 0 | 4.333333 |
Reject
|
4;4;5
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
5;5;5
| null | null |
Characterizing Malicious Edges targeting on Graph Neural Networks
| null | null | 0 | 3.666667 |
Reject
|
3;3;5
| null |
null |
Computer Science Department, Stanford University, Stanford, CA 94305
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Albert Gu, Frederic Sala, Beliz Gunel, Christopher Re
|
https://iclr.cc/virtual/2019/poster/848
|
embeddings;non-Euclidean geometry;manifolds;geometry of data
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 7 |
7;7;7
| null | null |
Learning Mixed-Curvature Representations in Product Spaces
| null | null | 0 | 3.333333 |
Poster
|
3;5;2
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
active learning;meta learning;reinforcement learning
| null | 0 | null | null |
iclr
| -0.57735 | 0 | null |
main
| 4.25 |
4;4;4;5
| null | null |
Discovering General-Purpose Active Learning Strategies
| null | null | 0 | 4.5 |
Reject
|
4;5;5;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
visual speech recognition;speech recognition;lipreading
| null | 0 | null | null |
iclr
| -0.628619 | 0 | null |
main
| 5.333333 |
3;4;9
| null | null |
Large-Scale Visual Speech Recognition
| null | null | 0 | 4.333333 |
Reject
|
5;4;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
end-to-end ASR;multi-lingual ASR;multi-speaker ASR;code-switching;encoder-decoder;connectionist temporal classification
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 3 |
3;3;3
| null | null |
End-to-End Multi-Lingual Multi-Speaker Speech Recognition
| null | null | 0 | 4.333333 |
Reject
|
4;5;4
| null |
null | null |
2019
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Learned Optimizers;Meta-Learning
| null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 4.666667 |
4;5;5
| null | null |
Learned optimizers that outperform on wall-clock and validation loss
| null | null | 0 | 4 |
Reject
|
5;4;3
| null |
null |
School of Data Science and Engineering, East China Normal University
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Ningyuan Zheng, Yf Jiang, Dingjiang Huang
|
https://iclr.cc/virtual/2019/poster/1098
|
image generation;differentiable model;reinforcement learning;deep learning;model based
| null | 0 | null | null |
iclr
| 0.866025 | 0 | null |
main
| 7 |
6;7;8
| null | null |
StrokeNet: A Neural Painting Environment
|
https://github.com/vexilligera/strokenet
| null | 0 | 4.333333 |
Poster
|
4;4;5
| null |
null |
Department of Computer Science and Engineering, Seoul National University, Seoul, Korea
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Soochan Lee, Junsoo Ha, Gunhee Kim
|
https://iclr.cc/virtual/2019/poster/939
|
conditional GANs;conditional image generation;multimodal generation;reconstruction loss;maximum likelihood estimation;moment matching
| null | 0 | null | null |
iclr
| -0.720577 | 0 |
http://vision.snu.ac.kr/projects/mr-gan
|
main
| 6.333333 |
4;7;8
| null | null |
Harmonizing Maximum Likelihood with GANs for Multimodal Conditional Generation
| null | null | 0 | 4 |
Poster
|
5;3;4
| null |
null |
Computer Science Division, University of California, Berkeley
|
2019
| 0 | null | null | 0 | null | null | null | null | null |
Jacob Andreas
|
https://iclr.cc/virtual/2019/poster/1055
|
compositionality;representation learning;evaluation
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 6.333333 |
6;6;7
| null | null |
Measuring Compositionality in Representation Learning
| null | null | 0 | 4 |
Poster
|
4;4;4
| null |
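
The rating and confidence fields above hold the individual reviewer scores as ";"-separated strings, while rating_avg and confidence_avg store their means. Below is a minimal sketch (plain Python, not part of the dataset itself; the example values are copied from the "Neural Variational Inference For Embedding Knowledge Graphs" record above) showing how the per-reviewer strings map to the averaged columns:

```python
def parse_scores(cell: str) -> list[float]:
    """Split a per-reviewer score string such as "4;5;5" into floats."""
    return [float(x) for x in cell.split(";")]

# Example values copied from one record in the preview above.
rating = "4;5;5"      # individual reviewer ratings
confidence = "3;5;3"  # individual reviewer confidences

rating_avg = sum(parse_scores(rating)) / len(parse_scores(rating))
confidence_avg = sum(parse_scores(confidence)) / len(parse_scores(confidence))

print(round(rating_avg, 6), round(confidence_avg, 6))  # -> 4.666667 3.666667
```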