pdf stringlengths [49, 199] ⌀ | aff stringlengths [1, 1.36k] ⌀ | year stringclasses 19 values | technical_novelty_avg float64 [0, 4] ⌀ | video stringlengths [21, 47] ⌀ | doi stringlengths [31, 63] ⌀ | presentation_avg float64 [0, 4] ⌀ | proceeding stringlengths [43, 129] ⌀ | presentation stringclasses 796 values | sess stringclasses 576 values | technical_novelty stringclasses 700 values | arxiv stringlengths [10, 16] ⌀ | author stringlengths [1, 1.96k] ⌀ | site stringlengths [37, 191] ⌀ | keywords stringlengths [2, 582] ⌀ | oa stringlengths [86, 198] ⌀ | empirical_novelty_avg float64 [0, 4] ⌀ | poster stringlengths [57, 95] ⌀ | openreview stringlengths [41, 45] ⌀ | conference stringclasses 11 values | corr_rating_confidence float64 [-1, 1] ⌀ | corr_rating_correctness float64 [-1, 1] ⌀ | project stringlengths [1, 162] ⌀ | track stringclasses 3 values | rating_avg float64 [0, 10] ⌀ | rating stringlengths [1, 17] ⌀ | correctness stringclasses 809 values | slides stringlengths [32, 41] ⌀ | title stringlengths [2, 192] ⌀ | github stringlengths [3, 165] ⌀ | authors stringlengths [7, 161] ⌀ | correctness_avg float64 [0, 5] ⌀ | confidence_avg float64 [0, 5] ⌀ | status stringclasses 22 values | confidence stringlengths [1, 17] ⌀ | empirical_novelty stringclasses 763 values |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Variance-Weighted Confidence-Integrated loss;Confidence Calibration;Stochastic Regularization;Stochastic Inferences | null | 0 | null | null | iclr | 1 | 0 | null | main | 4.333333 | 3;5;5 | null | null | Confidence Calibration in Deep Neural Networks through Stochastic Inferences | null | null | 0 | 3.333333 | Withdraw | 2;4;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | deep reinforcement learning;self-play;real-time strategic game;multi-agent | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 2.333333 | 2;2;3 | null | null | Hierarchical Deep Reinforcement Learning Agent with Counter Self-play on Competitive Games | null | null | 0 | 3.333333 | Withdraw | 3;4;3 | null |
null | Oregon State University; University of California, Berkeley | 2019 | 0 | null | null | 0 | null | null | null | null | null | Dan Hendrycks and Thomas Dietterich | https://iclr.cc/virtual/2019/poster/731 | robustness;benchmark;convnets;perturbations | null | 0 | null | null | iclr | 0.866025 | 0 | null | main | 8.333333 | 7;9;9 | null | null | Benchmarking Neural Network Robustness to Common Corruptions and Perturbations | null | null | 0 | 4 | Poster | 3;4;5 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | deep learning;model compression;pruning;quantization;SVD;regularization;framework | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 4.333333 | 4;4;5 | null | null | DeepTwist: Learning Model Compression via Occasional Weight Distortion | null | null | 0 | 3.333333 | Reject | 4;3;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Natural Language Processing;Text Generation;Variational Autoencoders | null | 0 | null | null | iclr | 0 | 0 | null | main | 5.666667 | 5;5;7 | null | null | Hierarchically-Structured Variational Autoencoders for Long Text Generation | null | null | 0 | 4 | Reject | 4;4;4 | null |
null | Department of Mathematics, ETH Zurich; Department of Mathematics, University of Genoa | 2019 | 0 | null | null | 0 | null | null | null | null | null | Rima Alaifari, Giovanni S Alberti, Tandri Gauksson | https://iclr.cc/virtual/2019/poster/706 | Adversarial examples;deformations;deep neural networks;computer vision | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 6.666667 | 6;7;7 | null | null | ADef: an Iterative Algorithm to Construct Adversarial Deformations | null | null | 0 | 3.333333 | Poster | 3;3;4 | null |
null | Courant Institute of Mathematical Sciences, New York University, New York, NY; Google Brain, Mountain View, CA | 2019 | 0 | null | null | 0 | null | null | null | null | null | Ilya Kostrikov, Kumar Agrawal, Debidatta Dwibedi, Sergey Levine, Jonathan Tompson | https://iclr.cc/virtual/2019/poster/836 | deep learning;reinforcement learning;imitation learning;adversarial learning | null | 0 | null | null | iclr | -1 | 0 | null | main | 7 | 6;7;8 | null | null | Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning | https://github.com/google-research/google-research/tree/master/dac | null | 0 | 3 | Poster | 4;3;2 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | permutation phase defense;adversarial attacks;deep learning | null | 0 | null | null | iclr | -0.654654 | 0 | null | main | 5.666667 | 4;6;7 | null | null | PPD: Permutation Phase Defense Against Adversarial Examples in Deep Learning | null | null | 0 | 4 | Reject | 5;3;4 | null |
null | New York University; Google Brain; University of Oxford, DeepMind | 2019 | 0 | null | null | 0 | null | null | null | null | null | George Tucker, Dieterich Lawson, Shixiang Gu, Chris J Maddison | https://iclr.cc/virtual/2019/poster/755 | variational autoencoder;reparameterization trick;IWAE;VAE;RWS;JVI | null | 0 | null | null | iclr | 0 | 0 | https://sites.google.com/view/dregs | main | 6.666667 | 6;7;7 | null | null | Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives | null | null | 0 | 4 | Poster | 4;5;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | machine translation;vector quantized autoencoders;non-autoregressive;NMT | null | 0 | null | null | iclr | -0.29277 | 0 | null | main | 5.25 | 3;5;6;7 | null | null | Towards a better understanding of Vector Quantized Autoencoders | null | null | 0 | 3.75 | Reject | 4;4;3;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | 0 | 0 | null | main | 3 | 3;3;3 | null | null | An Analysis of Composite Neural Network Performance from Function Composition Perspective | null | null | 0 | 3 | Reject | 4;3;2 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | changepoint detection;multivariate time series data;multiscale RNN | null | 0 | null | null | iclr | -0.654654 | 0 | null | main | 5.666667 | 4;6;7 | null | null | Pyramid Recurrent Neural Networks for Multi-Scale Change-Point Detection | null | null | 0 | 4 | Reject | 5;3;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | Aaron Chadha | null | null | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 3.666667 | 3;4;4 | null | null | withdrawn | null | null | 0 | 4.666667 | Withdraw | 5;4;5 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | generalization;reinforcement learning;dqn;regularization;transfer learning;multitask | null | 0 | null | null | iclr | -1 | 0 | null | main | 5.333333 | 5;5;6 | null | null | Generalization and Regularization in DQN | null | null | 0 | 4.333333 | Reject | 5;5;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | batch normalization;acceleration;correlation;sampling | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 4.666667 | 4;5;5 | null | null | Effective and Efficient Batch Normalization Using Few Uncorrelated Data for Statistics' Estimation | null | null | 0 | 3.666667 | Reject | 3;5;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Low-resource;Named Entity Recognition | null | 0 | null | null | iclr | 0 | 0 | null | main | 6 | 6;6;6 | null | null | DATNet: Dual Adversarial Transfer for Low-resource Named Entity Recognition | https://github.com/ (to be available after acceptance) | null | 0 | 4.333333 | Reject | 4;5;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Human pose estimation;Hourglass network;Multi-scale analysis | null | 0 | null | null | iclr | -1 | 0 | null | main | 3.333333 | 3;3;4 | null | null | Multi-Scale Stacked Hourglass Network for Human Pose Estimation | null | null | 0 | 4.666667 | Reject | 5;5;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | regularization;robustness;deep learning;convolutional networks;kernel methods | null | 0 | null | null | iclr | -1 | 0 | null | main | 5 | 4;5;6 | null | null | On Regularization and Robustness of Deep Neural Networks | null | null | 0 | 3 | Reject | 4;3;2 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | Daniel Bienstock, Gonzalo Muñoz, Sebastian Pokutta | null | deep learning theory;neural network training;empirical risk minimization;non-convex optimization;treewidth | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 6.666667 | 6;6;8 | null | null | Principled Deep Neural Network Training through Linear Programming | null | null | 0 | 3.333333 | Reject | 3;4;3 | null |
null | University of California, Berkeley; Google Brain | 2019 | 0 | null | null | 0 | null | null | null | null | null | Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein | https://iclr.cc/virtual/2019/poster/809 | Meta-learning;unsupervised learning;representation learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 8 | 8;8;8 | null | null | Meta-Learning Update Rules for Unsupervised Representation Learning | null | null | 0 | 3.333333 | Oral | 3;3;4 | null |
null | McGill University, Montreal, Canada; Department of Electrical and Computer Engineering, McGill University, Montreal, Canada | 2019 | 0 | null | null | 0 | null | null | null | null | null | Arash Ardakani, Zhengyun Ji, Sean Smithson, Brett Meyer, Warren Gross | https://iclr.cc/virtual/2019/poster/652 | Quantized Recurrent Neural Network;Hardware Implementation;Deep Learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 7 | 6;7;8 | null | null | Learning Recurrent Binary/Ternary Weights | null | null | 0 | 3.333333 | Poster | 3;4;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.944911 | 0 | null | main | 3.333333 | 2;3;5 | null | null | Associate Normalization | null | null | 0 | 4.666667 | Withdraw | 5;5;4 | null |
null | Institute for Biomedical Informatics, University of Pennsylvania | 2019 | 0 | null | null | 0 | null | null | null | null | null | William La Cava, Tilak Raj Singh, Srinivas Suri, Srinivas Suri | https://iclr.cc/virtual/2019/poster/881 | regression;stochastic optimization;evolutionary compution;feature engineering | null | 0 | null | null | iclr | 0.327327 | 0 | null | main | 7 | 6;7;8 | null | null | Learning concise representations for regression by evolving networks of trees | null | null | 0 | 2.666667 | Poster | 3;1;4 | null |
null | Google | 2019 | 0 | null | null | 0 | null | null | null | null | null | Walid Krichene, Nicolas Mayoraz, Steffen Rendle, Li Zhang, Xinyang Yi, Lichan Hong, Ed H. Chi, John Anderson | https://iclr.cc/virtual/2019/poster/890 | similarity learning;pairwise learning;matrix factorization;Gramian estimation;variance reduction;neural embedding models;recommender systems | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 7.333333 | 7;7;8 | null | null | Efficient Training on Very Large Corpora via Gramian Estimation | null | null | 0 | 3.333333 | Poster | 2;4;4 | null |
null | Carnegie Mellon University | 2019 | 0 | null | null | 0 | null | null | null | null | null | Xuezhe Ma, Chunting Zhou, Eduard Hovy | https://iclr.cc/virtual/2019/poster/725 | VAE;regularization;auto-regressive | null | 0 | null | null | iclr | 1 | 0 | null | main | 6.333333 | 6;6;7 | null | null | MAE: Mutual Posterior-Divergence Regularization for Variational AutoEncoders | null | null | 0 | 4.333333 | Poster | 4;4;5 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | structured data;representation learning;deep neural networks | null | 0 | null | null | iclr | 0 | 0 | null | main | 4 | 4;4;4 | null | null | Deep processing of structured data | null | null | 0 | 3.333333 | Reject | 3;4;3 | null |
null | Department of ECE, Northeastern University, Boston, MA 02115, USA; School of Computer Science and Technology, Huaqiao University, Xiamen 362100, China; College of CIS, Northeastern University, Boston, MA 02115, USA | 2019 | 0 | null | null | 0 | null | null | null | null | null | Yulun Zhang, Kunpeng Li, Kai Li, Bineng Zhong, Yun Fu | https://iclr.cc/virtual/2019/poster/795 | Non-local network;attention network;image restoration;residual learning | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 6.666667 | 6;7;7 | null | null | Residual Non-local Attention Networks for Image Restoration | null | null | 0 | 3.666667 | Poster | 3;3;5 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | generalization;optimization;vanishing gradients;experimental;fundamental research | null | 0 | null | null | iclr | 0 | 0 | null | main | 5.333333 | 5;5;6 | null | null | An experimental study of layer-level training speed and its impact on generalization | null | null | 0 | 3 | Reject | 2;4;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 4.666667 | 4;5;5 | null | null | Multi-Grained Entity Proposal Network for Named Entity Recognition | null | null | 0 | 3.666667 | Reject | 4;3;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -1 | 0 | null | main | 4.333333 | 3;5;5 | null | null | Provable Defenses against Spatially Transformed Adversarial Inputs: Impossibility and Possibility Results | null | null | 0 | 3.333333 | Reject | 4;3;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | meta-learning;memory;one-shot learning;bloom filter;set membership;familiarity;compression | null | 0 | null | null | iclr | 0.838628 | 0 | null | main | 5.333333 | 3;6;7 | null | null | Meta-Learning Neural Bloom Filters | null | null | 0 | 2.666667 | Reject | 1;4;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Interpretability;Attribution Method;Attribution Map | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 4.666667 | 4;5;5 | null | null | Rectified Gradient: Layer-wise Thresholding for Sharp and Coherent Attribution Maps | null | null | 0 | 4.333333 | Reject | 4;4;5 | null |
null | University of Cambridge and Microsoft Research Cambridge; University of Cambridge; Microsoft Research Cambridge | 2019 | 0 | null | null | 0 | null | null | null | null | null | Wenbo Gong, Yingzhen Li, José Miguel Hernández Lobato | https://iclr.cc/virtual/2019/poster/1074 | Meta Learning;MCMC | null | 0 | null | null | iclr | 1 | 0 | null | main | 6.666667 | 6;7;7 | null | null | Meta-Learning For Stochastic Gradient MCMC | null | null | 0 | 3.666667 | Poster | 3;4;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Graph classification;Deep Learning;Graph pooling;Embedding | null | 0 | null | null | iclr | 0.944911 | 0 | null | main | 4.333333 | 3;4;6 | null | null | DEEP GEOMETRICAL GRAPH CLASSIFICATION | null | null | 0 | 4.333333 | Reject | 4;4;5 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | zero-shot learning;variational autoencoders | null | 0 | null | null | iclr | -1 | 0 | null | main | 4.666667 | 4;5;5 | null | null | Learning shared manifold representation of images and attributes for generalized zero-shot learning | null | null | 0 | 4.333333 | Reject | 5;4;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Reinforcement Learning;Rewards | null | 0 | null | null | iclr | 0 | 0 | null | main | 3 | 2;3;4 | null | null | Hybrid Policies Using Inverse Rewards for Reinforcement Learning | null | null | 0 | 4.666667 | Reject | 5;4;5 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | reinforcement learning;deep learning;logics;formal methods;automated reasoning;backtracking search;satisfiability;quantified Boolean formulas | null | 0 | null | null | iclr | 0.866025 | 0 | null | main | 6 | 5;6;7 | null | null | Learning Heuristics for Automated Reasoning through Reinforcement Learning | null | null | 0 | 3.666667 | Reject | 3;4;4 | null |
null | Mila, Université de Montréal; Mila, Université de Montréal and CIFAR Fellow; University of Oregon; Mila, Université de Montréal and AdeptMind Scholar | 2019 | 0 | null | null | 0 | null | null | null | null | null | Dzmitry Bahdanau, Shikhar Murty, Mikhail Noukhovitch, Thien H Nguyen, Harm de Vries, Aaron Courville | https://iclr.cc/virtual/2019/poster/777 | systematic generalization;language understanding;visual questions answering;neural module networks | null | 0 | null | null | iclr | 0 | 0 | null | main | 5.333333 | 4;6;6 | null | null | Systematic Generalization: What Is Required and Can It Be Learned? | null | null | 0 | 4 | Poster | 4;5;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | adversarial examples;information bottleneck;robustness | null | 0 | null | null | iclr | 0.866025 | 0 | null | main | 3 | 2;3;4 | null | null | A Rate-Distortion Theory of Adversarial Examples | null | null | 0 | 3.333333 | Reject | 3;3;4 | null |
null | Facebook AI Research; University of Oxford | 2019 | 0 | null | null | 0 | null | null | null | null | null | Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, Mohamed Elhoseiny | https://iclr.cc/virtual/2019/poster/715 | Lifelong Learning;Continual Learning;Catastrophic Forgetting;Few-shot Transfer | null | 0 | null | null | iclr | 0 | 0 | null | main | 6.666667 | 6;7;7 | null | null | Efficient Lifelong Learning with A-GEM | https://github.com/facebookresearch/agem | null | 0 | 4 | Poster | 4;4;4 | null |
null | Microsoft Research, Montréal; Google AI, Mountain View; University of Massachusetts, Amherst | 2019 | 0 | null | null | 0 | null | null | null | null | null | Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, Andrew McCallum | https://iclr.cc/virtual/2019/poster/1001 | Open domain Question Answering;Reinforcement Learning;Query reformulation | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 6.333333 | 6;6;7 | null | null | Multi-step Retriever-Reader Interaction for Scalable Open-domain Question Answering | https://github.com/rajarshd/Multi-Step-Reasoning | null | 0 | 4.333333 | Poster | 4;5;4 | null |
null | POSTECH, Department of Creative IT Engineering, Korea; Samsung Research, Korea | 2019 | 0 | null | null | 0 | null | null | null | null | null | Daehyun Ahn, Dongsoo Lee, Taesu Kim, Jae-Joon Kim | https://iclr.cc/virtual/2019/poster/1103 | quantization;pruning;memory footprint;model compression;sparse matrix | null | 0 | null | null | iclr | -0.866025 | 0 | null | main | 6.333333 | 6;6;7 | null | null | Double Viterbi: Weight Encoding for High Compression Ratio and Fast On-Chip Reconstruction for Deep Neural Network | null | null | 0 | 3 | Poster | 3;4;2 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Skill composition;temporal logic;finite state automata | null | 0 | null | null | iclr | 0.636364 | 0 | null | main | 5.75 | 5;5;6;7 | null | null | Automata Guided Skill Composition | null | null | 0 | 2.75 | Reject | 2;2;4;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.866025 | 0 | null | main | 6 | 5;6;7 | null | null | Learning Implicit Generative Models by Teaching Explicit Ones | null | null | 0 | 3.666667 | Reject | 4;4;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | optimisation;large-scale;stochastic | null | 0 | null | null | iclr | 1 | 0 | null | main | 4.666667 | 4;5;5 | null | null | A fast quasi-Newton-type method for large-scale stochastic optimisation | null | null | 0 | 4.666667 | Reject | 4;5;5 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | MCMC;GANs;Variational Inference | null | 0 | null | null | iclr | 0.693375 | 0 | null | main | 6.666667 | 5;6;9 | null | null | Metropolis-Hastings view on variational inference and adversarial training | null | null | 0 | 3.666667 | Reject | 3;4;4 | null |
null | Universidad de la República, Uruguay | 2019 | 0 | null | null | 0 | null | null | null | null | null | José Lezama | https://iclr.cc/virtual/2019/poster/1110 | disentangling;autoencoders;jacobian;face manipulation | null | 0 | null | null | iclr | 1 | 0 | null | main | 6.333333 | 5;7;7 | null | null | Overcoming the Disentanglement vs Reconstruction Trade-off via Jacobian Supervision | https://github.com/jlezama/disentangling-jacobian | null | 0 | 3.666667 | Poster | 3;4;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | knowledge distillation;few-sample learning;network compression | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 5.333333 | 4;6;6 | null | null | Knowledge Distillation from Few Samples | null | null | 0 | 3.666667 | Reject | 4;3;4 | null |
null | Mila-Quebec Institute for Learning Algorithms, Canada; HEC Montréal, Canada; CIFAR AI Research Chair; Peking University, China; Université de Montréal, Canada | 2019 | 0 | null | null | 0 | null | null | null | null | null | Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, Jian Tang | https://iclr.cc/virtual/2019/poster/870 | knowledge graph embedding;knowledge graph completion;adversarial sampling | null | 0 | null | null | iclr | 0 | 0 | null | main | 7 | 7;7;7 | null | null | RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space | null | null | 0 | 3.666667 | Poster | 4;4;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Recurrent neural networks;learning to rank;pointer networks | null | 0 | null | null | iclr | 1 | 0 | null | main | 6.333333 | 6;6;7 | null | null | Seq2Slate: Re-ranking and Slate Optimization with RNNs | null | null | 0 | 4.333333 | Reject | 4;4;5 | null |
null | University of California, Berkeley | 2019 | 0 | null | null | 0 | null | null | null | null | null | John Co-Reyes, Abhishek Gupta, Suvansh Q Sanjeev, Nicholas Altieri, Jacob Andreas, John DeNero, Pieter Abbeel, Sergey Levine | https://iclr.cc/virtual/2019/poster/1041 | meta-learning;language grounding;interactive | null | 0 | null | null | iclr | 0 | 0 | null | main | 6 | 6;6;6 | null | null | Guiding Policies with Language via Meta-Learning | null | null | 0 | 3.666667 | Poster | 4;3;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | optimization;distributed;large scale;deep learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 6 | 6;6;6 | null | null | Stochastic Gradient Push for Distributed Deep Learning | null | null | 0 | 3.333333 | Reject | 3;4;3 | null |
null | Shanghai Jiao Tong University | 2019 | 0 | null | null | 0 | null | null | null | null | null | Zhiming Zhou, Qingru Zhang, Guansong Lu, Hongwei Wang, Weinan Zhang, Yong Yu | https://iclr.cc/virtual/2019/poster/718 | optimizer;Adam;convergence;decorrelation | null | 0 | null | null | iclr | 0 | 0 | null | main | 7 | 6;6;9 | null | null | AdaShift: Decorrelation and Convergence of Adaptive Learning Rate Methods | null | null | 0 | 4 | Poster | 4;4;4 | null |
null | Nat’l Eng. Lab. for Video Technology, Key Lab. of Machine Perception (MoE), Computer Science Dept., Peking University, Cooperative Medianet Innovation Center, Peng Cheng Lab, Deepwise AI Lab; Nat’l Eng. Lab. for Video Technology, Key Lab. of Machine Perception (MoE), Computer Science Dept., Peking University; Tencent AI Lab | 2019 | 0 | null | null | 0 | null | null | null | null | null | Fangwei Zhong, peng sun, Wenhan Luo, Tingyun Yan, Yizhou Wang | https://iclr.cc/virtual/2019/poster/1082 | Active tracking;reinforcement learning;adversarial learning;multi agent | null | 0 | null | null | iclr | 0.866025 | 0 | https://www.youtube.com/playlist?list=PL9rZj4Mea7wOZkdajK1TsprRg8iUf51BS | main | 5 | 4;5;6 | null | null | AD-VAT: An Asymmetric Dueling mechanism for learning Visual Active Tracking | null | null | 0 | 3.666667 | Poster | 3;4;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | optimization;Adam;AMSGrad | null | 0 | null | null | iclr | -0.816497 | 0 | null | main | 5 | 4;5;5;6 | null | null | Optimistic Acceleration for Optimization | null | null | 0 | 3.5 | Reject | 4;4;4;2 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | bias-variance tradeoff;deep learning theory;generalization;concentration | null | 0 | null | null | iclr | 0.188982 | 0 | null | main | 5.333333 | 4;5;7 | null | null | A Modern Take on the Bias-Variance Tradeoff in Neural Networks | null | null | 0 | 3.666667 | Reject | 4;3;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | attention;meta-learning;set-input neural networks;permutation invariant modeling | null | 0 | null | null | iclr | 0 | 0 | null | main | 5.666667 | 5;6;6 | null | null | Set Transformer | null | null | 0 | 4 | Reject | 4;5;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 4.333333 | 3;3;7 | null | null | Recycling the discriminator for improving the inference mapping of GAN | null | null | 0 | 4.333333 | Reject | 4;5;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | Carson Eisenach, Haichuan Yang, Ji Liu, Han Liu | https://iclr.cc/virtual/2019/poster/947 | reinforcement learning;policy gradient;MOBA games | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 6.666667 | 6;7;7 | null | null | Marginal Policy Gradients: A Unified Family of Estimators for Bounded Action Spaces with Applications | null | null | 0 | 3.333333 | Poster | 3;4;3 | null |
null | Paper under double-blind review | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | uncertainty estimates;out of distribution;bayesian neural network;neural network priors;regression;active learning | null | 0 | null | null | iclr | -0.755929 | 0 | null | main | 5.666667 | 4;6;7 | null | null | Reliable Uncertainty Estimates in Deep Neural Networks using Noise Contrastive Priors | null | null | 0 | 3.666667 | Reject | 4;4;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | reinforcement learning;state representation learning;feature extraction;robotics;deep learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 4 | 3;4;5 | null | null | Decoupling feature extraction from policy learning: assessing benefits of state representation learning in goal based robotics | null | null | 0 | 4 | Reject | 4;4;4 | null |
null | University of California, Los Angeles; Google Brain | 2019 | 0 | null | null | 0 | null | null | null | null | null | Ting Chen, Mario Lucic, Neil Houlsby, Sylvain Gelly | https://iclr.cc/virtual/2019/poster/981 | unsupervised learning;generative adversarial networks;deep generative modelling | null | 0 | null | null | iclr | -1 | 0 | null | main | 6.333333 | 5;7;7 | null | null | On Self Modulation for Generative Adversarial Networks | https://github.com/google/compare_gan | null | 0 | 4.333333 | Poster | 5;4;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Residual Networks;Dynamical Systems;Classification | null | 0 | null | null | iclr | 0.755929 | 0 | null | main | 3.666667 | 2;4;5 | null | null | RESIDUAL NETWORKS CLASSIFY INPUTS BASED ON THEIR NEURAL TRANSIENT DYNAMICS | null | null | 0 | 4.333333 | Reject | 4;4;5 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Question Generation;Natural Language Generation;Scratchpad Encoder;Sequence to Sequence | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 3.666667 | 3;4;4 | null | null | Question Generation using a Scratchpad Encoder | null | null | 0 | 4.666667 | Reject | 5;5;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | multi-instance learning;hierarchical models;universal approximation theorem | null | 0 | null | null | iclr | -0.866025 | 0 | null | main | 5 | 4;5;6 | null | null | Approximation capability of neural networks on sets of probability measures and tree-structured data | null | null | 0 | 4.333333 | Reject | 5;5;3 | null |
null | N/A | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Adversarial attacks;Robustness;CW;I-FGSM | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 4.666667 | 4;5;5 | null | null | How Training Data Affect the Accuracy and Robustness of Neural Networks for Image Classification | null | null | 0 | 3.666667 | Reject | 4;4;3 | null |
null | Indiana University Bloomington; Northwestern University; University of Illinois at Urbana-Champaign; The University of Texas at Austin | 2019 | 0 | null | null | 0 | null | null | null | null | null | Yuan Xie, Boyi Liu, Qiang Liu, Zhaoran Wang, Yuan Zhou, Jian Peng | https://iclr.cc/virtual/2019/poster/873 | Causal inference;Policy Optimization;Non-asymptotic analysis | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 6.666667 | 6;6;8 | null | null | Off-Policy Evaluation and Learning from Logged Bandit Feedback: Error Reduction via Surrogate Policy | null | null | 0 | 3.666667 | Poster | 3;4;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | community detection;deep learning for graphs | null | 0 | null | null | iclr | -0.866025 | 0 | null | main | 4 | 3;4;5 | null | null | Overlapping Community Detection with Graph Neural Networks | null | null | 0 | 4.666667 | Reject | 5;5;4 | null |
null | Stanford University | 2019 | 0 | null | null | 0 | null | null | null | null | null | Yu Bai, Qijia Jiang, Ju Sun | https://iclr.cc/virtual/2019/poster/904 | Dictionary learning;Sparse coding;Non-convex optimization;Theory | null | 0 | null | null | iclr | 0.784465 | 0 | null | main | 6.8 | 6;7;7;7;7 | null | null | Subgradient Descent Learns Orthogonal Dictionaries | null | null | 0 | 2.6 | Poster | 1;3;4;3;2 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Graph Convolutional Networks;GCN;Confidence;Semi-Supervised Learning;Deep Learning;Neural Networks | null | 0 | null | null | iclr | 0 | 0 | Not provided | main | 0 | null | null | null | Confidence-based Graph Convolutional Networks for Semi-Supervised Learning | Not provided | null | 0 | 0 | Withdraw | null | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Interpretability;recurrent neural network;attention | null | 0 | null | null | iclr | 1 | 0 | null | main | 5.666667 | 5;6;6 | null | null | Exploring the interpretability of LSTM neural networks over multi-variable data | null | null | 0 | 4.333333 | Reject | 3;5;5 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -1 | 0 | null | main | 3.333333 | 2;4;4 | null | null | Detecting Topological Defects in 2D Active Nematics Using Convolutional Neural Networks | null | null | 0 | 4.333333 | Reject | 5;4;4 | null |
null | Baidu Research | 2019 | 0 | null | null | 0 | null | null | null | null | null | Wei Ping, Kainan Peng, Jitong Chen | https://iclr.cc/virtual/2019/poster/680 | text-to-speech;deep generative models;end-to-end training;text to waveform | null | 0 | null | null | iclr | 0.755929 | 0 | https://clarinet-demo.github.io/ | main | 7.333333 | 6;7;9 | null | null | ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech | null | null | 0 | 3.666667 | Poster | 3;4;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Deep Learning;Information Bottleneck;Residual Neural Networks;Information Theory | null | 0 | null | null | iclr | 0.866025 | 0 | null | main | 4.666667 | 4;4;6 | null | null | What Information Does a ResNet Compress? | null | null | 0 | 4 | Reject | 3;4;5 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | nonlinear dimensionality reduction;missing data;manifold learning;co-clustering;optimization | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 5 | 4;4;7 | null | null | Co-manifold learning with missing data | null | null | 0 | 3.666667 | Reject | 3;4;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Radial basis feature transformation;convolutional neural networks;adversarial defense | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 3.666667 | 3;4;4 | null | null | Radial Basis Feature Transformation to Arm CNNs Against Adversarial Attacks | null | null | 0 | 3.666667 | Reject | 4;3;4 | null |
null | Hong Kong University of Science and Technology; University of Technology Sydney; Ecole des Ponts ParisTech | 2019 | 0 | null | null | 0 | null | null | null | null | null | Yuan Yuan, YUEMING LYU, Xi SHEN, Ivor Wai-Hung Tsang, Dit-Yan Yeung | https://iclr.cc/virtual/2019/poster/1079 | feature aggregation;weakly supervised learning;temporal action localization | null | 0 | null | null | iclr | 0.755929 | 0 | null | main | 4.666667 | 3;5;6 | null | null | MARGINALIZED AVERAGE ATTENTIONAL NETWORK FOR WEAKLY-SUPERVISED LEARNING | null | null | 0 | 3.333333 | Poster | 3;3;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Frame Interpolation;Frame Rate Up Conversion;Convolutional Neural Networks;CNN;Unsupervised learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 4 | 3;4;5 | null | null | UNSUPERVISED CONVOLUTIONAL NEURAL NETWORKS FOR ACCURATE VIDEO FRAME INTERPOLATION WITH INTEGRATION OF MOTION COMPONENTS | null | null | 0 | 4.333333 | Withdraw | 4;5;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | CNN;ResNet;learning theory;approximation theory;non-parametric estimation;block-sparse | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 4.666667 | 4;4;6 | null | null | Approximation and non-parametric estimation of ResNet-type convolutional neural networks via block-sparse fully-connected neural networks | null | null | 0 | 3.333333 | Reject | 3;4;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | super-resolution | null | 0 | null | null | iclr | 1 | 0 | null | main | 5.666667 | 5;6;6 | null | null | Super-Resolution via Conditional Implicit Maximum Likelihood Estimation | null | null | 0 | 4.333333 | Reject | 3;5;5 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | policy gradient;combinatorial optimization;blackbox optimization;stochastic optimization;reinforcement learning | null | 0 | null | null | iclr | -1 | 0 | null | main | 4.333333 | 4;4;5 | null | null | The Cakewalk Method | null | null | 0 | 3.666667 | Reject | 4;4;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Adversarial training;adversarial examples;deep neural networks;regularization;Lipschitz constant | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 5.333333 | 4;6;6 | null | null | Improved robustness to adversarial examples using Lipschitz regularization of the loss | null | null | 0 | 2.333333 | Reject | 3;3;1 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Optimal transportation;Mean-field optimal control;Wasserstein gradient flow;Markov-chain Monte-Carlo | null | 0 | null | null | iclr | 0 | 0 | null | main | 5 | 4;5;6 | null | null | Accelerated Gradient Flow for Probability Distributions | null | null | 0 | 3.666667 | Reject | 4;3;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | () | null | Representation learning;variational model | null | 0 | null | null | iclr | -1 | 0 | null | main | 4.333333 | 3;5;5 | null | null | Variational recurrent models for representation learning | null | null | 0 | 3.666667 | Reject | 5;3;3 | null |
null | Google Brain | 2019 | 0 | null | null | 0 | null | null | null | null | null | Ishaan Gulrajani, Colin Raffel, Luke Metz | https://iclr.cc/virtual/2019/poster/1123 | evaluation;generative adversarial networks;adversarial divergences | null | 0 | null | null | iclr | 0 | 0 | null | main | 5.333333 | 3;6;7 | null | null | Towards GAN Benchmarks Which Require Generalization | null | null | 0 | 4 | Poster | 4;4;4 | null |
null | National Taiwan University; Carnegie Mellon University; Virginia Tech; Georgia Tech | 2019 | 0 | null | null | 0 | null | null | null | null | null | Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, Jia-Bin Huang | https://iclr.cc/virtual/2019/poster/980 | few shot classification;meta-learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 6 | 6;6;6 | null | null | A Closer Look at Few-shot Classification | null | null | 0 | 3.666667 | Poster | 4;5;2 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | segmentation evaluation;shape feature;variational auto-encoder | null | 0 | null | null | iclr | -0.956183 | 0 | null | main | 5.25 | 3;5;6;7 | null | null | An Alarm System for Segmentation Algorithm Based on Shape Model | null | null | 0 | 4 | Reject | 5;4;4;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | information theory;representation learning;deep learning;differential entropy estimation | null | 0 | null | null | iclr | -1 | 0 | null | main | 6 | 4;7;7 | null | null | Estimating Information Flow in DNNs | null | null | 0 | 4.333333 | Reject | 5;4;4 | null |
null | Google AI Berlin; University of Cambridge, Max Planck Institute for Intelligent Systems; University of Cambridge; University of Cambridge, Microsoft Research | 2019 | 0 | null | null | 0 | null | null | null | null | null | Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, Richard E Turner | https://iclr.cc/virtual/2019/poster/1071 | probabilistic models;approximate inference;few-shot learning;meta-learning | null | 0 | null | null | iclr | 0.866025 | 0 | null | main | 7 | 6;7;8 | null | null | Meta-Learning Probabilistic Inference for Prediction | null | null | 0 | 3.333333 | Poster | 2;4;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | deep learning;theory | null | 0 | null | null | iclr | -1 | 0 | null | main | 5.333333 | 4;6;6 | null | null | Provable Guarantees on Learning Hierarchical Generative Models with Deep CNNs | null | null | 0 | 3.333333 | Reject | 4;3;3 | null |
null | DeepMind, London, UK | 2019 | 0 | null | null | 0 | null | null | null | null | null | Vinicius Zambaldi, David Raposo, Adam Santoro, Victor Bapst, Yujia Li, Igor Babuschkin, Karl Tuyls, David P Reichert, Timothy Lillicrap, Edward Lockhart, Murray Shanahan, Victoria Langston, Razvan Pascanu, Matthew Botvinick, Oriol Vinyals, Peter Battaglia | https://iclr.cc/virtual/2019/poster/995 | relational reasoning;reinforcement learning;graph neural networks;starcraft;generalization;inductive bias | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 6.666667 | 6;7;7 | null | null | Deep reinforcement learning with relational inductive biases | null | null | 0 | 3.666667 | Poster | 4;4;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | sparse;recurrent;asynchronous;time;series | null | 0 | null | null | iclr | -0.98644 | 0 | null | main | 5.25 | 4;4;6;7 | null | null | Unified recurrent network for many feature types | null | null | 0 | 3.25 | Reject | 4;4;3;2 | null |
null | Qualcomm AI Research; University of Amsterdam, Qualcomm; QUVA Lab, University of Amsterdam; University of Amsterdam, TNO Intelligent Imaging | 2019 | 0 | null | null | 0 | null | null | null | null | null | Christos Louizos, Matthias Reisser, Tijmen Blankevoort, Efstratios Gavves, Max Welling | https://iclr.cc/virtual/2019/poster/855 | Quantization;Compression;Neural Networks;Efficiency | null | 0 | null | null | iclr | 0 | 0 | null | main | 7 | 7;7;7 | null | null | Relaxed Quantization for Discretized Neural Networks | null | null | 0 | 3.666667 | Poster | 4;3;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.866025 | 0 | null | main | 4.666667 | 4;5;5 | null | null | An investigation of model-free planning | null | null | 0 | 4 | Reject | 5;3;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Face Completion;progressive GANs;Attribute Control;Frequency-oriented Attention | null | 0 | null | null | iclr | 0 | 0 | null | main | 5 | 5;5;5 | null | null | High Resolution and Fast Face Completion via Progressively Attentive GANs | null | null | 0 | 4 | Reject | 5;2;5 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | impact noise;noise type classification;noise position classification;convolutional neural networks;transfer learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 4 | 4;4;4 | null | null | Classification of Building Noise Type/Position via Supervised Learning | null | null | 0 | 3.333333 | Withdraw | 4;4;2 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | communication;language;representation learning;autoencoders | null | 0 | null | null | iclr | 0 | 0 | null | main | 4.666667 | 4;5;5 | null | null | Shaping representations through communication | null | null | 0 | 4 | Withdraw | 4;4;4 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Data compression;Image compression;Deep Learning;Convolutional neural networks | null | 0 | null | null | iclr | -1 | 0 | null | main | 5.666667 | 5;5;7 | null | null | Adaptive Sample-space & Adaptive Probability coding: a neural-network based approach for compression | null | null | 0 | 3.666667 | Reject | 4;4;3 | null |
null | null | 2019 | 0 | null | null | 0 | null | null | null | null | null | null | null | Meta-learning;gradient-based meta-learning;model-based meta-learning | null | 0 | null | null | iclr | -0.866025 | 0 | null | main | 4.333333 | 3;5;5 | null | null | Model-Agnostic Meta-Learning for Multimodal Task Distributions | null | null | 0 | 4 | Reject | 5;3;4 | null |
null | Department of Electrical & Computer Engineering, Stony Brook University, Stony Brook, NY 11794; Department of Statistics, Columbia University, New York, NY 10027; Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, NY 11794 | 2019 | 0 | null | null | 0 | null | null | null | null | null | Josue Nassar, Scott W Linderman, Monica Bugallo, Il Memming Park | https://iclr.cc/virtual/2019/poster/888 | machine learning;bayesian statistics;dynamical systems | null | 0 | null | null | iclr | -1 | 0 | null | main | 6.666667 | 6;7;7 | null | null | Tree-Structured Recurrent Switching Linear Dynamical Systems for Multi-Scale Modeling | null | null | 0 | 2.666667 | Poster | 4;2;2 | null |
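
A note on the derived columns in the preview above: rating_avg and confidence_avg appear to be plain means of the semicolon-separated per-reviewer scores in rating and confidence, and corr_rating_confidence appears to be their Pearson correlation (the row rated 7;9;9 with confidence 3;4;5 shows 8.333333, 4 and 0.866025). The sketch below reproduces those values under that assumption; the helper names are illustrative, not part of the dataset. Rows whose ratings are all identical (e.g. 6;6;6) leave the correlation undefined and show 0 in the preview, so an exact reproduction would need a guard for that case.

```python
import statistics


def parse_scores(cell: str) -> list[float]:
    """Parse a semicolon-separated reviewer-score cell such as "7;9;9"."""
    return [float(x) for x in cell.split(";")]


def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation of two equal-length score lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    # Division by zero if either list is constant; the preview shows 0 for such rows.
    return cov / (sx * sy)


# Row "Benchmarking Neural Network Robustness to Common Corruptions and Perturbations":
rating = parse_scores("7;9;9")       # rating column
confidence = parse_scores("3;4;5")   # confidence column

print(round(statistics.fmean(rating), 6))      # 8.333333 -> rating_avg
print(round(statistics.fmean(confidence), 6))  # 4.0      -> confidence_avg (shown as 4)
print(round(pearson(rating, confidence), 6))   # 0.866025 -> corr_rating_confidence
```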