Column | Type
---|---
pdf | string, 49-199 chars, nullable
aff | string, 1-1.36k chars, nullable
year | string, 19 distinct values
technical_novelty_avg | float64, 0-4, nullable
video | string, 21-47 chars, nullable
doi | string, 31-63 chars, nullable
presentation_avg | float64, 0-4, nullable
proceeding | string, 43-129 chars, nullable
presentation | string, 796 distinct values
sess | string, 576 distinct values
technical_novelty | string, 700 distinct values
arxiv | string, 10-16 chars, nullable
author | string, 1-1.96k chars, nullable
site | string, 37-191 chars, nullable
keywords | string, 2-582 chars, nullable
oa | string, 86-198 chars, nullable
empirical_novelty_avg | float64, 0-4, nullable
poster | string, 57-95 chars, nullable
openreview | string, 41-45 chars, nullable
conference | string, 11 distinct values
corr_rating_confidence | float64, -1 to 1, nullable
corr_rating_correctness | float64, -1 to 1, nullable
project | string, 1-162 chars, nullable
track | string, 3 distinct values
rating_avg | float64, 0-10, nullable
rating | string, 1-17 chars, nullable
correctness | string, 809 distinct values
slides | string, 32-41 chars, nullable
title | string, 2-192 chars, nullable
github | string, 3-165 chars, nullable
authors | string, 7-161 chars, nullable
correctness_avg | float64, 0-5, nullable
confidence_avg | float64, 0-5, nullable
status | string, 22 distinct values
confidence | string, 1-17 chars, nullable
empirical_novelty | string, 763 distinct values
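
In the preview rows below, `rating` and `confidence` hold the individual reviewer scores as semicolon-separated strings (for example `3;4;6`), while `rating_avg` and `confidence_avg` hold their means. A minimal sketch of recomputing those averages, assuming the table is published as a Hugging Face dataset loadable with the `datasets` library (the repository id below is a placeholder):

```python
from datasets import load_dataset

# Placeholder repository id; replace it with the actual dataset name.
ds = load_dataset("example-org/iclr-paper-reviews", split="train")

def mean_score(cell):
    """Average a semicolon-separated score string such as '3;4;6'."""
    scores = [float(s) for s in cell.split(";")]
    return sum(scores) / len(scores)

row = ds[0]
if row["rating"]:
    # Should reproduce the precomputed rating_avg of the same row.
    print(mean_score(row["rating"]), row["rating_avg"])
```
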
pdf | aff | year | technical_novelty_avg | video | doi | presentation_avg | proceeding | presentation | sess | technical_novelty | arxiv | author | site | keywords | oa | empirical_novelty_avg | poster | openreview | conference | corr_rating_confidence | corr_rating_correctness | project | track | rating_avg | rating | correctness | slides | title | github | authors | correctness_avg | confidence_avg | status | confidence | empirical_novelty |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.188982 | 0 | null | main | 4.333333 | 3;4;6 | null | null | Hypersphere Face Uncertainty Learning | null | null | 0 | 4.333333 | Withdraw | 4;5;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | mathematical word problem;natural language generation;graph neural network | null | 0 | null | null | iclr | -0.57735 | 0 | null | main | 4.5 | 3;5;5;5 | null | null | Mathematical Word Problem Generation from Commonsense Knowledge Graph and Equations | null | null | 0 | 4.5 | Withdraw | 5;4;4;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | small data;deep learning;ensembles;classification | null | 0 | null | null | iclr | 0.904534 | 0 | null | main | 4.25 | 3;4;5;5 | null | null | On the Effectiveness of Deep Ensembles for Small Data Tasks | null | null | 0 | 4.5 | Reject | 4;4;5;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.522233 | 0 | null | main | 4.25 | 4;4;4;5 | null | null | VortexNet: Learning Complex Dynamic Systems with Physics-Embedded Networks | null | null | 0 | 3.75 | Withdraw | 5;4;3;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | dynamic system identification;recurrent networks;explainable AI;time series modelling | null | 0 | null | null | iclr | 0 | 0 | null | main | 4 | 3;3;4;6 | null | null | Recurrent Neural Network Architecture based on Dynamic Systems Theory for Data Driven Modelling of Complex Physical Systems | null | null | 0 | 4.25 | Reject | 5;3;5;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | meta-learning;loss function learning | null | 0 | null | null | iclr | -0.102062 | 0 | null | main | 4.6 | 4;4;5;5;5 | null | null | Searching for Robustness: Loss Learning for Noisy Classification Tasks | null | null | 0 | 3.4 | Withdraw | 4;3;4;2;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | one-to-many prediction;generative models | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 4.333333 | 4;4;5 | null | null | Generating Unobserved Alternatives: A Case Study through Super-Resolution and Decompression | null | null | 0 | 3.666667 | Withdraw | 3;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | 0 | 0 | null | main | 5 | 5;5;5 | null | null | Consistent Instance Classification for Unsupervised Representation Learning | null | null | 0 | 3.666667 | Reject | 4;4;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | parallel Monte Carlo Tree Search (MCTS);Upper Confidence bound for Trees (UCT);Reinforcement Learning (RL) | null | 0 | null | null | iclr | -0.688247 | 0 | null | main | 6.5 | 6;6;7;7 | null | null | On Effective Parallelization of Monte Carlo Tree Search | null | null | 0 | 2.75 | Reject | 3;4;3;1 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Meta Learning;Deep Metric Learning;Transfer Learning | null | 0 | null | null | iclr | -0.57735 | 0 | null | main | 5.75 | 5;6;6;6 | null | null | Uniform Priors for Data-Efficient Transfer | null | null | 0 | 3.5 | Reject | 4;4;3;3 | null |
null | University of Cambridge | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3146; None | null | 0 | null | null | null | null | null | Noel Loo, Siddharth Swaroop, Richard E Turner | https://iclr.cc/virtual/2021/poster/3146 | null | null | 0 | null | null | iclr | -0.333333 | 0 | null | main | 6.5 | 4;7;7;8 | null | https://iclr.cc/virtual/2021/poster/3146 | Generalized Variational Continual Learning | null | null | 0 | 4.5 | Poster | 5;4;4;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | COVID-19 Diagnosis;COVID-19 Prognosis;GCN | null | 0 | null | null | iclr | 0 | 0 | null | main | 5.333333 | 4;6;6 | null | null | Beyond COVID-19 Diagnosis: Prognosis with Hierarchical Graph Representation Learning | null | null | 0 | 4 | Reject | 4;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | quantized neural networks;network architecture search;compact models | null | 0 | null | null | iclr | -0.818182 | 0 | null | main | 5.25 | 4;5;6;6 | null | null | Once Quantized for All: Progressively Searching for Quantized Compact Models | null | null | 0 | 3.25 | Reject | 4;4;2;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | One-Shot Learning;Few-Shot Learning;Object Detection;One-Shot Object Detection;Generalization | null | 0 | null | null | iclr | 0 | 0 | null | main | 6 | 5;6;6;7 | null | null | Closing the Generalization Gap in One-Shot Object Detection | null | null | 0 | 4 | Reject | 4;3;5;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | maximum entropy;Bayesian statistics | null | 0 | null | null | iclr | 0 | 0 | null | main | 4.25 | 3;4;4;6 | null | null | Maximum Entropy competes with Maximum Likelihood | null | null | 0 | 4 | Reject | 4;3;5;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Deep learning;Deep generative model;Unsupervised learning;Gradient estimator;Reparameterization trick;Discrete distribution;Gumbel-Softmax | null | 0 | null | null | iclr | -0.333333 | 0 | null | main | 4.25 | 4;4;4;5 | null | null | Generalized Gumbel-Softmax Gradient Estimator for Generic Discrete Random Variables | null | null | 0 | 3.5 | Reject | 5;3;3;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | 0.57735 | 0 | null | main | 3.75 | 3;4;4;4 | null | null | Sequential Normalization: an improvement over Ghost Normalization | null | null | 0 | 4.5 | Withdraw | 4;5;5;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.207514 | 0 | null | main | 3.75 | 3;3;4;5 | null | null | Generating universal language adversarial examples by understanding and enhancing the transferability across neural models | null | null | 0 | 3.25 | Withdraw | 3;3;5;2 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Deep Reinforcement Learning;Sample Efficiency | null | 0 | null | null | iclr | 0 | 0 | null | main | 4 | 2;4;5;5 | null | null | Measuring Progress in Deep Reinforcement Learning Sample Efficiency | null | null | 0 | 4.75 | Reject | 5;4;5;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | 0.720577 | 0 | null | main | 6.333333 | 4;7;8 | null | null | On the Effectiveness of Weight-Encoded Neural Implicit 3D Shapes | null | null | 0 | 4 | Reject | 3;5;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | reinforcement learning;representation learning;unsupervised learning | null | 0 | null | null | iclr | -0.301511 | 0 | null | main | 5.75 | 5;5;6;7 | null | null | Decoupling Representation Learning from Reinforcement Learning | hiddenurl | null | 0 | 3.5 | Reject | 3;4;4;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | few-shot learning;meta-learning;metric learning;deep learning | null | 0 | null | null | iclr | 0.207514 | 0 | null | main | 5.25 | 4;5;5;7 | null | null | On Episodes, Prototypical Networks, and Few-Shot Learning | null | null | 0 | 4.25 | Reject | 5;4;3;5 | null |
null | University of Science and Technology of China; IIIS, Tsinghua University; Microsoft Research | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3151; None | null | 0 | null | null | null | null | null | Guoqing Liu, Chuheng Zhang, Li Zhao, Tao Qin, Jinhua Zhu, Li Jian, Nenghai Yu, Tie-Yan Liu | https://iclr.cc/virtual/2021/poster/3151 | reinforcement learning;auxiliary task;representation learning;contrastive learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 6.5 | 6;6;7;7 | null | https://iclr.cc/virtual/2021/poster/3151 | Return-Based Contrastive Representation Learning for Reinforcement Learning | null | null | 0 | 3.5 | Poster | 3;4;3;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | machine learning;synthetic data;few-shot learning;domain adaptation | null | 0 | null | null | iclr | 0.174078 | 0 | null | main | 5.75 | 5;5;6;7 | null | null | Context-Agnostic Learning Using Synthetic Data | null | null | 0 | 3.25 | Reject | 3;3;4;3 | null |
null | Division of Biostatistics, University of California, Berkeley, Berkeley, CA 94720, USA; Instacart, San Francisco, CA 94107, USA; Walmart Labs, Sunnyvale, CA 94086, USA | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2669; None | null | 0 | null | null | null | null | null | Da Xu, Yuting Ye, Chuanwei Ruan | https://iclr.cc/virtual/2021/poster/2669 | Importance Weighting;Deep Learning;Implicit Bias;Gradient Descent;Learning Theory | null | 0 | null | null | iclr | 0 | 0 | null | main | 7 | 7;7;7;7 | null | https://iclr.cc/virtual/2021/poster/2669 | Understanding the role of importance weighting for deep learning | null | null | 0 | 3.25 | Spotlight | 4;4;1;4 | null |
null | Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University & Faculty of Physics and Astronomy, Heidelberg University & Bernstein Center Computational Neuroscience; Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University; Department of Theoretical Neuroscience, Clinic for Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3338; None | null | 0 | null | null | null | null | null | Dominik Schmidt, Georgia Koppe, Zahra Monfared, Max Beutelspacher, Daniel Durstewitz | https://iclr.cc/virtual/2021/poster/3338 | nonlinear dynamical systems;recurrent neural networks;attractors;computational neuroscience;vanishing gradient problem;LSTM | null | 0 | null | null | iclr | 0 | 0 | null | main | 7 | 6;7;7;8 | null | https://iclr.cc/virtual/2021/poster/3338 | Identifying nonlinear dynamical systems with multiple time scales and long-range dependencies | null | null | 0 | 4.25 | Spotlight | 4;4;5;4 | null |
null | UC San Diego; CMU; UC Berkeley | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2767; None | null | 0 | null | null | null | null | null | Haozhi Qi, Xiaolong Wang, Deepak Pathak, Yi Ma, Jitendra Malik | https://iclr.cc/virtual/2021/poster/2767 | dynamics prediction;interaction networks;physical reasoning | null | 0 | null | null | iclr | 0.904534 | 0 | Website (as mentioned in the abstract) | main | 6.5 | 6;6;7;7 | null | https://iclr.cc/virtual/2021/poster/2767 | Learning Long-term Visual Dynamics with Region Proposal Interaction Networks | null | null | 0 | 3.25 | Poster | 3;2;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | transformer;language model;memory networks | null | 0 | null | null | iclr | -0.894427 | 0 | null | main | 4.5 | 3;4;5;6 | null | null | Memformer: The Memory-Augmented Transformer | null | null | 0 | 3.5 | Reject | 4;4;3;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | adversarial attack;relevance map;object detection;transferability;black-box attack | null | 0 | null | null | iclr | 0.301511 | 0 | null | main | 4.75 | 4;4;5;6 | null | null | Relevance Attack on Detectors | null | null | 0 | 3.5 | Reject | 4;3;3;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | learning with noise;robust task loss;consistency regularization | null | 0 | null | null | iclr | -0.707107 | 0 | null | main | 5.5 | 5;5;6;6 | null | null | Robust Temporal Ensembling | null | null | 0 | 4 | Reject | 5;4;4;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | self-supervised pretraining;zero-shot;few-shot;text-to-text;contrastive self-supervised learning;small data;long-tail;multi-label classification;NLP | null | 0 | null | null | iclr | -0.866025 | 0 | null | main | 4.666667 | 4;5;5 | null | null | Self-supervised Contrastive Zero to Few-shot Learning from Small, Long-tailed Text data | null | null | 0 | 4 | Reject | 5;3;4 | null |
null | Department of Electrical and Computer Engineering, University of California, San Diego; Department of Computer Science and Engineering, University of California, San Diego; The Halıcıoğlu Data Science Institute, University of California, San Diego | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2996; None | null | 0 | null | null | null | null | null | Changhao Shi, Chester Holtz, Gal Mishne | https://iclr.cc/virtual/2021/poster/2996 | Adversarial Robustness;Self-Supervised Learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 6.666667 | 6;7;7 | null | https://iclr.cc/virtual/2021/poster/2996 | Online Adversarial Purification based on Self-supervised Learning | null | null | 0 | 4 | Poster | 4;5;3 | null |
null | School for Engineering of Matter, Transport, and Energy, Arizona State University; School of Computing, Informatics, and Decision Systems Engineering, Arizona State University | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3321; None | null | 0 | null | null | null | null | null | Changhoon Kim, Yi Ren, 'YZ' Yezhou Yang | https://iclr.cc/virtual/2021/poster/3321 | GANs;Generative Model;Deepfake;Model Attribution | null | 0 | null | null | iclr | -0.866025 | 0 | null | main | 5.666667 | 5;6;6 | null | https://iclr.cc/virtual/2021/poster/3321 | Decentralized Attribution of Generative Models | https://github.com/ASU-Active-Perception-Group/decentralized_attribution_of_generative_models | null | 0 | 3 | Poster | 4;2;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | 0 | 0 | null | main | 4.25 | 3;4;4;6 | null | null | Model-based Navigation in Environments with Novel Layouts Using Abstract $2$-D Maps | null | null | 0 | 4 | Reject | 4;4;4;4 | null |
null | Google Research, Brain team; Inria, Scool team, Univ. Lille, CRIStAL, CNRS; Google Research, Brain team, Inria, Scool team, Univ. Lille, CRIStAL, CNRS | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2640; None | null | 0 | null | null | null | null | null | Yannis Flet-Berliac, Johan Ferret, Olivier Pietquin, philippe preux, Matthieu Geist | https://iclr.cc/virtual/2021/poster/2640 | null | null | 0 | null | null | iclr | -1 | 0 | null | main | 6.333333 | 5;7;7 | null | https://iclr.cc/virtual/2021/poster/2640 | Adversarially Guided Actor-Critic | null | null | 0 | 2.333333 | Poster | 3;2;2 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Object-Centric Representation Learning;Concept Learning | null | 0 | null | null | iclr | 0.57735 | 0 | null | main | 4.5 | 4;4;5;5 | null | null | Language-Mediated, Object-Centric Representation Learning | null | null | 0 | 4.25 | Reject | 4;4;4;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | reinforcement learning;exploration | null | 0 | null | null | iclr | 0.377415 | 0 | null | main | 6.6 | 4;5;7;8;9 | null | null | BeBold: Exploration Beyond the Boundary of Explored Regions | null | null | 0 | 4.2 | Reject | 4;4;4;5;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | DNN Compression;Loading time | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 3 | 2;3;3;4 | null | null | Reinforcement Learning Based Asymmetrical DNN Modularization for Optimal Loading | null | null | 0 | 4 | Reject | 5;4;3;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Deep Learning;Graph Convolution;Ricci Flow;Robustness | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 5.333333 | 5;5;6 | null | null | Ricci-GNN: Defending Against Structural Attacks Through a Geometric Approach | null | null | 0 | 4.666667 | Reject | 4;5;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.894737 | 0 | null | main | 4.75 | 3;5;5;6 | null | null | Diffeomorphic Template Transformers | null | null | 0 | 3.75 | Reject | 5;4;4;2 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | 0.927173 | 0 | null | main | 4.75 | 3;5;5;6 | null | null | Data-aware Low-Rank Compression for Large NLP Models | null | null | 0 | 3.75 | Reject | 3;4;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Multi-agent reinforcement learning;coordination;mutual information | null | 0 | null | null | iclr | -0.471405 | 0 | null | main | 5 | 3;5;6;6 | null | null | A Maximum Mutual Information Framework for Multi-Agent Reinforcement Learning | null | null | 0 | 3.75 | Reject | 4;4;4;3 | null |
null | Blueshift, Alphabet, Mountain View, CA; Perimeter Institute for Theoretical Physics, Waterloo, Canada | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2643; None | null | 0 | null | null | null | null | null | Anna Golubeva, Guy Gur-Ari, Behnam Neyshabur | https://iclr.cc/virtual/2021/poster/2643 | network width;over-parametrization;understanding deep learning | null | 0 | null | null | iclr | -1 | 0 | null | main | 5 | 4;5;6 | null | https://iclr.cc/virtual/2021/poster/2643 | Are wider nets better given the same number of parameters? | https://github.com/google-research/wide-sparse-nets | null | 0 | 3 | Poster | 4;3;2 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | variational autoencoder;multimodal data;product-of-experts;semi-supervised learning | null | 0 | null | null | iclr | -1 | 0 | null | main | 4.75 | 4;4;5;6 | null | null | Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts | null | null | 0 | 3.25 | Reject | 4;4;3;2 | null |
null | Baidu Research; NVIDIA; Computer Science and Engineering, UCSD | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2979; None | null | 0 | null | null | null | null | null | Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, Bryan Catanzaro | https://iclr.cc/virtual/2021/poster/2979 | diffusion probabilistic models;audio synthesis;speech synthesis;generative models | null | 0 | null | null | iclr | -0.279508 | 0 | https://diffwave-demo.github.io/ | main | 7.6 | 7;7;7;8;9 | null | https://iclr.cc/virtual/2021/poster/2979 | DiffWave: A Versatile Diffusion Model for Audio Synthesis | null | null | 0 | 4 | Oral | 5;5;3;3;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | disentanglement;representation learning;unsupervised;inductive bias | null | 0 | null | null | iclr | -0.707107 | 0 | null | main | 4 | 2;3;4;5;6 | null | null | Disentangling Action Sequences: Discovering Correlated Samples | null | null | 0 | 4.2 | Reject | 5;4;4;4;4 | null |
null | Machine Learning Research Lab, Volkswagen Group, Munich, Germany | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2610; None | null | 0 | null | null | null | null | null | Justin Bayer, Maximilian Soelch, Atanas Mirchev, Baris Kayalibay, Patrick van der Smagt | https://iclr.cc/virtual/2021/poster/2610 | variational inference;state-space models;amortized inference;recurrent networks | null | 0 | null | null | iclr | 0.333333 | 0 | null | main | 6.75 | 6;7;7;7 | null | https://iclr.cc/virtual/2021/poster/2610 | Mind the Gap when Conditioning Amortised Inference in Sequential Latent-Variable Models | null | null | 0 | 4.25 | Poster | 4;5;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Bayesian Optimization;AutoML;Hyperparameter Optimization;Neural Architecture Search | null | 0 | null | null | iclr | -0.686406 | 0 | null | main | 5.8 | 5;6;6;6;6 | null | null | Model-based Asynchronous Hyperparameter and Neural Architecture Search | null | null | 0 | 3.6 | Reject | 5;4;3;4;2 | null |
null | Department of Computer Science, Royal Holloway University of London, Egham, UK; Department of Mathematics, London School of Economics, London, UK | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3191; None | null | 0 | null | null | null | null | null | Yun Kuen Cheung, Yixin Tao | https://iclr.cc/virtual/2021/poster/3191 | Learning in Games;Lyapunov Chaos;Game Decomposition;Multiplicative Weights Update;Follow-the-Regularized-Leader;Volume Analysis;Dynamical Systems | null | 0 | null | null | iclr | 0.57735 | 0 | null | main | 6.5 | 5;7;7;7 | null | https://iclr.cc/virtual/2021/poster/3191 | Chaos of Learning Beyond Zero-sum and Coordination via Game Decompositions | null | null | 0 | 3.5 | Poster | 3;4;3;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | 0.57735 | 0 | null | main | 4.5 | 4;4;4;6 | null | null | Optimal allocation of data across training tasks in meta-learning | null | null | 0 | 3.5 | Reject | 3;3;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Machine Learning;AlphaZero;Information Theory;Inductive Bias;MCTS;Monte Carlo Tree Search | null | 0 | null | null | iclr | -0.57735 | 0 | null | main | 3.75 | 3;4;4;4 | null | null | Domain Knowledge in Exploration Noise in AlphaZero | null | null | 0 | 4.5 | Withdraw | 5;4;4;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Graph Neural Networks;Graph Representation Learning;Graph Structure Learning;Self-supervision | null | 0 | null | null | iclr | -0.612372 | 0 | null | main | 5.4 | 5;5;5;5;7 | null | null | SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks | null | null | 0 | 3.6 | Reject | 4;4;4;3;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Causal Discovery;Future Prediction | null | 0 | null | null | iclr | -0.790569 | 0 | null | main | 3 | 2;3;3;3;4 | null | null | Causal Future Prediction in a Minkowski Space-Time | null | null | 0 | 3.6 | Withdraw | 5;4;3;3;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Adversarial examples;autonomous driving | null | 0 | null | null | iclr | 0.57735 | 0 | null | main | 5.5 | 5;5;6;6 | null | null | Finding Physical Adversarial Examples for Autonomous Driving with Fast and Differentiable Image Compositing | null | null | 0 | 3.5 | Reject | 4;2;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | 0.816497 | 0 | null | main | 5 | 4;5;5;6 | null | null | ACDC: Weight Sharing in Atom-Coefficient Decomposed Convolution | null | null | 0 | 3.75 | Withdraw | 3;4;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Natural Language Processing;Representation Learning | null | 0 | null | null | iclr | -0.866025 | 0 | null | main | 5 | 4;5;6 | null | null | Deepening Hidden Representations from Pre-trained Language Models | null | null | 0 | 4.333333 | Reject | 5;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | sequential latent variable models | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 6 | 4;7;7 | null | null | Variational Dynamic Mixtures | null | null | 0 | 3.333333 | Reject | 3;3;4 | null |
null | ; Alibaba Group; Sun Yat-sen University; The University of Edinburgh; Tsinghua University | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3107; None | null | 0 | null | null | null | null | null | Ning Ding, Xiaobin Wang, Yao Fu, Guangwei Xu, Rui Wang, Pengjun Xie, Ying Shen, Fei Huang, Hai-Tao Zheng, Rui Zhang | https://iclr.cc/virtual/2021/poster/3107 | NLP;Relation Extraction;Representation Learning | null | 0 | null | null | iclr | -0.632456 | 0 | null | main | 5.5 | 4;5;6;7 | null | https://iclr.cc/virtual/2021/poster/3107 | Prototypical Representation Learning for Relation Extraction | null | null | 0 | 4 | Poster | 4;5;4;3 | null |
null | Cognitive Computing Lab, Baidu Research, 10900 NE 8th St. Bellevue, WA 98004, USA | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2704; None | null | 0 | null | null | null | null | null | Yang Zhao, Jianwen Xie, Ping Li | https://iclr.cc/virtual/2021/poster/2704 | Energy-based model;generative model;image translation;Langevin dynamics | null | 0 | null | null | iclr | -0.774597 | 0 | null | main | 5.5 | 4;5;6;7 | null | https://iclr.cc/virtual/2021/poster/2704 | Learning Energy-Based Generative Models via Coarse-to-Fine Expanding and Sampling | null | null | 0 | 4.25 | Poster | 5;4;4;4 | null |
null | Inria‡; NYU† | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2645; None | null | 0 | null | null | null | null | null | Alberto Bietti, Francis Bach | https://iclr.cc/virtual/2021/poster/2645 | deep learning;kernels;approximation;neural tangent kernels | null | 0 | null | null | iclr | 0.738549 | 0 | null | main | 7 | 6;6;7;9 | null | https://iclr.cc/virtual/2021/poster/2645 | Deep Equals Shallow for ReLU Networks in Kernel Regimes | null | null | 0 | 4.25 | Poster | 4;3;5;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | deep generative models;variational inference;approximate inference;variational auto encoder | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 5 | 3;5;7 | null | null | Self-Reflective Variational Autoencoder | null | null | 0 | 4 | Reject | 4;5;3 | null |
null | Dept. of Comp. Sci. & Tech., BNRist Center, THU-Bosch Joint ML Center, Tsinghua University; Data Science Institute, University of Technology Sydney | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2648; None | null | 0 | null | null | null | null | null | Feng Zhou, Yixuan Zhang, Jun Zhu | https://iclr.cc/virtual/2021/poster/2648 | neural spike train;nonlinear Hawkes process;auxiliary latent variable;conjugacy | null | 0 | null | null | iclr | 1 | 0 | null | main | 6.25 | 6;6;6;7 | null | https://iclr.cc/virtual/2021/poster/2648 | Efficient Inference of Flexible Interaction in Spiking-neuron Networks | null | null | 0 | 3.25 | Poster | 3;3;3;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Neural Networks;Adversarial Machine Learning;Security | null | 0 | null | null | iclr | 0 | 0 | null | main | 6 | 5;5;6;8 | null | null | Luring of transferable adversarial perturbations in the black-box paradigm | null | null | 0 | 3.25 | Reject | 2;4;4;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Program Synthesis;Synthetic Data;Evolutionary Algorithm | null | 0 | null | null | iclr | 0.333333 | 0 | null | main | 5.25 | 3;6;6;6 | null | null | Adversarial Synthetic Datasets for Neural Program Synthesis | null | null | 0 | 3.25 | Reject | 3;4;3;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | representation learning;spurious correlations;deep learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 4.5 | 4;4;5;5 | null | null | Learning Task-Relevant Features via Contrastive Input Morphing | null | null | 0 | 4.5 | Withdraw | 4;5;4;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | collaborative filtering;recommender system;sampling theory | null | 0 | null | null | iclr | -0.707107 | 0 | null | main | 3.5 | 3;3;4;4 | null | null | Collaborative Filtering with Smooth Reconstruction of the Preference Function | null | null | 0 | 4 | Reject | 5;4;3;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Neural Architecture Search;Model Parallel | null | 0 | null | null | iclr | 0.333333 | 0 | link1 | main | 5.25 | 5;5;5;6 | null | null | Efficient Differentiable Neural Architecture Search with Model Parallelism | null | null | 0 | 3.75 | Reject | 3;4;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | self-supervised learning;contrastive learning | null | 0 | null | null | iclr | 0 | 0 | Not provided | main | 0 | null | null | null | A Framework For Contrastive Self-Supervised Learning And Designing A New Approach | Not provided | null | 0 | 0 | Withdraw | null | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | self-supervised learning;contrastive learning;unsupervised classification | null | 0 | null | null | iclr | 0.57735 | 0 | null | main | 6.75 | 6;7;7;7 | null | null | Self-supervised representation learning via adaptive hard-positive mining | null | null | 0 | 3.5 | Withdraw | 3;4;3;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Quickest Change detection;Parametric approach;Multi-task | null | 0 | null | null | iclr | 0.174078 | 0 | null | main | 6.75 | 6;7;7;7 | null | null | Quickest change detection for multi-task problems under unknown parameters | null | null | 0 | 3.25 | Reject | 3;4;4;2 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Attention-on-label;meta training;noisy data;multi-observer | null | 0 | null | null | iclr | 0.327327 | 0 | null | main | 5.333333 | 4;5;7 | null | null | Learning Image Labels On-the-fly for Training Robust Classification Models | null | null | 0 | 3 | Withdraw | 2;4;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Language Models;Knowledge Graphs;Information Extraction | null | 0 | null | null | iclr | 0 | 0 | null | main | 4.25 | 4;4;4;5 | null | null | Language Models are Open Knowledge Graphs | null | null | 0 | 4 | Reject | 4;4;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Open-set domain adaptation;Unknown sample generation;Distribution Alignment | null | 0 | null | null | iclr | 0 | 0 | null | main | 5 | 5;5;5 | null | null | Learning to Generate the Unknowns for Open-set Domain Adaptation | null | null | 0 | 4 | Withdraw | 4;5;3 | null |
null | Department of Computer Science, Aalto University, Helsinki, Finland | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3028; None | null | 0 | null | null | null | null | null | Valerii Iakovlev, Markus Heinonen, Harri Lähdesmäki | https://iclr.cc/virtual/2021/poster/3028 | dynamical systems;partial differential equations;PDEs;graph neural networks;continuous time | null | 0 | null | null | iclr | 1 | 0 | null | main | 6.5 | 6;6;7;7 | null | https://iclr.cc/virtual/2021/poster/3028 | Learning continuous-time PDEs from sparse data with graph neural networks | null | null | 0 | 3.5 | Poster | 3;3;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Color representation;VAE;Color space;Unsupervised learning | null | 0 | null | null | iclr | -0.188982 | 0 | null | main | 5.666667 | 4;6;7 | null | null | Learning Representation in Colour Conversion | null | null | 0 | 4.333333 | Reject | 5;3;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | vqa;clevr;contrastive learning;3d;inverse graphics | null | 0 | null | null | iclr | 0.57735 | 0 | null | main | 4.5 | 4;4;4;6 | null | null | Visual Question Answering From Another Perspective: CLEVR Mental Rotation Tests | null | null | 0 | 3.5 | Reject | 4;3;3;4 | null |
null | Département d’Informatique de l’ENS, ENS, CNRS, PSL University, Paris, France; Concordia University and Mila, Montreal, Canada; Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom; CNRS, LIP6, Sorbonne University, Paris, France | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2616; None | null | 0 | null | null | null | null | null | Louis THIRY, Michael Arbel, Eugene Belilovsky, Edouard Oyallon | https://iclr.cc/virtual/2021/poster/2616 | convolutional kernel methods;image classification | null | 0 | null | null | iclr | 0.942809 | 0 | null | main | 6.25 | 6;6;6;7 | null | https://iclr.cc/virtual/2021/poster/2616 | The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels Methods | null | null | 0 | 3 | Poster | 2;2;3;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Continuous Control;Reinforcement Learning | null | 0 | null | null | iclr | 0.333333 | 0 | null | main | 5.75 | 5;6;6;6 | null | null | Measuring Visual Generalization in Continuous Control from Pixels | null | null | 0 | 3.25 | Reject | 3;4;3;3 | null |
null | Stanford University; University of Chicago; Caltech | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3298; None | null | 0 | null | null | null | null | null | Ayya Alieva, Aiden Aceves, Jialin Song, Stephen Mayo, Yisong Yue, Yuxin Chen | https://iclr.cc/virtual/2021/poster/3298 | null | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 6.666667 | 6;7;7 | null | https://iclr.cc/virtual/2021/poster/3298 | Learning to Make Decisions via Submodular Regularization | null | null | 0 | 3.666667 | Poster | 4;4;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Algorithmic fairness;missing data analysis;domain adaptation | null | 0 | null | null | iclr | -0.285714 | 0 | null | main | 4.8 | 4;4;5;5;6 | null | null | Fairness guarantee in analysis of incomplete data | null | null | 0 | 3.2 | Withdraw | 4;3;4;2;3 | null |
null | Center for Systems Biology Dresden, Max-Planck Institute (CBG), Dresden, Germany; Fondazione Human Technopole, Milano, Italy; School of Computer Science, University of Birmingham, Birmingham, UK; Center for Systems Biology Dresden, Max-Planck Institute (CBG), Dresden, Germany | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2637; None | null | 0 | null | null | null | null | null | Mangal Prakash, Alexander Krull, Florian Jug | https://iclr.cc/virtual/2021/poster/2637 | Diversity denoising;Unsupervised denoising;Variational Autoencoders;Noise model | null | 0 | null | null | iclr | -0.301511 | 0 | null | main | 6.5 | 6;6;7;7 | null | https://iclr.cc/virtual/2021/poster/2637 | Fully Unsupervised Diversity Denoising with Convolutional Variational Autoencoders | null | null | 0 | 3.75 | Poster | 3;5;3;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | disentanglement;causality;representation learning;generative model | null | 0 | null | null | iclr | -0.707107 | 0 | null | main | 5.5 | 5;5;6;6 | null | null | Disentangled Generative Causal Representation Learning | null | null | 0 | 4 | Reject | 4;5;3;4 | null |
null | Department of Computer Science, National Taiwan University, Taiwan; Institute of Information Science, Academia Sinica, Taiwan; Institute of Information Science, Academia Sinica, Taiwan2Taiwan AI Labs, Taiwan | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3297; None | null | 0 | null | null | null | null | null | Yu-Ying Chou, Hsuan-Tien Lin, Tyng-Luh Liu | https://iclr.cc/virtual/2021/poster/3297 | Generalized zero-shot learning;mixup | null | 0 | null | null | iclr | -0.133631 | 0 | null | main | 6.2 | 5;6;6;7;7 | null | https://iclr.cc/virtual/2021/poster/3297 | Adaptive and Generative Zero-Shot Learning | null | null | 0 | 4.4 | Poster | 5;3;5;4;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | AI application in Earth Science;Convolutional Neural Network;3D;Wildfire Spread Model | null | 0 | null | null | iclr | -0.408248 | 0 | null | main | 2.8 | 2;3;3;3;3 | null | null | A 3D Convolutional Neural Network for Predicting Wildfire Profiles | null | null | 0 | 4.6 | Withdraw | 5;5;4;5;4 | null |
null | AMLab, CSL, University of Amsterdam; AMLab, QUVA Lab, University of Amsterdam | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2543; None | null | 0 | null | null | null | null | null | Leon Lang, Maurice Weiler | https://iclr.cc/virtual/2021/poster/2543 | Group Equivariant Convolution;Steerable Kernel;Quantum Mechanics;Wigner-Eckart Theorem;Representation Theory;Harmonic Analysis;Peter-Weyl Theorem | null | 0 | null | null | iclr | 0.816497 | 0 | null | main | 7 | 6;6;8;8 | null | https://iclr.cc/virtual/2021/poster/2543 | A Wigner-Eckart Theorem for Group Equivariant Convolution Kernels | null | null | 0 | 3 | Poster | 3;1;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Constrained reinforcement learning;constrain inference;safe reinforcement learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 5.5 | 4;5;6;7 | null | null | Inverse Constrained Reinforcement Learning | null | null | 0 | 4 | Reject | 4;4;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Deep Reinforcement Learning;Episodic Memory;Sample Efficiency | null | 0 | null | null | iclr | -0.333333 | 0 | null | main | 4.75 | 4;5;5;5 | null | null | Regioned Episodic Reinforcement Learning | null | null | 0 | 3.75 | Reject | 4;3;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | transfer learning;fine-tuning;supervised transfer learning | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 5.666667 | 5;6;6 | null | null | Meta-learning Transferable Representations with a Single Target Domain | null | null | 0 | 3.333333 | Reject | 3;3;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | program synthesis;flow chart;specification;graph recognition;CNN | null | 0 | null | null | iclr | 0 | 0 | null | main | 0 | null | null | null | FCR: Flow Chart Recognition Network for Program Synthesis | null | null | 0 | 0 | Desk Reject | null | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | robustness;adversarial robustness;corruptions;class selectivity;deep learning | null | 0 | null | null | iclr | 0.688247 | 0 | null | main | 5.75 | 4;6;6;7 | null | null | Linking average- and worst-case perturbation robustness via class selectivity and dimensionality | null | null | 0 | 3.5 | Reject | 3;4;3;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | lifelong learning;continual learning;architecture search | null | 0 | null | null | iclr | -0.512989 | 0 | null | main | 5.75 | 4;6;6;7 | null | null | Sharing Less is More: Lifelong Learning in Deep Networks with Selective Layer Transfer | null | null | 0 | 3.5 | Reject | 5;3;2;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Model-based reinforcement learning;posterior sampling;Bayesian reinforcement learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 5.25 | 5;5;5;6 | null | null | Efficient Exploration for Model-based Reinforcement Learning with Continuous States and Actions | null | null | 0 | 4 | Reject | 4;4;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | neural architecture search;automated machine learning;convolutional neural networks | null | 0 | null | null | iclr | 0.333333 | 0 | null | main | 4.75 | 4;5;5;5 | null | null | Searching for Convolutions and a More Ambitious NAS | null | null | 0 | 4.25 | Reject | 4;4;4;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Adversarial Robustness;Saliency Maps;Interpretability | null | 0 | null | null | iclr | 0 | 0 | null | main | 4.333333 | 4;4;5 | null | null | SAD: Saliency Adversarial Defense without Adversarial Training | null | null | 0 | 5 | Withdraw | 5;5;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | continual learning;semi-supervised learning | null | 0 | null | null | iclr | -0.13484 | 0 | null | main | 5.5 | 4;5;6;7 | null | null | Memory-Efficient Semi-Supervised Continual Learning: The World is its Own Replay Buffer | null | null | 0 | 4.25 | Reject | 5;3;5;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | theory;learning dynamics;temperature;theory of deep learning;generalization | null | 0 | null | null | iclr | -0.492366 | 0 | null | main | 5 | 3;5;6;6 | null | null | Temperature check: theory and practice for training models with softmax-cross-entropy losses | null | null | 0 | 2.25 | Reject | 3;2;3;1 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Deep density-ratio estimation;forward Euler method;Mckean-Vlasov equation;Monge-Ampere equation;residual map | null | 0 | null | null | iclr | 0 | 0 | null | main | 5.333333 | 5;5;6 | null | null | Generative Learning With Euler Particle Transport | null | null | 0 | 4 | Reject | 4;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | robustness;covariance;invariance;convolutional neural nets;PDEs;segmentation | null | 0 | null | null | iclr | 0.612372 | 0 | null | main | 5.8 | 5;6;6;6;6 | null | null | Shape-Tailored Deep Neural Networks Using PDEs for Segmentation | null | null | 0 | 3.6 | Reject | 3;4;4;4;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.985527 | 0 | null | main | 5.4 | 3;4;4;8;8 | null | null | Learning to Share in Multi-Agent Reinforcement Learning | null | null | 0 | 3.6 | Reject | 4;4;4;3;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Reinforcement learning;Benchmarks;Efficiency;Reproducibility;Core issues;Algorithm analysis;Dimensions of hardness;OpenAI Gym | null | 0 | null | null | iclr | -0.522233 | 0 | null | main | 4.75 | 4;4;5;6 | null | null | MDP Playground: Controlling Orthogonal Dimensions of Hardness in Toy Environments | null | null | 0 | 4.25 | Reject | 4;5;4;4 | null |