Schema (36 columns; `⌀` marks a column that contains nulls; string ranges are min–max value lengths):

| field | dtype | values / range |
|---|---|---|
| pdf | string | 49–199 chars, ⌀ |
| aff | string | 1–1.36k chars, ⌀ |
| year | string | 19 classes |
| technical_novelty_avg | float64 | 0–4, ⌀ |
| video | string | 21–47 chars, ⌀ |
| doi | string | 31–63 chars, ⌀ |
| presentation_avg | float64 | 0–4, ⌀ |
| proceeding | string | 43–129 chars, ⌀ |
| presentation | string | 796 classes |
| sess | string | 576 classes |
| technical_novelty | string | 700 classes |
| arxiv | string | 10–16 chars, ⌀ |
| author | string | 1–1.96k chars, ⌀ |
| site | string | 37–191 chars, ⌀ |
| keywords | string | 2–582 chars, ⌀ |
| oa | string | 86–198 chars, ⌀ |
| empirical_novelty_avg | float64 | 0–4, ⌀ |
| poster | string | 57–95 chars, ⌀ |
| openreview | string | 41–45 chars, ⌀ |
| conference | string | 11 classes |
| corr_rating_confidence | float64 | -1 to 1, ⌀ |
| corr_rating_correctness | float64 | -1 to 1, ⌀ |
| project | string | 1–162 chars, ⌀ |
| track | string | 3 classes |
| rating_avg | float64 | 0–10, ⌀ |
| rating | string | 1–17 chars, ⌀ |
| correctness | string | 809 classes |
| slides | string | 32–41 chars, ⌀ |
| title | string | 2–192 chars, ⌀ |
| github | string | 3–165 chars, ⌀ |
| authors | string | 7–161 chars, ⌀ |
| correctness_avg | float64 | 0–5, ⌀ |
| confidence_avg | float64 | 0–5, ⌀ |
| status | string | 22 classes |
| confidence | string | 1–17 chars, ⌀ |
| empirical_novelty | string | 763 classes |
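The `rating` and `confidence` fields are semicolon-separated per-review scores, and the derived `rating_avg`, `confidence_avg`, and `corr_rating_confidence` columns follow from them. A minimal stdlib sketch that reproduces those derived values (the helper names `parse_scores` and `pearson` are mine, not part of the dataset):

```python
from statistics import mean

def parse_scores(field):
    """Split a semicolon-separated review string like '5;6;6;7' into ints."""
    return [int(x) for x in field.split(";")]

def pearson(xs, ys):
    """Plain Pearson correlation, as used for corr_rating_confidence."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# First record below: rating '5;6;6;7', confidence '3;4;3;2'.
ratings = parse_scores("5;6;6;7")
confidences = parse_scores("3;4;3;2")

print(mean(ratings))                  # matches rating_avg = 6
print(mean(confidences))              # matches confidence_avg = 3
print(pearson(ratings, confidences))  # matches corr_rating_confidence = -0.5
```

The same recipe checks out against the other rows, e.g. ratings `3;4;7` with confidences `5;5;3` gives -0.970725 to six decimals.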
null |
University of Washington; CMU; Shanghai Qi Zhi Institute; Tsinghua University; UCSD; UC Berkeley
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2775; None
| null | 0 | null | null | null | null | null |
Zhenggang Tang, Chao Yu, Boyuan Chen, Huazhe Xu, Xiaolong Wang, Fei Fang, Simon Du, Yu Wang, Yi Wu
|
https://iclr.cc/virtual/2021/poster/2775
|
strategic behavior;multi-agent reinforcement learning;reward randomization;diverse strategies
| null | 0 | null | null |
iclr
| -0.5 | 0 |
https://sites.google.com/view/staghuntrpg
|
main
| 6 |
5;6;6;7
| null |
https://iclr.cc/virtual/2021/poster/2775
|
Discovering Diverse Multi-Agent Strategic Behavior via Reward Randomization
| null | null | 0 | 3 |
Poster
|
3;4;3;2
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Adversarial Examples;Certifiable Robustness;Certifiable Training;Loss Landscape;Deep Learning;Security
| null | 0 | null | null |
iclr
| -0.970725 | 0 | null |
main
| 4.666667 |
3;4;7
| null | null |
Loss Landscape Matters: Training Certifiably Robust Models with Favorable Loss Landscape
| null | null | 0 | 4.333333 |
Reject
|
5;5;3
| null |
null |
ENS, PSL University, Paris, France; INRIA & ENS, PSL University, Paris, France
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2559; None
| null | 0 | null | null | null | null | null |
Waïss Azizian, marc lelarge
|
https://iclr.cc/virtual/2021/poster/2559
|
Graph Neural Network;Universality;Approximation
| null | 0 | null | null |
iclr
| -0.132453 | 0 | null |
main
| 7.75 |
6;8;8;9
| null |
https://iclr.cc/virtual/2021/poster/2559
|
Expressive Power of Invariant and Equivariant Graph Neural Networks
| null | null | 0 | 3.75 |
Spotlight
|
4;4;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
uncertainty calibration;differential privacy
| null | 0 | null | null |
iclr
| -0.174078 | 0 | null |
main
| 5.75 |
5;5;6;7
| null | null |
Privacy Preserving Recalibration under Domain Shift
| null | null | 0 | 3.75 |
Reject
|
4;4;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
semi-supervised learning;contrastive learning;self-supervised learning;deep learning;representation learning;metric learning;visual representations
| null | 0 | null | null |
iclr
| -0.57735 | 0 | null |
main
| 4.5 |
4;4;4;6
| null | null |
Supervision Accelerates Pre-training in Contrastive Semi-Supervised Learning of Visual Representations
| null | null | 0 | 4 |
Reject
|
5;5;3;3
| null |
null |
Facebook AI; NYU
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3295; None
| null | 0 | null | null | null | null | null |
Bilal Alsallakh, Narine Kokhlikyan, Vivek Miglani, Jun Yuan, Orion Reblitz-Richardson
|
https://iclr.cc/virtual/2021/poster/3295
|
CNN;convolution;spatial bias;blind spots;foveation;padding;exposition;debugging;visualization
| null | 0 | null | null |
iclr
| 0.904534 | 0 | null |
main
| 7.25 |
6;7;8;8
| null |
https://iclr.cc/virtual/2021/poster/3295
|
Mind the Pad -- CNNs Can Develop Blind Spots
| null | null | 0 | 3.5 |
Spotlight
|
3;3;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 1 | 0 | null |
main
| 5.666667 |
5;6;6
| null | null |
Meta-Learning with Implicit Processes
| null | null | 0 | 3.666667 |
Reject
|
3;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 0.333333 | 0 | null |
main
| 5.5 |
4;6;6;6
| null | null |
L2E: Learning to Exploit Your Opponent
| null | null | 0 | 3.25 |
Reject
|
3;3;3;4
| null |
null |
Department of Computer Science, Stanford University; Microsoft Research, New England
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3318; None
| null | 0 | null | null | null | null | null |
Tri Dao, Govinda Kamath, Vasilis Syrgkanis, Lester Mackey
|
https://iclr.cc/virtual/2021/poster/3318
|
knowledge distillation;semiparametric inference;generalization bounds;model compression;cross-fitting;orthogonal machine learning;loss correction
| null | 0 | null | null |
iclr
| 0.333333 | 0 | null |
main
| 6.5 |
6;6;6;8
| null |
https://iclr.cc/virtual/2021/poster/3318
|
Knowledge Distillation as Semiparametric Inference
| null | null | 0 | 3.5 |
Poster
|
4;2;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Eigenvalue problem;Unsupervised learning;Laplacian operator
| null | 0 | null | null |
iclr
| -0.435194 | 0 | null |
main
| 5 |
3;4;4;9
| null | null |
Deep Learning Solution of the Eigenvalue Problem for Differential Operators
| null | null | 0 | 3 |
Reject
|
3;5;2;2
| null |
null |
Lawrence Livermore National Laboratory, Livermore, CA, USA
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2578; None
| null | 0 | null | null | null | null | null |
Brenden Petersen, Mikel Landajuela Larma, Terrell N Mundhenk, Claudio Santiago, Soo Kim, Joanne Kim
|
https://iclr.cc/virtual/2021/poster/2578
|
symbolic regression;reinforcement learning;automated machine learning
| null | 0 | null | null |
iclr
| -0.707107 | 0 | null |
main
| 8 |
7;8;8;9
| null |
https://iclr.cc/virtual/2021/poster/2578
|
Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients
| null | null | 0 | 3.5 |
Oral
|
4;4;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Image Classification;Convolutional Neural Network;Attention Mechanisms;Feature Reuse
| null | 0 | null | null |
iclr
| -0.258199 | 0 | null |
main
| 4.75 |
3;4;6;6
| null | null |
AFINets: Attentive Feature Integration Networks for Image Classification
| null | null | 0 | 3.5 |
Withdraw
|
3;5;2;4
| null |
null |
Boston University; UC Berkeley
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3310; None
| null | 0 | null | null | null | null | null |
Alvin Wan, Lisa Dunlap, Daniel Ho, Jihan Yin, Scott Lee, Suzanne Petryk, Sarah A Bargal, Joseph E Gonzalez
|
https://iclr.cc/virtual/2021/poster/3310
|
explainability;computer vision;interpretability
| null | 0 | null | null |
iclr
| -0.196116 | 0 | null |
main
| 6.6 |
6;6;6;7;8
| null |
https://iclr.cc/virtual/2021/poster/3310
|
NBDT: Neural-Backed Decision Tree
|
github.com/alvinwan/neural-backed-decision-trees
| null | 0 | 3.6 |
Poster
|
4;5;2;4;3
| null |
null |
Massachusetts Institute of Technology
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3184; None
| null | 0 | null | null | null | null | null |
Maxwell Nye, Yewen Pu, Matthew Bowers, Jacob Andreas, Joshua B Tenenbaum, Armando Solar-Lezama
|
https://iclr.cc/virtual/2021/poster/3184
|
program synthesis;representation learning;abstract interpretation;modular neural networks
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 6.75 |
6;7;7;7
| null |
https://iclr.cc/virtual/2021/poster/3184
|
Representing Partial Programs with Blended Abstract Semantics
| null | null | 0 | 4 |
Poster
|
4;4;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 0.471405 | 0 | null |
main
| 5 |
3;5;6;6
| null | null |
Towards Robust and Efficient Contrastive Textual Representation Learning
| null | null | 0 | 3.25 |
Reject
|
3;3;3;4
| null |
null |
Stanford University; Harvard University; Toyota Research Institute
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2712; None
| null | 0 | null | null | null | null | null |
Kaidi Cao, Yining Chen, Junwei Lu, Nikos Arechiga, Adrien Gaidon, Tengyu Ma
|
https://iclr.cc/virtual/2021/poster/2712
|
deep learning;noise robust learning;imbalanced learning
| null | 0 | null | null |
iclr
| 0.68313 | 0 | null |
main
| 6.75 |
5;6;7;9
| null |
https://iclr.cc/virtual/2021/poster/2712
|
Heteroskedastic and Imbalanced Deep Learning with Adaptive Regularization
|
https://github.com/kaidic/HAR
| null | 0 | 3.75 |
Poster
|
3;4;4;4
| null |
null |
Mila, University of Montreal; Harvard University; MPI for Intelligent Systems, Tübingen; University of California, Berkeley
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3224; None
| null | 0 | null | null | null | null | null |
Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, Bernhard Schoelkopf
|
https://iclr.cc/virtual/2021/poster/3224
|
modular representations;better generalization;learning mechanisms
| null | 0 | null | null |
iclr
| -0.333333 | 0 | null |
main
| 7.5 |
7;7;7;9
| null |
https://iclr.cc/virtual/2021/poster/3224
|
Recurrent Independent Mechanisms
| null | null | 0 | 3.25 |
Spotlight
|
3;4;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
neural networks;generalization;Gaussian mixture model;sample complexity;learning algorithm
| null | 0 | null | null |
iclr
| -0.132453 | 0 | null |
main
| 5.75 |
4;6;6;7
| null | null |
Learning One-hidden-layer Neural Networks on Gaussian Mixture Models with Guaranteed Generalizability
| null | null | 0 | 3.75 |
Reject
|
4;3;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Deep Learning;Data Augmentation;Mixup
| null | 0 | null | null |
iclr
| -0.57735 | 0 | null |
main
| 4.25 |
4;4;4;5
| null | null |
Bypassing the Random Input Mixing in Mixup
| null | null | 0 | 3.5 |
Reject
|
4;3;4;3
| null |
null |
Brown University, Department of Computer Science; New York University, Department of Linguistics and Center for Data Science
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3098; None
| null | 0 | null | null | null | null | null |
Charles Lovering, Rohan Jha, Tal Linzen, Ellie Pavlick
|
https://iclr.cc/virtual/2021/poster/3098
|
information-theoretical probing;probing;challenge sets;natural language processing
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 7 |
6;7;8
| null |
https://iclr.cc/virtual/2021/poster/3098
|
Predicting Inductive Biases of Pre-Trained Models
| null | null | 0 | 3.333333 |
Poster
|
4;2;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Adversarial Attacks;Adversarial Defense;Robustness;Convolutional Neural Network;Feature Compactness
| null | 0 | null | null |
iclr
| -0.583333 | 0 | null |
main
| 4.4 |
1;4;5;5;7
| null | null |
Manifold-aware Training: Increase Adversarial Robustness with Feature Clustering
| null | null | 0 | 3.8 |
Reject
|
5;3;5;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
dynamic feature selection;human activity recognition;sparse monitoring
| null | 0 | null | null |
iclr
| 0.385695 | 0 | null |
main
| 5 |
3;4;4;9
| null | null |
Dynamic Feature Selection for Efficient and Interpretable Human Activity Recognition
| null | null | 0 | 4.25 |
Reject
|
5;3;4;5
| null |
null |
MIT
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3265; None
| null | 0 | null | null | null | null | null |
Shibani Santurkar, Dimitris Tsipras, Aleksander Madry
|
https://iclr.cc/virtual/2021/poster/3265
|
benchmarks;distribution shift;hierarchy;robustness
| null | 0 | null | null |
iclr
| 1 | 0 | null |
main
| 6.333333 |
6;6;7
| null |
https://iclr.cc/virtual/2021/poster/3265
|
BREEDS: Benchmarks for Subpopulation Shift
|
https://github.com/MadryLab/BREEDS-Benchmarks
| null | 0 | 3.333333 |
Poster
|
3;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
adversarial defense;out-of-distribution detection
| null | 0 | null | null |
iclr
| 0.316228 | 0 | null |
main
| 6 |
5;6;6;7
| null | null |
Exploiting Safe Spots in Neural Networks for Preemptive Robustness and Out-of-Distribution Detection
| null | null | 0 | 3.5 |
Reject
|
2;5;4;3
| null |
null |
School of Informatics, The University of Edinburgh
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2784; None
| null | 0 | null | null | null | null | null |
Bo ZHAO, Konda Reddy Mopuri, Hakan Bilen
|
https://iclr.cc/virtual/2021/poster/2784
|
dataset condensation;data-efficient learning;image generation
| null | 0 | null | null |
iclr
| 1 | 0 | null |
main
| 8.333333 |
8;8;9
| null |
https://iclr.cc/virtual/2021/poster/2784
|
Dataset Condensation with Gradient Matching
|
https://github.com/VICO-UoE/DatasetCondensation
| null | 0 | 3.333333 |
Oral
|
3;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Artificial Intelligence;Convolutional Neural Network;Extreme Classifications
| null | 0 | null | null |
iclr
| -0.755929 | 0 | null |
main
| 3.333333 |
2;3;5
| null | null |
Towards Generalized Artificial Intelligence by Assessment Aggregation with Applications to Standard and Extreme Classifications
| null | null | 0 | 2.666667 |
Withdraw
|
4;2;2
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
indirect supervision;compositional model;question answering;neural module networks
| null | 0 | null | null |
iclr
| -1 | 0 | null |
main
| 4.75 |
4;4;5;6
| null | null |
Paired Examples as Indirect Supervision in Latent Decision Models
| null | null | 0 | 3.25 |
Withdraw
|
4;4;3;2
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
flow-based mode;generative model;intrinsic dimension;manifold learning
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
4;5;5;5;6
| null | null |
On the Latent Space of Flow-based Models
| null | null | 0 | 4 |
Reject
|
4;5;4;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Deep Reinforcement learning;Policy architectures
| null | 0 | null | null |
iclr
| -0.968496 | 0 |
https://sites.google.com/view/d2rl-anonymous/home
|
main
| 5.25 |
4;4;5;8
| null | null |
D2RL: Deep Dense Architectures in Reinforcement Learning
| null | null | 0 | 3.75 |
Reject
|
4;4;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4.5 |
4;4;5;5
| null | null |
Deep Gated Canonical Correlation Analysis
| null | null | 0 | 3.5 |
Reject
|
4;3;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Gradient descent;neural networks;implicit regularization;quenching-activation
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
5;5;5;5
| null | null |
The Quenching-Activation Behavior of the Gradient Descent Dynamics for Two-layer Neural Network Models
| null | null | 0 | 4 |
Reject
|
4;4;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 4.333333 |
4;4;5
| null | null |
ResPerfNet: Deep Residual Learning for Regressional Performance Modeling of Deep Neural Networks
| null | null | 0 | 3.333333 |
Reject
|
3;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Near-term Quantum Algorithm;Quantum Neural Network;Trainability;Hierarchical Structure
| null | 0 | null | null |
iclr
| 0.5 | 0 | null |
main
| 5.333333 |
5;5;6
| null | null |
Toward Trainability of Quantum Neural Networks
| null | null | 0 | 3.666667 |
Reject
|
4;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
world models;model based reinforcement learning;latent planning;model-based reinforcement learning;model predictive control;video prediction
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
3;4;6;7
| null | null |
On Trade-offs of Image Prediction in Visual Model-Based Reinforcement Learning
| null | null | 0 | 3.25 |
Reject
|
3;4;2;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Reachable set;state constraints;safety verification;model-based reinforcement learning
| null | 0 | null | null |
iclr
| -0.454545 | 0 | null |
main
| 5.5 |
3;5;7;7
| null | null |
Safety Verification of Model Based Reinforcement Learning Controllers
| null | null | 0 | 3.25 |
Reject
|
4;3;2;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
video representation learning;multi-modal learning;self-supervised learning;audio-visual learning;noise-contrastive learning
| null | 0 | null | null |
iclr
| 0.942809 | 0 | null |
main
| 6 |
4;6;7;7
| null | null |
Multi-modal Self-Supervision from Generalized Data Transformations
| null | null | 0 | 3.75 |
Reject
|
3;4;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
XLNet;Word Embedding;Machine Translation;Low resource
| null | 0 | null | null |
iclr
| -1 | 0 | null |
main
| 3.5 |
3;3;3;5
| null | null |
Syntactic Relevance XLNet Word Embedding Generation in Low-Resource Machine Translation
| null | null | 0 | 3.75 |
Withdraw
|
4;4;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
bertology;interpretability;computational neuroscience;population coding
| null | 0 | null | null |
iclr
| -0.763763 | 0 | null |
main
| 5.6 |
5;5;6;6;6
| null | null |
Representational correlates of hierarchical phrase structure in deep language models
| null | null | 0 | 3.8 |
Reject
|
4;5;3;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Graph Neural Networks;Data Augmentation;Adversarial Training
| null | 0 | null | null |
iclr
| 0.426401 | 0 | null |
main
| 6 |
5;6;6;7
| null | null |
FLAG: Adversarial Data Augmentation for Graph Neural Networks
| null | null | 0 | 3.75 |
Reject
|
4;3;3;5
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 0.5 | 0 | null |
main
| 5 |
4;4;7
| null | null |
Topic-aware Contextualized Transformers
| null | null | 0 | 3.666667 |
Reject
|
4;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
deep neural networks;first order logic;neuro-symbolic computing;knowledge;commonsense
| null | 0 | null | null |
iclr
| -0.760886 | 0 | null |
main
| 4.75 |
3;5;5;6
| null | null |
DEEP ADAPTIVE SEMANTIC LOGIC (DASL): COMPILING DECLARATIVE KNOWLEDGE INTO DEEP NEURAL NETWORKS
| null | null | 0 | 4.25 |
Reject
|
5;4;5;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Object Detection;Neural Architecture Search
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4 |
3;4;4;5
| null | null |
Multi-scale Network Architecture Search for Object Detection
| null | null | 0 | 3.75 |
Reject
|
4;4;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| -0.327327 | 0 | null |
main
| 5.2 |
4;5;5;6;6
| null | null |
Semi-supervised Domain Adaptation with Prototypical Alignment and Consistency Learning
| null | null | 0 | 4.6 |
Withdraw
|
5;4;5;4;5
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
class-incremental learning;catastrophic forgetting
| null | 0 | null | null |
iclr
| 0.471405 | 0 | null |
main
| 5 |
4;4;5;7
| null | null |
Essentials for Class Incremental Learning
| null | null | 0 | 4.5 |
Withdraw
|
5;3;5;5
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| -0.904534 | 0 | null |
main
| 5.5 |
5;5;6;6
| null | null |
Multi-hop Attention Graph Neural Network
| null | null | 0 | 4.25 |
Reject
|
5;5;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Neural Ordinary Differential Equations;Cubic Spline Interpolation;Irregular Time Series
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 6 |
5;5;7;7
| null | null |
Cubic Spline Smoothing Compensation for Irregularly Sampled Sequences
| null | null | 0 | 4 |
Reject
|
5;3;3;5
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 0.333333 | 0 | null |
main
| 4.5 |
4;4;4;6
| null | null |
Continual learning with neural activation importance
| null | null | 0 | 3.75 |
Reject
|
4;3;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
ecological inference;representation learning;multi-task learning;bayesian deep learning
| null | 0 | null | null |
iclr
| -0.440225 | 0 | null |
main
| 4.25 |
3;3;4;7
| null | null |
Deep Ecological Inference
| null | null | 0 | 3.25 |
Reject
|
4;3;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 0.333333 | 0 | null |
main
| 3.5 |
3;3;3;5
| null | null |
Adaptive Spatial-Temporal Inception Graph Convolutional Networks for Multi-step Spatial-Temporal Network Data Forecasting
| null | null | 0 | 4.75 |
Reject
|
5;4;5;5
| null |
null |
NLP2CT Lab, Department of Computer and Information Science, University of Macau; The University of Sydney; Tencent AI Lab
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3132; None
| null | 0 | null | null | null | null | null |
Xuebo Liu, Longyue Wang, Derek Wong, Liang Ding, Lidia Chao, Zhaopeng Tu
|
https://iclr.cc/virtual/2021/poster/3132
|
Encoder layer fusion;Transformer;Sequence-to-sequence learning;Machine translation;Summarization;Grammatical error correction
| null | 0 | null | null |
iclr
| -0.707107 | 0 | null |
main
| 6 |
5;5;7;7
| null |
https://iclr.cc/virtual/2021/poster/3132
|
Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning
|
https://github.com/SunbowLiu/SurfaceFusion
| null | 0 | 4 |
Poster
|
5;4;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Column Subset Selection;Distributed Learning
| null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 6 |
5;6;7
| null | null |
An Efficient Protocol for Distributed Column Subset Selection in the Entrywise $\ell_p$ Norm
| null | null | 0 | 3.666667 |
Reject
|
4;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Planning;Spatial planning;Path planning;Navigation;Manipulation;Robotics
| null | 0 | null | null |
iclr
| -0.94388 | 0 | null |
main
| 5.5 |
4;5;6;7
| null | null |
Differentiable Spatial Planning using Transformers
| null | null | 0 | 3.75 |
Reject
|
5;4;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
deep generative models;manifold learning
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5.75 |
4;5;6;8
| null | null |
Deep Quotient Manifold Modeling
| null | null | 0 | 4 |
Reject
|
5;2;5;4
| null |
null |
Paper under double-blind review
|
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Marked temporal point process;Stochastic process;Adversarial autoencoder;Incomplete data generation
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4.333333 |
3;5;5
| null | null |
Adversarial Data Generation of Multi-category Marked Temporal Point Processes with Sparse, Incomplete, and Small Training Samples
| null | null | 0 | 4 |
Reject
|
4;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
reinforcement learning;actor-critic;function approximation;approximation error;KL divergence
| null | 0 | null | null |
iclr
| -0.96225 | 0 | null |
main
| 4.25 |
3;3;5;6
| null | null |
Error Controlled Actor-Critic Method to Reinforcement Learning
| null | null | 0 | 4.5 |
Reject
|
5;5;4;4
| null |
null |
Department of Computer Science, Purdue University, USA; Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, USA
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2940; None
| null | 0 | null | null | null | null | null |
Eli Chien, Jianhao Peng, Pan Li, Olgica Milenkovic
|
https://iclr.cc/virtual/2021/poster/2940
|
Graph Neural Networks;Generalized PageRank;Heterophily;Homophily;Over-smoothing
| null | 0 | null | null |
iclr
| 0.083624 | 0 | null |
main
| 6.5 |
4;6;7;9
| null |
https://iclr.cc/virtual/2021/poster/2940
|
Adaptive Universal Generalized PageRank Graph Neural Network
|
https://github.com/jianhao2016/GPRGNN
| null | 0 | 3.25 |
Poster
|
4;2;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Prediction bias;bias mitigation;transfer learning;natural language processing
| null | 0 | null | null |
iclr
| -0.852803 | 0 |
https://anonymous.4open.science/r/7e1ac8a0-d89a-4dca-8da8-30c03490fa42/
|
main
| 3.75 |
3;3;4;5
| null | null |
Efficient Learning of Less Biased Models with Transfer Learning
| null | null | 0 | 4 |
Withdraw
|
5;4;4;3
| null |
null |
Bar-Ilan University, Israel; NVIDIA, Israel; NVIDIA, Israel; Bar-Ilan University, Israel
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2638; None
| null | 0 | null | null | null | null | null |
Aviv Navon, Idan Achituve, Haggai Maron, Gal Chechik, Ethan Fetaya
|
https://iclr.cc/virtual/2021/poster/2638
|
Auxiliary Learning;Multi-task Learning
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 6.4 |
6;6;6;7;7
| null |
https://iclr.cc/virtual/2021/poster/2638
|
Auxiliary Learning by Implicit Differentiation
| null | null | 0 | 3 |
Poster
|
3;3;3;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 0.174078 | 0 | null |
main
| 5.25 |
4;5;6;6
| null | null |
Automated Concatenation of Embeddings for Structured Prediction
| null | null | 0 | 3.75 |
Reject
|
4;3;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
adversarial examples;adversarial robustness;adversarial accuracy;nearest neighbor classifiers
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 3 |
3;3;3;3
| null | null |
Proper Measure for Adversarial Robustness
| null | null | 0 | 4.5 |
Reject
|
4;5;4;5
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Healthcare;Meta Learning;Computer Vision
| null | 0 | null | null |
iclr
| 0.866025 | 0 | null |
main
| 5 |
4;5;6
| null | null |
MetaPhys: Few-Shot Adaptation for Non-Contact Physiological Measurement
| null | null | 0 | 3.666667 |
Reject
|
3;3;5
| null |
null |
Google Research, Brain Team
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2527; None
| null | 0 | null | null | null | null | null |
Marcin Andrychowicz, Anton Raichuk, Piotr Stanczyk, Manu Orsini, Sertan Girgin, Raphaël Marinier, Léonard Hussenot-Desenonges, Matthieu Geist, Olivier Pietquin, Marcin Michalski, Sylvain Gelly, Olivier Bachem
|
https://iclr.cc/virtual/2021/poster/2527
|
Reinforcement learning;continuous control
| null | 0 | null | null |
iclr
| -1 | 0 | null |
main
| 8 |
7;7;9;9
| null |
https://iclr.cc/virtual/2021/poster/2527
|
What Matters for On-Policy Deep Actor-Critic Methods? A Large-Scale Study
|
https://github.com/google-research/seed_rl
| null | 0 | 3.5 |
Oral
|
4;4;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
metric learning;model collapse;feature embedding;neural network regularizer
| null | 0 | null | null |
iclr
| 0.301511 | 0 | null |
main
| 5.25 |
4;5;6;6
| null | null |
SVMax: A Feature Embedding Regularizer
| null | null | 0 | 4 |
Reject
|
3;5;3;5
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
optimization;federated learning;personalization;local SGD
| null | 0 | null | null |
iclr
| -0.333333 | 0 | null |
main
| 4.5 |
4;4;4;6
| null | null |
Federated Learning of a Mixture of Global and Local Models
| null | null | 0 | 4.25 |
Reject
|
5;4;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 5 |
4;5;6
| null | null |
Collaborative Normalization for Unsupervised Domain Adaptation
| null | null | 0 | 4.333333 |
Reject
|
5;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
neuro-symbolic;hybrid;interpretability
| null | 0 | null | null |
iclr
| -0.760886 | 0 | null |
main
| 4.75 |
4;4;5;6
| null | null |
Neural Disjunctive Normal Form: Vertically Integrating Logic With Deep Learning For Classification
| null | null | 0 | 3.25 |
Withdraw
|
3;5;3;2
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
machine;learning;adversarial;robustness;neural;networks;image;classification;computer;vision
| null | 0 | null | null |
iclr
| -0.968496 | 0 | null |
main
| 4.25 |
3;3;4;7
| null | null |
Adversarial Boot Camp: label free certified robustness in one epoch
| null | null | 0 | 3.75 |
Reject
|
4;4;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
multiple descent;interpolation;overparametrization
| null | 0 | null | null |
iclr
| -0.818182 | 0 | null |
main
| 5.25 |
4;5;6;6
| null | null |
Multiple Descent: Design Your Own Generalization Curve
| null | null | 0 | 3.25 |
Reject
|
4;4;3;2
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Fast pathway;Slow pathway;Interplay;Robustness;Visual backward masking;Biological visual systems;Biological inspried model
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4 |
3;3;6
| null | null |
Vision at A Glance: Interplay between Fine and Coarse Information Processing Pathways
| null | null | 0 | 4 |
Reject
|
4;4;4
| null |
null |
Northeastern University; Columbia University; UCLA
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3365; None
| null | 0 | null | null | null | null | null |
Kaidi Xu, Huan Zhang, Shiqi Wang, Yihan Wang, Suman Jana, Xue Lin, Cho-Jui Hsieh
|
https://iclr.cc/virtual/2021/poster/3365
|
neural network verification;branch and bound
| null | 0 | null | null |
iclr
| 0.927173 | 0 | null |
main
| 5.5 |
5;5;5;7
| null |
https://iclr.cc/virtual/2021/poster/3365
|
Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers
| null | null | 0 | 2.25 |
Poster
|
2;1;2;4
| null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Stein variational inference;variational inference;probabilistic programming;Pyro;deep probabilistic programming;deep learning | null | 0 | null | null | iclr | -0.522233 | 0 | null | main | 4.25 | 3;4;5;5 | null | null | Einstein VI: General and Integrated Stein Variational Inference in NumPyro | null | null | 0 | 3.75 | Reject | 4;4;4;3 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Deep Reinforcement Learning;Large action spaces;Parameterized action spaces;Multi-Agent;Continuous Control | null | 0 | null | null | iclr | -0.866025 | 0 | null | main | 4.333333 | 3;5;5 | null | null | Factored Action Spaces in Deep Reinforcement Learning | null | null | 0 | 4 | Reject | 5;4;3 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | activation function;attention mechanism;rectified linear unit | null | 0 | null | null | iclr | -0.560612 | 0 | null | main | 5.25 | 3;5;6;7 | null | null | ARELU: ATTENTION-BASED RECTIFIED LINEAR UNIT | null | null | 0 | 3.75 | Reject | 4;5;3;3 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Poisson Process;Log-Linear Model;Energy-Based Model;Generalized Additive Models;Information Geometry | null | 0 | null | null | iclr | -0.944911 | 0 | null | main | 4.333333 | 3;4;6 | null | null | Additive Poisson Process: Learning Intensity of Higher-Order Interaction in Stochastic Processes | null | null | 0 | 3.666667 | Reject | 4;4;3 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Reversible computing;automatic differentiation;Julia | null | 0 | null | null | iclr | -0.428571 | 0 | null | main | 5.2 | 4;5;5;6;6 | null | null | Differentiate Everything with a Reversible Domain-Specific Language | null | null | 0 | 3.2 | Reject | 4;3;3;2;4 | null
null | null | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2542; None | null | 0 | null | null | null | null | null | Jiayi Shen, Xiaohan Chen, Howard Heaton, Tianlong Chen, Jialin Liu, Wotao Yin, Zhangyang Wang | https://iclr.cc/virtual/2021/poster/2542 | Learning to Optimize;Minimax Optimization | null | 0 | null | null | iclr | 0 | 0 | null | main | 6.75 | 6;7;7;7 | null | https://iclr.cc/virtual/2021/poster/2542 | Learning A Minimax Optimizer: A Pilot Study | null | null | 0 | 4 | Poster | 4;4;4;4 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | variational autoencoders;β-VAE;representation learning | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 5.666667 | 5;6;6 | null | null | Simple and Effective VAE Training with Calibrated Decoders | null | null | 0 | 4.333333 | Reject | 4;4;5 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | model-based reinforcement learning;visual control;sample efficiency | null | 0 | null | null | iclr | 0.636364 | 0 | null | main | 4.75 | 4;4;5;6 | null | null | ReaPER: Improving Sample Efficiency in Model-Based Latent Imagination | null | null | 0 | 3.75 | Reject | 3;3;5;4 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.188982 | 0 | null | main | 4.666667 | 3;5;6 | null | null | Pareto Adversarial Robustness: Balancing Spatial Robustness and Sensitivity-based Robustness | null | null | 0 | 3.666667 | Reject | 4;3;4 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Generative autoencoder;disentangled representation learning;attribute controllable synthesis | null | 0 | null | null | iclr | -0.301511 | 0 | null | main | 3.75 | 3;3;4;5 | null | null | Generative Auto-Encoder: Controllable Synthesis with Disentangled Exploration | null | null | 0 | 4.5 | Reject | 5;4;5;4 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Deep reinforcement learning;ensemble learning;Q-learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 5.25 | 3;5;5;8 | null | null | Weighted Bellman Backups for Improved Signal-to-Noise in Q-Updates | null | null | 0 | 4 | Reject | 4;4;4;4 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Bayesian Deep Learning;Uncertainty;NMT;Transformer | null | 0 | null | null | iclr | 0 | 0 | null | main | 4.75 | 3;5;5;6 | null | null | Wat zei je? Detecting Out-of-Distribution Translations with Variational Transformers | null | null | 0 | 4 | Reject | 4;4;4;4 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | 0 | 0 | null | main | 4 | 4;4;4 | null | null | Learning Disconnected Manifolds: Avoiding The No Gan's Land by Latent Rejection | null | null | 0 | 3.666667 | Reject | 4;4;3 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Graph Learning;Incremental Learning;Growing Graph;Dynamic Graph | null | 0 | null | null | iclr | -0.254824 | 0 | null | main | 4.75 | 3;4;5;7 | null | null | Incremental Learning on Growing Graphs | null | null | 0 | 3.75 | Withdraw | 5;3;3;4 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Feature Attribution;Graph Neural Networks;Explainable Methods;Causal Effect | null | 0 | null | null | iclr | -0.57735 | 0 | null | main | 6 | 5;5;7;7 | null | null | Causal Screening to Interpret Graph Neural Networks | null | null | 0 | 3.75 | Reject | 4;4;3;4 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | vqa;visual question answering;neural modules;probabilistic logic | null | 0 | null | null | iclr | 0 | 0 | null | main | 4.333333 | 3;5;5 | null | null | How to Design Sample and Computationally Efficient VQA Models | null | null | 0 | 5 | Reject | 5;5;5 | null
null | Spoken Language Systems (LSV), Saarland Informatics Campus, Saarland University; Theory of Machine Learning Lab, École polytechnique fédérale de Lausanne | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2558; None | null | 0 | null | null | null | null | null | Marius Mosbach, Maksym Andriushchenko, Dietrich Klakow | https://iclr.cc/virtual/2021/poster/2558 | fine-tuning stability;transfer learning;pretrained language model;BERT | null | 0 | null | null | iclr | -0.426401 | 0 | null | main | 6 | 4;6;6;8 | null | https://iclr.cc/virtual/2021/poster/2558 | On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines | https://github.com/uds-lsv/bert-stable-fine-tuning | null | 0 | 3.75 | Poster | 5;3;3;4 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | graph neural networks;optimal transport;molecular representations;molecular property prediction | null | 0 | null | null | iclr | 0.648886 | 0 | null | main | 5.25 | 4;5;5;7 | null | null | Optimal Transport Graph Neural Networks | null | null | 0 | 4 | Reject | 4;4;3;5 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | distribution shift;ood | null | 0 | null | null | iclr | -0.316228 | 0 | null | main | 6 | 4;5;7;8 | null | null | A Critical Analysis of Distribution Shift | null | null | 0 | 4.5 | Reject | 5;4;5;4 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | robustness;domain adaptation;spurious correlation;dataset bias | null | 0 | null | null | iclr | 0.13484 | 0 | null | main | 4.5 | 3;4;5;6 | null | null | Learning Robust Models by Countering Spurious Correlations | null | null | 0 | 3.25 | Reject | 4;2;3;4 | null
null | The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213 | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Graph Neural Network;Continual Learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 5 | 4;5;5;6 | null | null | Bridging Graph Network to Lifelong Learning with Feature Interaction | https://github.com/wang-chen/LGL | null | 0 | 4 | Reject | 4;4;4;4 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Interpolation;autoencoder;reconstruction;few-shot learning;few-shot image generation;generalization;augmentation | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 4.333333 | 4;4;5 | null | null | Augmentation-Interpolative AutoEncoders for Unsupervised Few-Shot Image Generation | null | null | 0 | 4.333333 | Reject | 4;5;4 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | 0 | 0 | null | main | 6 | 5;6;7 | null | null | Group-Connected Multilayer Perceptron Networks | null | null | 0 | 4.666667 | Reject | 5;4;5 | null
null | Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3115; None | null | 0 | null | null | null | null | null | Yoonhyung Lee, Joongbo Shin, Kyomin Jung | https://iclr.cc/virtual/2021/poster/3115 | text-to-speech;speech synthesis;non-autoregressive;VAE | null | 0 | null | null | iclr | -0.927173 | 0 | null | main | 6.25 | 5;6;6;8 | null | https://iclr.cc/virtual/2021/poster/3115 | Bidirectional Variational Inference for Non-Autoregressive Text-to-Speech | null | null | 0 | 4.75 | Poster | 5;5;5;4 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.942809 | 0 | null | main | 5 | 3;5;6;6 | null | null | GSdyn: Learning training dynamics via online Gaussian optimization with gradient states | null | null | 0 | 3.25 | Withdraw | 4;3;3;3 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Semi-supervised feature representation;counterfactual explanations | null | 0 | null | null | iclr | -1 | 0 | null | main | 4.75 | 4;4;5;6 | null | null | Semi-supervised counterfactual explanations | null | null | 0 | 4.25 | Reject | 5;5;4;3 | null
null | Paper under double-blind review | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Unsupervised Representation Learning;Neighbor Clustering;Variational Autoencoder;Unsupervised Classification | null | 0 | null | null | iclr | 0 | 0 | null | main | 4.333333 | 3;5;5 | null | null | AC-VAE: Learning Semantic Representation with VAE for Adaptive Clustering | null | null | 0 | 3 | Reject | 3;3;3 | null
null | Rensselaer Polytechnic Institute, USA; Northeastern University, USA; MIT-IBM Watson AI Lab, IBM Research, USA | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2906; None | null | 0 | null | null | null | null | null | Ren Wang, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Chuang Gan, Meng Wang | https://iclr.cc/virtual/2021/poster/2906 | null | null | 0 | null | null | iclr | 0 | 0 | null | main | 6 | 6;6;6;6 | null | https://iclr.cc/virtual/2021/poster/2906 | On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning | https://github.com/wangren09/MetaAdv | null | 0 | 3.5 | Poster | 2;4;4;4 | null
null | Northeastern University, Boston, MA, USA | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2526; None | null | 0 | null | null | null | null | null | Huan Wang, Can Qin, Yulun Zhang, Yun Fu | https://iclr.cc/virtual/2021/poster/2526 | model compression;deep neural network pruning;Hessian matrix;regularization | null | 0 | null | null | iclr | 0 | 0 | null | main | 7 | 6;7;7;8 | null | https://iclr.cc/virtual/2021/poster/2526 | Neural Pruning via Growing Regularization | https://github.com/mingsun-tse/regularization-pruning | null | 0 | 4.75 | Poster | 5;5;4;5 | null
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | reinforcement learning;flexibility design;policy gradient;combinatorial optimization | null | 0 | null | null | iclr | -0.57735 | 0 | null | main | 4.25 | 4;4;4;5 | null | null | Reinforcement Learning for Flexibility Design Problems | null | null | 0 | 4.5 | Withdraw | 5;5;4;4 | null
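Each row above follows the 36-column order given in this file's header (pdf, aff, year, ..., confidence, empirical_novelty). A minimal parsing sketch, assuming plain `|`-separated fields with the literal string `null` marking missing values (the `COLUMNS` list and `parse_row` helper are illustrative, not part of any dataset library):

```python
# Column order as listed in this file's header.
COLUMNS = [
    "pdf", "aff", "year", "technical_novelty_avg", "video", "doi",
    "presentation_avg", "proceeding", "presentation", "sess",
    "technical_novelty", "arxiv", "author", "site", "keywords", "oa",
    "empirical_novelty_avg", "poster", "openreview", "conference",
    "corr_rating_confidence", "corr_rating_correctness", "project", "track",
    "rating_avg", "rating", "correctness", "slides", "title", "github",
    "authors", "correctness_avg", "confidence_avg", "status", "confidence",
    "empirical_novelty",
]

def parse_row(line: str) -> dict:
    """Split a '|'-separated record, map fields onto COLUMNS, and turn 'null' into None."""
    fields = [f.strip() for f in line.strip().strip("|").split("|")]
    return {col: (None if val == "null" else val) for col, val in zip(COLUMNS, fields)}

# Example: the last record in this dump.
row = parse_row(
    "null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null"
    " | null | null | reinforcement learning;flexibility design;policy gradient;"
    "combinatorial optimization | null | 0 | null | null | iclr | -0.57735 | 0"
    " | null | main | 4.25 | 4;4;4;5 | null | null | Reinforcement Learning for"
    " Flexibility Design Problems | null | null | 0 | 4.5 | Withdraw | 5;5;4;4 | null"
)
```

Multi-valued fields such as `rating` and `confidence` stay semicolon-separated (one entry per reviewer) and can be split further with `row["rating"].split(";")` if needed.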