Column schema (dtype and length/cardinality summary from the dataset viewer; all columns are nullable, marked ⌀ in the source):

| Column | Dtype | Lengths / values |
|---|---|---|
| pdf | string | lengths 49–199 |
| aff | string | lengths 1–1.36k |
| year | string | 19 classes |
| technical_novelty_avg | float64 | 0–4 |
| video | string | lengths 21–47 |
| doi | string | lengths 31–63 |
| presentation_avg | float64 | 0–4 |
| proceeding | string | lengths 43–129 |
| presentation | string | 796 classes |
| sess | string | 576 classes |
| technical_novelty | string | 700 classes |
| arxiv | string | lengths 10–16 |
| author | string | lengths 1–1.96k |
| site | string | lengths 37–191 |
| keywords | string | lengths 2–582 |
| oa | string | lengths 86–198 |
| empirical_novelty_avg | float64 | 0–4 |
| poster | string | lengths 57–95 |
| openreview | string | lengths 41–45 |
| conference | string | 11 classes |
| corr_rating_confidence | float64 | −1–1 |
| corr_rating_correctness | float64 | −1–1 |
| project | string | lengths 1–162 |
| track | string | 3 classes |
| rating_avg | float64 | 0–10 |
| rating | string | lengths 1–17 |
| correctness | string | 809 classes |
| slides | string | lengths 32–41 |
| title | string | lengths 2–192 |
| github | string | lengths 3–165 |
| authors | string | lengths 7–161 |
| correctness_avg | float64 | 0–5 |
| confidence_avg | float64 | 0–5 |
| status | string | 22 classes |
| confidence | string | lengths 1–17 |
| empirical_novelty | string | 763 classes |
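The `corr_rating_confidence` column appears to be the Pearson correlation between a paper's semicolon-separated per-review `rating` and `confidence` strings; it can be recomputed from those two fields. A minimal sketch, checked against the first preview row (ratings `5;5;6;6`, confidences `3;2;3;3`, listed correlation 0.57735) — the helper names are my own, not part of the dataset:

```python
import math


def parse_scores(s):
    """Parse a semicolon-separated score string like '5;5;6;6' into floats."""
    return [float(x) for x in s.split(";")]


def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# First preview row: rating '5;5;6;6', confidence '3;2;3;3'
r = parse_scores("5;5;6;6")
c = parse_scores("3;2;3;3")
print(round(pearson(r, c), 6))  # ≈ 0.57735, matching corr_rating_confidence
```

The second preview row (ratings `4;4;4;5`, confidences `4;5;5;4`) reproduces its listed value of −0.57735 the same way.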
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
meta-learning;active-learning;safe learning
| null | 0 | null | null |
iclr
| 0.57735 | 0 | null |
main
| 5.5 |
5;5;6;6
| null | null |
Meta-Active Learning in Probabilistically-Safe Optimization
| null | null | 0 | 2.75 |
Reject
|
3;2;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Semi-supervised Domain Adaptation
| null | 0 | null | null |
iclr
| -0.57735 | 0 | null |
main
| 4.25 |
4;4;4;5
| null | null |
Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation
| null | null | 0 | 4.5 |
Withdraw
|
4;5;5;4
| null |
null |
Qualcomm AI Research; Qualcomm AI Research, Department of Electrical Engineering, Eindhoven University of Technology
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3371; None
| null | 0 | null | null | null | null | null |
Ties van Rozendaal, Iris Huijben, Taco Cohen
|
https://iclr.cc/virtual/2021/poster/3371
|
Neural data compression;Learned compression;Generative modeling;Overfitting;Finetuning;Instance learning;Instance adaptation;Variational autoencoders;Rate-distortion optimization;Model compression;Weight quantization
| null | 0 | null | null |
iclr
| 0.57735 | 0 | null |
main
| 6.5 |
6;6;7;7
| null |
https://iclr.cc/virtual/2021/poster/3371
|
Overfitting for Fun and Profit: Instance-Adaptive Data Compression
| null | null | 0 | 3.75 |
Poster
|
4;3;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
deep neural network;quantization;neural architecture search;image classification;reduced precision;inference
| null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 5.666667 |
5;6;6
| null | null |
Uniform-Precision Neural Network Quantization via Neural Channel Expansion
| null | null | 0 | 4.666667 |
Reject
|
5;5;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Reinforcement learning;Recommendation system
| null | 0 | null | null |
iclr
| -0.904534 | 0 | null |
main
| 5.5 |
4;4;7;7
| null | null |
Offline Adaptive Policy Leaning in Real-World Sequential Recommendation Systems
| null | null | 0 | 3.25 |
Reject
|
4;4;2;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Representation Learning;Deep Learning Theory;Face Recognition;Image Classification;Parametric Estimation Theory
| null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 4.666667 |
4;4;6
| null | null |
Detection Booster Training: A detection booster training method for improving the accuracy of classifiers.
| null | null | 0 | 4 |
Reject
|
5;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Graph Neural Network;Normalization;Graph Normalization
| null | 0 | null | null |
iclr
| 0.57735 | 0 | null |
main
| 3.75 |
3;4;4;4
| null | null |
Learning Graph Normalization for Graph Neural Networks
| null | null | 0 | 4.5 |
Reject
|
4;4;5;5
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Deep Metric Learning;Representation Learning
| null | 0 | null | null |
iclr
| 0.19245 | 0 | null |
main
| 5.25 |
4;4;6;7
| null | null |
S2SD: Simultaneous Similarity-based Self-Distillation for Deep Metric Learning
| null | null | 0 | 4.5 |
Withdraw
|
5;4;4;5
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
SGD;Stability;Generalization;Deep Learning
| null | 0 | null | null |
iclr
| -0.408248 | 0 | null |
main
| 5 |
4;4;5;7
| null | null |
Revisiting the Stability of Stochastic Gradient Descent: A Tightness Analysis
| null | null | 0 | 3.5 |
Reject
|
3;4;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
convolutional attention networks;denoising;computer vision;camera-based physiology
| null | 0 | null | null |
iclr
| -0.654654 | 0 | null |
main
| 6 |
4;5;9
| null | null |
The Benefit of Distraction: Denoising Remote Vitals Measurements Using Inverse Attention
| null | null | 0 | 4.333333 |
Reject
|
5;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Protein Structure;Proteins;Contact Prediction;Representation Learning;Language Modeling;Attention;Transformer;BERT;Markov Random Fields;Potts Models;Self-supervised learning
| null | 0 | null | null |
iclr
| -0.852803 | 0 | null |
main
| 5.75 |
5;5;6;7
| null | null |
Single Layers of Attention Suffice to Predict Protein Contacts
| null | null | 0 | 4 |
Reject
|
5;4;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Meta-learning;hierarchical data;clustering
| null | 0 | null | null |
iclr
| -0.454545 | 0 | null |
main
| 3.75 |
3;3;4;5
| null | null |
Model agnostic meta-learning on trees
| null | null | 0 | 3.75 |
Reject
|
3;5;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
one-class classification;image classification;object classification;self-supervised learning;geometric robustness
| null | 0 | null | null |
iclr
| 0.522233 | 0 | null |
main
| 4.75 |
4;4;5;6
| null | null |
One-class Classification Robust to Geometric Transformation
| null | null | 0 | 3.75 |
Reject
|
4;3;4;4
| null |
null |
University of Toronto, Vector Institute, CIFAR; Google Research, University of Colorado, Boulder; University of Toronto, Vector Institute
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2534; None
| null | 0 | null | null | null | null | null |
Mengye Ren, Michael L Iuzzolino, Michael Mozer, Richard Zemel
|
https://iclr.cc/virtual/2021/poster/2534
|
Few-shot learning;continual learning;lifelong learning
| null | 0 | null | null |
iclr
| 0.57735 | 0 | null |
main
| 6.75 |
6;7;7;7
| null |
https://iclr.cc/virtual/2021/poster/2534
|
Wandering within a world: Online contextualized few-shot learning
|
https://github.com/renmengye/oc-fewshot-public
| null | 0 | 3.5 |
Poster
|
3;3;4;4
| null |
null |
IRI, CSIC-UPC; UC San Diego; NYU; Technical University of Denmark; UC Berkeley
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2761; None
| null | 0 | null | null | null | null | null |
Nicklas Hansen, Rishabh Jangir, Yu Sun, Guillem Alenyà, Pieter Abbeel, Alexei Efros, Lerrel Pinto, Xiaolong Wang
|
https://iclr.cc/virtual/2021/poster/2761
|
reinforcement learning;robotics;self-supervised learning;generalization;sim2real
| null | 0 | null | null |
iclr
| 0 | 0 |
https://nicklashansen.github.io/PAD/
|
main
| 7 |
7;7;7;7
| null |
https://iclr.cc/virtual/2021/poster/2761
|
Self-Supervised Policy Adaptation during Deployment
|
https://github.com/nicklashansen/PAD
| null | 0 | 4 |
Spotlight
|
4;4;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| -0.522233 | 0 | null |
main
| 4.25 |
3;4;5;5
| null | null |
ChemistryQA: A Complex Question Answering Dataset from Chemistry
| null | null | 0 | 3.75 |
Reject
|
4;4;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| -0.852803 | 0 | null |
main
| 5.25 |
4;5;6;6
| null | null |
FMix: Enhancing Mixed Sample Data Augmentation
| null | null | 0 | 4 |
Reject
|
5;4;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
graph convolutional neural networks;label propagation;semi-supervised node classification
| null | 0 | null | null |
iclr
| -0.662266 | 0 | null |
main
| 4.75 |
3;5;5;6
| null | null |
Unifying Graph Convolutional Neural Networks and Label Propagation
| null | null | 0 | 3.75 |
Reject
|
4;4;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
controllable text summarization
| null | 0 | null | null |
iclr
| -0.852803 | 0 | null |
main
| 6.25 |
5;6;7;7
| null | null |
CTRLsum: Towards Generic Controllable Text Summarization
| null | null | 0 | 4 |
Reject
|
5;4;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
graph autoencoders;graph deconvolutional networks
| null | 0 | null | null |
iclr
| -0.942809 | 0 | null |
main
| 5 |
3;5;6;6
| null | null |
Graph Autoencoders with Deconvolutional Networks
| null | null | 0 | 4.25 |
Reject
|
5;4;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
GAN;generative adversarial networks;generative model;image synthesis;sample weighting;importance weighting;cost function;loss;mode collapse;mode dropping;coverage;divergence;FID;training dynamics;NS-GAN;MM-GAN;non-saturating;minimax
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 6 |
6;6;6;6
| null | null |
Sample weighting as an explanation for mode collapse in generative adversarial networks
| null | null | 0 | 3.75 |
Reject
|
3;4;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Continual Learning;Regularisation;Fisher Information
| null | 0 | null | null |
iclr
| 0.324443 | 0 | null |
main
| 4.75 |
3;5;5;6
| null | null |
Unifying Regularisation Methods for Continual Learning
| null | null | 0 | 4 |
Reject
|
4;3;4;5
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Deep networks;ensembles;reproducibility
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 3 |
3;3;3;3
| null | null |
Anti-Distillation: Improving Reproducibility of Deep Networks
| null | null | 0 | 3.5 |
Reject
|
3;3;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
unsupervised pretraining;greedy layer-wise pretraining;transfer learning;orthogonality
| null | 0 | null | null |
iclr
| -0.329293 | 0 | null |
main
| 5.25 |
3;4;5;9
| null | null |
Reviving Autoencoder Pretraining
| null | null | 0 | 3.5 |
Reject
|
3;4;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
graph;deep;learning;pooling
| null | 0 | null | null |
iclr
| -1 | 0 | null |
main
| 3.75 |
3;3;4;5
| null | null |
Graph Pooling by Edge Cut
| null | null | 0 | 4.25 |
Reject
|
5;5;4;3
| null |
null |
University of Padova; Huawei Noah’s Ark Lab; Huawei Technologies Co., Ltd.; University of Copenhagen
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3249; None
| null | 0 | null | null | null | null | null |
Wang Benyou, Lifeng Shang, Christina Lioma, Xin Jiang, Hao Yang, Qun Liu, Jakob Simonsen
|
https://iclr.cc/virtual/2021/poster/3249
|
Position Embedding;BERT;pretrained language model.
| null | 0 | null | null |
iclr
| -0.522233 | 0 | null |
main
| 6.75 |
6;6;7;8
| null |
https://iclr.cc/virtual/2021/poster/3249
|
On Position Embeddings in BERT
| null | null | 0 | 4.25 |
Poster
|
5;4;4;4
| null |
null |
Hitachi, Ltd., Tokyo, Japan
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2970; None
| null | 0 | null | null | null | null | null |
Naoyuki Terashita, Hiroki Ohashi, Yuichi Nonaka, Takashi Kanemaru
|
https://iclr.cc/virtual/2021/poster/2970
|
influence;generative adversarial networks;data cleansing
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 6.666667 |
6;7;7
| null |
https://iclr.cc/virtual/2021/poster/2970
|
Influence Estimation for Generative Adversarial Networks
| null | null | 0 | 3 |
Spotlight
|
3;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 0.188982 | 0 | null |
main
| 5.666667 |
4;6;7
| null | null |
Augmented Sliced Wasserstein Distances
| null | null | 0 | 3.333333 |
Reject
|
3;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Deep Learning: Applications;Methodology;and Theory;Recognition: Detection;Categorization;Retrieval and Matching;Scene Understanding;Visual Reasoning
| null | 0 | null | null |
iclr
| -0.98644 | 0 | null |
main
| 3.75 |
2;3;5;5
| null | null |
Multilayer Dense Connections for Hierarchical Concept Classification
| null | null | 0 | 3.75 |
Reject
|
5;4;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Convolution neural network;Interpretability performance;Markov random field
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 3.666667 |
3;3;5
| null | null |
A self-explanatory method for the black box problem on discrimination part of CNN
| null | null | 0 | 3 |
Reject
|
4;2;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
causal inference;variational inference;disentanglement;variational autoencoder
| null | 0 | null | null |
iclr
| -0.942809 | 0 | null |
main
| 5 |
3;5;6;6
| null | null |
Targeted VAE: Structured Inference and Targeted Learning for Causal Parameter Estimation
| null | null | 0 | 4.25 |
Reject
|
5;4;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
neural network robustness;adversarial examples
| null | 0 | null | null |
iclr
| -0.440225 | 0 | null |
main
| 4.25 |
3;3;4;7
| null | null |
NETWORK ROBUSTNESS TO PCA PERTURBATIONS
| null | null | 0 | 4.25 |
Reject
|
4;5;4;4
| null |
null |
Department of Computer Science, Czech Technical University in Prague
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2929; None
| null | 0 | null | null | null | null | null |
Gustav Sourek, Filip Zelezny, Ondrej Kuzelka
|
https://iclr.cc/virtual/2021/poster/2929
|
weight sharing;graph neural networks;lifted inference;relational learning;dynamic computation graphs;convolutional models
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5.666667 |
5;6;6
| null |
https://iclr.cc/virtual/2021/poster/2929
|
Lossless Compression of Structured Convolutional Models via Lifting
| null | null | 0 | 2 |
Poster
|
2;3;1
| null |
null |
Yale University; Microsoft Research; The Pennsylvania State University
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3270; None
| null | 0 | null | null | null | null | null |
Tao Yu, Rui Zhang, Alex Polozov, Christopher Meek, Ahmed H Awadallah
|
https://iclr.cc/virtual/2021/poster/3270
| null | null | 0 | null | null |
iclr
| -0.813489 | 0 | null |
main
| 6.2 |
4;6;7;7;7
| null |
https://iclr.cc/virtual/2021/poster/3270
|
SCoRe: Pre-Training for Context Representation in Conversational Semantic Parsing
| null | null | 0 | 4 |
Poster
|
5;4;4;3;4
| null |
null |
Harvard University, Cambridge, MA 02138, USA
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2935; None
| null | 0 | null | null | null | null | null |
Kenji Kawaguchi
|
https://iclr.cc/virtual/2021/poster/2935
|
Implicit Deep Learning;Deep Equilibrium Models;Gradient Descent;Learning Theory;Non-Convex Optimization
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 7.25 |
7;7;7;8
| null |
https://iclr.cc/virtual/2021/poster/2935
|
On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers
| null | null | 0 | 3 |
Spotlight
|
3;3;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Stability;Linear Regression;Kernel Regression;Cross Validation Leave One Out Stability;Minimum norm solutions;Interpolation;Double Descent
| null | 0 | null | null |
iclr
| 0.707107 | 0 | null |
main
| 7 |
6;6;8;8
| null | null |
For interpolating kernel machines, minimizing the norm of the ERM solution minimizes stability
| null | null | 0 | 4 |
Reject
|
3;4;4;5
| null |
null |
Huawei Noah's Ark Lab, Paris, France
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2884; None
| null | 0 | null | null | null | null | null |
Balázs Kégl, Gabriel Hurtado, Albert Thomas
|
https://iclr.cc/virtual/2021/poster/2884
|
model-based reinforcement learning;generative models;mixture density nets;dynamic systems;heteroscedasticity
| null | 0 | null | null |
iclr
| -0.395285 | 0 | null |
main
| 6.4 |
5;6;7;7;7
| null |
https://iclr.cc/virtual/2021/poster/2884
|
Model-based micro-data reinforcement learning: what are the crucial model properties and which model to choose?
| null | null | 0 | 3 |
Poster
|
3;4;3;2;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
physics-aware learning;spatiotemporal graph signals;few shot learning
| null | 0 | null | null |
iclr
| -0.288675 | 0 | null |
main
| 6 |
5;5;6;6;8
| null | null |
Physics-aware Spatiotemporal Modules with Auxiliary Tasks for Meta-Learning
| null | null | 0 | 3 |
Reject
|
4;3;3;2;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Federated Learning
| null | 0 | null | null |
iclr
| -0.333333 | 0 | null |
main
| 4.25 |
4;4;4;5
| null | null |
Sself: Robust Federated Learning against Stragglers and Adversaries
| null | null | 0 | 3.25 |
Reject
|
3;4;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Data Augmentation;Data Bias;Non-convex Optimization;Deep Learning Theory
| null | 0 | null | null |
iclr
| 0.866025 | 0 | null |
main
| 5 |
4;4;5;7
| null | null |
WeMix: How to Better Utilize Data Augmentation
| null | null | 0 | 3 |
Reject
|
3;2;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| -0.345857 | 0 | null |
main
| 4.5 |
3;3;5;7
| null | null |
Spatially Decomposed Hinge Adversarial Loss by Local Gradient Amplifier
| null | null | 0 | 3.75 |
Withdraw
|
5;4;2;4
| null |
null |
University of California, Los Angeles; Caltech
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2864; None
| null | 0 | null | null | null | null | null |
Michael Kleinman, Alessandro Achille, Daksh Idnani, Jonathan Kao
|
https://iclr.cc/virtual/2021/poster/2864
|
Usable Information;Representation Learning;Learning Dynamics;Initialization;SGD
| null | 0 | null | null |
iclr
| -0.333333 | 0 | null |
main
| 6 |
3;7;7;7
| null |
https://iclr.cc/virtual/2021/poster/2864
|
Usable Information and Evolution of Optimal Representations During Training
| null | null | 0 | 3.75 |
Poster
|
4;4;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
unsupervised representation learning;unsupervised scene representation;unsupervised scene decomposition;generative models
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
3;6;6
| null | null |
R-MONet: Region-Based Unsupervised Scene Decomposition and Representation via Consistency of Object Representations
| null | null | 0 | 3 |
Reject
|
3;3;3
| null |
null |
University of Illinois at Urbana-Champaign, USA; Mila - Quebec AI Institute, Universite de Montreal, Canada; Peking University, China; Mila - Quebec AI Institute, Universite de Montreal, Canadian Institute for Advanced Research (CIFAR), Canada; Mila - Quebec AI Institute, Universite de Montreal, HEC Montreal, Canada
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2628; None
| null | 0 | null | null | null | null | null |
Minkai Xu, Shitong Luo, Yoshua Bengio, Jian Peng, Jian Tang
|
https://iclr.cc/virtual/2021/poster/2628
|
Molecular conformation generation;deep generative models;continuous normalizing flow;energy-based models
| null | 0 | null | null |
iclr
| 1 | 0 | null |
main
| 6.333333 |
6;6;7
| null |
https://iclr.cc/virtual/2021/poster/2628
|
Learning Neural Generative Dynamics for Molecular Conformation Generation
|
https://github.com/DeepGraphLearning/CGCF-ConfGen
| null | 0 | 3.333333 |
Poster
|
3;3;4
| null |
null |
Under double-blind review
|
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| -0.904534 | 0 | null |
main
| 4 |
3;3;5;5
| null | null |
AttackDist: Characterizing Zero-day Adversarial Samples by Counter Attack
| null | null | 0 | 4.25 |
Reject
|
5;5;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
CCA;streaming;differential geometry;DeepCCA;fairness
| null | 0 | null | null |
iclr
| -0.4842 | 0 | null |
main
| 5.75 |
4;6;6;7
| null | null |
Stochastic Canonical Correlation Analysis: A Riemannian Approach
| null | null | 0 | 3.25 |
Reject
|
4;2;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Graph Neural Networks;Dynamic Graph;Signal Processing
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4.5 |
4;4;5;5
| null | null |
Dynamic Graph Representation Learning with Fourier Temporal State Embedding
| null | null | 0 | 4.5 |
Reject
|
4;5;4;5
| null |
null |
ICSI and Department of Statistics, University of California, Berkeley, USA; G-STATS Data Science Chair, GIPSA-lab, University Grenobles-Alpes, France
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2523; None
| null | 0 | null | null | null | null | null |
Zhenyu Liao, Romain Couillet, Michael W Mahoney
|
https://iclr.cc/virtual/2021/poster/2523
|
Eigenspectrum;high-dimensional statistic;random matrix theory;spectral clustering
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 6.75 |
6;7;7;7
| null |
https://iclr.cc/virtual/2021/poster/2523
|
Sparse Quantized Spectral Clustering
| null | null | 0 | 3 |
Spotlight
|
3;2;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 4 |
3;4;5
| null | null |
Difference-in-Differences: Bridging Normalization and Disentanglement in PG-GAN
| null | null | 0 | 3.666667 |
Withdraw
|
4;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
weight decay;effective learning rate;cross-boundary risk;hyperparameter tuning
| null | 0 | null | null |
iclr
| -1 | 0 | null |
main
| 4.25 |
4;4;4;5
| null | null |
FixNorm: Dissecting Weight Decay for Training Deep Neural Networks
| null | null | 0 | 3.75 |
Withdraw
|
4;4;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
transfer learning;deep learning
| null | 0 | null | null |
iclr
| -0.57735 | 0 | null |
main
| 4.25 |
4;4;4;5
| null | null |
XMixup: Efficient Transfer Learning with Auxiliary Samples by Cross-Domain Mixup
| null | null | 0 | 4.5 |
Reject
|
5;4;5;4
| null |
null |
Corporate Technology, Siemens AG; Department of Informatics, Technical University of Munich; Institute of Informatics, LMU Munich; Institute of Informatics, LMU Munich; Corporate Technology, Siemens AG
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3378; None
| null | 0 | null | null | null | null | null |
Zhen Han, Peng Chen, Yunpu Ma, Volker Tresp
|
https://iclr.cc/virtual/2021/poster/3378
|
Temporal knowledge graph;future link prediction;graph neural network;subgraph reasoning.
| null | 0 | null | null |
iclr
| 0.650011 | 0 | null |
main
| 5.2 |
1;6;6;6;7
| null |
https://iclr.cc/virtual/2021/poster/3378
|
Explainable Subgraph Reasoning for Forecasting on Temporal Knowledge Graphs
| null | null | 0 | 3.6 |
Poster
|
3;3;4;4;4
| null |
null |
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3305; None
| null | 0 | null | null | null | null | null |
Zhou Xian, Shamit Lal, Hsiao-Yu Tung, Anthony Platanios, Katerina Fragkiadaki
|
https://iclr.cc/virtual/2021/poster/3305
| null | null | 0 | null | null |
iclr
| 0.333333 | 0 | null |
main
| 6.25 |
6;6;6;7
| null |
https://iclr.cc/virtual/2021/poster/3305
|
HyperDynamics: Meta-Learning Object and Agent Dynamics with Hypernetworks
| null | null | 0 | 3.75 |
Poster
|
3;4;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Adversarial Robustness;Uncertainty Promotion;Adversarial Training
| null | 0 | null | null |
iclr
| -0.973329 | 0 | null |
main
| 5 |
4;5;5;6
| null | null |
Rethinking Uncertainty in Deep Learning: Whether and How it Improves Robustness
| null | null | 0 | 3.75 |
Reject
|
5;4;4;2
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Graph Neural Networks;Reinforcement Learning;Attention Mechanism;Adaptive Receptive Fields
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
5;5;5;5
| null | null |
Learning Discrete Adaptive Receptive Fields for Graph Convolutional Networks
| null | null | 0 | 3.75 |
Reject
|
4;2;4;5
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Representation Learning;Knowledge Graph;Entity Alignment;Knowledge Graph Embedding
| null | 0 | null | null |
iclr
| -0.333333 | 0 | null |
main
| 5.75 |
5;5;5;8
| null | null |
Towards Principled Representation Learning for Entity Alignment
| null | null | 0 | 3.25 |
Reject
|
3;4;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Self-Supervised Reinforcement Learning;Continuous Control;Zeroth-Order Optimization
| null | 0 | null | null |
iclr
| -0.57735 | 0 | null |
main
| 3.75 |
3;4;4;4
| null | null |
Self-Supervised Continuous Control without Policy Gradient
| null | null | 0 | 3.5 |
Withdraw
|
4;4;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
constrained reinforcement learning;multi-objective reinforcement learning;continuous control;deep reinforcement learning
| null | 0 | null | null |
iclr
| -0.188982 | 0 | null |
main
| 5.666667 |
4;6;7
| null | null |
Explicit Pareto Front Optimization for Constrained Reinforcement Learning
| null | null | 0 | 2.666667 |
Reject
|
3;2;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Deep Learning;Imbalance;Multilabel;Classification
| null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 5 |
4;5;6
| null | null |
PLM: Partial Label Masking for Imbalanced Multi-label Classification
| null | null | 0 | 3.333333 |
Withdraw
|
4;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Differentiable;Automated machine learning;Neural Architecture Search;Data Augment;Hyperparameter Optimization
| null | 0 | null | null |
iclr
| -0.904534 | 0 | null |
main
| 4.75 |
4;4;5;6
| null | null |
DiffAutoML: Differentiable Joint Optimization for Efficient End-to-End Automated Machine Learning
| null | null | 0 | 3.5 |
Reject
|
4;4;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
model interpretation;attention mechanism;causal effect estimation
| null | 0 | null | null |
iclr
| -0.239046 | 0 | null |
main
| 4.75 |
3;4;5;7
| null | null |
Why is Attention Not So Interpretable?
| null | null | 0 | 4 |
Withdraw
|
5;3;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
visual hard attention;glimpses;explainability;bayesian optimal experiment design;variational autoencoder
| null | 0 | null | null |
iclr
| 0.471405 | 0 | null |
main
| 4.25 |
4;4;4;5
| null | null |
Achieving Explainability in a Visual Hard Attention Model through Content Prediction
| null | null | 0 | 3 |
Reject
|
4;1;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
active learning;bayesian learning;machine learning testing;information theory
| null | 0 | null | null |
iclr
| -0.375823 | 0 | null |
main
| 5.25 |
3;4;6;8
| null | null |
ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms
| null | null | 0 | 4.25 |
Reject
|
4;5;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Amodal perception;deep learning;segmentation;partial occlusion
| null | 0 | null | null |
iclr
| -0.790569 | 0 | null |
main
| 5 |
4;5;5;5;6
| null | null |
Weakly-Supervised Amodal Instance Segmentation with Compositional Priors
| null | null | 0 | 3.4 |
Withdraw
|
4;3;4;4;2
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
meta-reinforcement learning;reinforcement learning;multi-task;non-stationary;task representations;regularization
| null | 0 | null | null |
iclr
| -0.57735 | 0 | null |
main
| 5.5 |
5;5;6;6
| null | null |
Meta-Reinforcement Learning With Informed Policy Regularization
| null | null | 0 | 3.75 |
Reject
|
4;4;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Mesh Reconstruction;Multi-view Stereo;Deep Learning
| null | 0 | null | null |
iclr
| -0.802955 | 0 | null |
main
| 6.333333 |
4;6;9
| null | null |
MeshMVS: Multi-view Stereo Guided Mesh Reconstruction
| null | null | 0 | 3.333333 |
Reject
|
4;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Unsupervised Learning;Representation Learning;Scene Decomposition;Computer Vision
| null | 0 | null | null |
iclr
| 0.688247 | 0 | null |
main
| 5.75 |
4;6;6;7
| null | null |
Unsupervised Video Decomposition using Spatio-temporal Iterative Inference
| null | null | 0 | 3.5 |
Reject
|
3;3;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Transformers;attention;efficient
| null | 0 | null | null |
iclr
| -0.852803 | 0 | null |
main
| 4.75 |
4;4;5;6
| null | null |
An Attention Free Transformer
| null | null | 0 | 4 |
Reject
|
4;5;4;3
| null |
null |
University of Washington and Microsoft Research; Princeton University; University of Washington
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2535; None
| null | 0 | null | null | null | null | null |
Simon Du, Wei Hu, Sham M Kakade, Jason Lee, Qi Lei
|
https://iclr.cc/virtual/2021/poster/2535
|
representation learning;statistical learning theory
| null | 0 | null | null |
iclr
| -0.870388 | 0 | null |
main
| 6.75 |
6;6;7;8
| null |
https://iclr.cc/virtual/2021/poster/2535
|
Few-Shot Learning via Learning the Representation, Provably
| null | null | 0 | 3.75 |
Poster
|
4;4;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Wasserstein distributional normalization;Noisy labels;Classification
| null | 0 | null | null |
iclr
| -0.707107 | 0 | null |
main
| 5 |
4;4;5;6;6
| null | null |
Wasserstein Distributional Normalization : Nonparametric Stochastic Modeling for Handling Noisy Labels
| null | null | 0 | 3 |
Reject
|
4;3;3;2;3
| null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | representation learning for natural language processing;pretrained word embeddings;iterative training method;model regularization | null | 0 | null | null | iclr | 0 | 0 | null | main | 3.666667 | 3;4;4 | null | null | Ruminating Word Representations with Random Noise Masking | http://github.com/Sweetblueday/GraVeR | null | 0 | 4 | Reject | 4;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Graph neural network;graph embedding;multi-robot/machine scheduling;Reinforcement learning;Mean-field inference | null | 0 | null | null | iclr | -0.426401 | 0 | null | main | 6.25 | 5;6;7;7 | null | null | Embedding a random graph via GNN: mean-field inference theory and RL applications to NP-Hard multi-robot/machine scheduling | null | null | 0 | 3 | Reject | 3;4;3;2 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.522233 | 0 | null | main | 4.5 | 4;4;4;6 | null | null | CAT-SAC: Soft Actor-Critic with Curiosity-Aware Entropy Temperature | null | null | 0 | 3.75 | Reject | 4;3;5;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | modular networks;transfer learning;domain adaptation;self-organization | null | 0 | null | null | iclr | 0 | 0 | null | main | 4.25 | 4;4;4;5 | null | null | Compositional Models: Multi-Task Learning and Knowledge Transfer with Modular Networks | null | null | 0 | 4 | Reject | 4;4;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | neural differential equation;neural ODE;SDE;GAN | null | 0 | null | null | iclr | 0.774597 | 0 | null | main | 4.5 | 3;4;5;6 | null | null | Neural SDEs Made Easy: SDEs are Infinite-Dimensional GANs | null | null | 0 | 3.75 | Reject | 3;4;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | deep learning;adversarial attack;robust certification | null | 0 | null | null | iclr | 0 | 0 | null | main | 5 | 4;5;6 | null | null | Provable Robustness by Geometric Regularization of ReLU Networks | null | null | 0 | 3.333333 | Reject | 3;4;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | inductive reasoning;deductive reasoning;neural network;memory;feature engineering | null | 0 | null | null | iclr | -0.57735 | 0 | null | main | 3.25 | 3;3;3;4 | null | null | Simple deductive reasoning tests and numerical data sets for exposing limitation of today's deep neural networks | null | null | 0 | 4 | Reject | 5;5;3;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Neural networks;Gaussian processes;model initialization;marginal likelihood | null | 0 | null | null | iclr | 0.816497 | 0 | null | main | 3.75 | 3;4;4;4 | null | null | Guiding Neural Network Initialization via Marginal Likelihood Maximization | null | null | 0 | 4 | Reject | 3;5;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Hardware Accelerator;High-Level-Synthesis;Machine Learning;Neural Network Quantization | null | 0 | null | null | iclr | -1 | 0 | null | main | 4.25 | 4;4;4;5 | null | null | TwinDNN: A Tale of Two Deep Neural Networks | null | null | 0 | 3.75 | Reject | 4;4;4;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Disentangling;Recommender Systems;VAE;Critiquing;Explainability | null | 0 | null | null | iclr | 0.707107 | 0 | null | main | 4.5 | 4;4;5;5 | null | null | Untangle: Critiquing Disentangled Recommendations | null | null | 0 | 4 | Reject | 3;4;5;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Bayesian deep learning;Laplace approximations;uncertainty quantification | null | 0 | null | null | iclr | 0.174078 | 0 | null | main | 5.25 | 4;4;6;7 | null | null | Learnable Uncertainty under Laplace Approximations | null | null | 0 | 3.25 | Reject | 2;4;4;3 | null |
null | Microsoft Research Asia; Zhejiang University; Microsoft Azure Speech | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2919; None | null | 0 | null | null | null | null | null | Yi Ren, Chenxu Hu, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, Tie-Yan Liu | https://iclr.cc/virtual/2021/poster/2919 | text to speech;speech synthesis;non-autoregressive generation;one-to-many mapping;end-to-end | null | 0 | null | null | iclr | -0.583333 | 0 | https://speechresearch.github.io/fastspeech2/ | main | 6.8 | 5;7;7;7;8 | null | https://iclr.cc/virtual/2021/poster/2919 | FastSpeech 2: Fast and High-Quality End-to-End Text to Speech | null | null | 0 | 4.6 | Poster | 5;4;5;5;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | disentanglement;disentangled representation learning;vae;generative model | null | 0 | null | null | iclr | 0 | 0 | null | main | 6 | 6;6;6;6 | null | null | Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modelling | null | null | 0 | 3.25 | Reject | 3;4;3;3 | null |
null | Department of Computer Science, Johns Hopkins University, Maryland, MD 21218, USA | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3073; None | null | 0 | null | null | null | null | null | Angtian Wang, Adam Kortylewski, Alan Yuille | https://iclr.cc/virtual/2021/poster/3073 | Pose Estimation;Robust Deep Learning;Contrastive Learning;Render-and-Compare | null | 0 | null | null | iclr | 0.57735 | 0 | null | main | 6.5 | 6;6;7;7 | null | https://iclr.cc/virtual/2021/poster/3073 | NeMo: Neural Mesh Models of Contrastive Features for Robust 3D Pose Estimation | https://github.com/Angtian/NeMo | null | 0 | 3.75 | Poster | 4;3;4;4 | null |
null | Google AI; University of Michigan, Google AI; University of Michigan | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3290; None | null | 0 | null | null | null | null | null | Yijie Guo, Shengyu Feng, Nicolas Le Roux, Ed H. Chi, Honglak Lee, Minmin Chen | https://iclr.cc/virtual/2021/poster/3290 | batch reinforcement learning;continuation method;relaxed regularization | null | 0 | null | null | iclr | -0.800641 | 0 | null | main | 6.5 | 4;6;7;9 | null | https://iclr.cc/virtual/2021/poster/3290 | Batch Reinforcement Learning Through Continuation Method | null | null | 0 | 4.25 | Poster | 5;4;4;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | distillation;deep learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 4.666667 | 4;5;5 | null | null | Neighbourhood Distillation: On the benefits of non end-to-end distillation | null | null | 0 | 4 | Reject | 4;4;4 | null |
null | Northwestern University; Princeton University | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2740; None | null | 0 | null | null | null | null | null | Zuyue Fu, Zhuoran Yang, Zhaoran Wang | https://iclr.cc/virtual/2021/poster/2740 | null | null | 0 | null | null | iclr | -0.666667 | 0 | null | main | 7 | 5;7;8;8 | null | https://iclr.cc/virtual/2021/poster/2740 | Single-Timescale Actor-Critic Provably Finds Globally Optimal Policy | null | null | 0 | 3 | Poster | 4;4;1;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | adversarial machine learning;adversarial examples;stock price forecasting;finance | null | 0 | null | null | iclr | 0.132453 | 0 | null | main | 5.25 | 4;5;5;7 | null | null | On the Robustness of Sentiment Analysis for Stock Price Forecasting | null | null | 0 | 3.75 | Reject | 4;4;3;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Pre-trained model;language model;Document understanding;Document intelligence;OCR | null | 0 | null | null | iclr | -0.301511 | 0 | null | main | 5.5 | 5;5;6;6 | null | null | BROS: A Pre-trained Language Model for Understanding Texts in Document | null | null | 0 | 3.75 | Reject | 3;5;3;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Gibbs sampling;stochastic gradient Langevin dynamics;design of MCMC;Bayesian bridge regression;variational autoencoders | null | 0 | null | null | iclr | 0 | 0 | null | main | 4.25 | 4;4;4;5 | null | null | MCMC-Interactive Variational Inference | null | null | 0 | 3 | Withdraw | 3;3;3;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Binary neural network;edge computing;neural network training | null | 0 | null | null | iclr | 0.984732 | 0 | null | main | 6 | 5;5;6;8 | null | null | Enabling Binary Neural Network Training on the Edge | null | null | 0 | 3.75 | Reject | 3;3;4;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | reinforcement learning;deep learning;benchmarks | null | 0 | null | null | iclr | -0.333333 | 0 | https://sites.google.com/view/d4rl-anonymous/ | main | 5 | 2;6;6;6 | null | null | D4RL: Datasets for Deep Data-Driven Reinforcement Learning | null | null | 0 | 4.75 | Reject | 5;5;5;4 | null |
null | Alibaba Group; Institute for Artificial Intelligence, Tsinghua University (THUAI), Beijing National Research Center for Information Science and Technology (BNRist), Department of Automation, Tsinghua University, Beijing, P.R.China; ByteDance AI Lab | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2823; None | null | 0 | null | null | null | null | null | Ziang Yan, Yiwen Guo, Jian Liang, Changshui Zhang | https://iclr.cc/virtual/2021/poster/2823 | hard-label attack;black-box attack;adversarial attack;reinforcement learning | null | 0 | null | null | iclr | -0.174078 | 0 | null | main | 6.75 | 6;7;7;7 | null | https://iclr.cc/virtual/2021/poster/2823 | Policy-Driven Attack: Learning to Query for Hard-label Black-box Adversarial Examples | https://github.com/ZiangYan/pda.pytorch | null | 0 | 3.75 | Poster | 4;5;3;3 | null |
null | Key Laboratory of Machine Perception and Intelligence (MOE), Peking University, Beijing, China; Center for Data Science, Peking University, Beijing, China | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2926; None | null | 0 | null | null | null | null | null | Ziyao Li, Shuwen Yang, Guojie Song, Lingsheng Cai | https://iclr.cc/virtual/2021/poster/2926 | Molecular Representation;Neural Physics Engines;Molecular Dynamics;Graph Neural Networks | null | 0 | null | null | iclr | 0 | 0 | null | main | 6.333333 | 5;7;7 | null | https://iclr.cc/virtual/2021/poster/2926 | Conformation-Guided Molecular Representation with Hamiltonian Neural Networks | null | null | 0 | 4 | Poster | 4;3;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Invariant Risk Minimization;Causal Machine Learning;Out-of-distribution Prediction | null | 0 | null | null | iclr | -0.544331 | 0 | null | main | 5.25 | 4;4;6;7 | null | null | Out-of-distribution Prediction with Invariant Risk Minimization: The Limitation and An Effective Fix | null | null | 0 | 4 | Reject | 5;4;3;4 | null |
null | DeepMind, University College London; Google; DeepMind | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3203; None | null | 0 | null | null | null | null | null | Jacob Menick, Erich Elsen, Utku Evci, Simon Osindero, Karen Simonyan, Alex Graves | https://iclr.cc/virtual/2021/poster/3203 | recurrent neural networks;backpropagation;biologically plausible;forward mode;real time recurrent learning;rtrl;bptt | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 7 | 6;7;7;8 | null | https://iclr.cc/virtual/2021/poster/3203 | Practical Real Time Recurrent Learning with a Sparse Approximation | null | null | 0 | 4 | Spotlight | 4;4;3;5 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Quantum Machine Learning;Hierarchical Models;Variational Inference;Model Interpretation;Computational Biology | null | 0 | null | null | iclr | -0.090909 | 0 | null | main | 3.75 | 3;3;4;5 | null | null | Hybrid Quantum-Classical Stochastic Networks with Boltzmann Layers | null | null | 0 | 3.75 | Withdraw | 3;5;3;4 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | graph neural networks;heterogeneous graphs;code summarization | null | 0 | null | null | iclr | 0 | 0 | null | main | 4 | 2;4;5;5 | null | null | Learning to Represent Programs with Heterogeneous Graphs | null | null | 0 | 4.75 | Withdraw | 5;4;5;5 | null |
null | Stanford University; Physics & Informatics Laboratories, NTT Research, Inc. | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2715; None | null | 0 | null | null | null | null | null | Daniel Kunin, Javier Sagastuy-Brena, Surya Ganguli, Daniel L Yamins, Hidenori Tanaka | https://iclr.cc/virtual/2021/poster/2715 | learning dynamics;symmetry;loss landscape;stochastic differential equation;modified equation analysis;conservation law;hessian;geometry;physics;gradient flow | null | 0 | null | null | iclr | 0.258199 | 0 | null | main | 6.5 | 5;6;7;8 | null | https://iclr.cc/virtual/2021/poster/2715 | Neural Mechanics: Symmetry and Broken Conservation Laws in Deep Learning Dynamics | null | null | 0 | 3.25 | Poster | 3;3;4;3 | null |
null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Implicit bias;overparametrized neural network;cubic spline interpolation;spatially adaptive smoothing spline;effective capacity | null | 0 | null | null | iclr | 0.228218 | 0 | null | main | 6 | 5;5;6;7;7 | null | null | Implicit bias of gradient descent for mean squared error regression with wide neural networks | null | null | 0 | 2.8 | Reject | 3;3;1;3;4 | null |