Column schema (string lengths and float ranges are min–max; ⌀ marks a nullable column):

| column | dtype | details |
|---|---|---|
| pdf | string | lengths 49–199, ⌀ |
| aff | string | lengths 1–1.36k, ⌀ |
| year | string | 19 classes |
| technical_novelty_avg | float64 | 0–4, ⌀ |
| video | string | lengths 21–47, ⌀ |
| doi | string | lengths 31–63, ⌀ |
| presentation_avg | float64 | 0–4, ⌀ |
| proceeding | string | lengths 43–129, ⌀ |
| presentation | string | 796 classes |
| sess | string | 576 classes |
| technical_novelty | string | 700 classes |
| arxiv | string | lengths 10–16, ⌀ |
| author | string | lengths 1–1.96k, ⌀ |
| site | string | lengths 37–191, ⌀ |
| keywords | string | lengths 2–582, ⌀ |
| oa | string | lengths 86–198, ⌀ |
| empirical_novelty_avg | float64 | 0–4, ⌀ |
| poster | string | lengths 57–95, ⌀ |
| openreview | string | lengths 41–45, ⌀ |
| conference | string | 11 classes |
| corr_rating_confidence | float64 | -1–1, ⌀ |
| corr_rating_correctness | float64 | -1–1, ⌀ |
| project | string | lengths 1–162, ⌀ |
| track | string | 3 classes |
| rating_avg | float64 | 0–10, ⌀ |
| rating | string | lengths 1–17, ⌀ |
| correctness | string | 809 classes |
| slides | string | lengths 32–41, ⌀ |
| title | string | lengths 2–192, ⌀ |
| github | string | lengths 3–165, ⌀ |
| authors | string | lengths 7–161, ⌀ |
| correctness_avg | float64 | 0–5, ⌀ |
| confidence_avg | float64 | 0–5, ⌀ |
| status | string | 22 classes |
| confidence | string | lengths 1–17, ⌀ |
| empirical_novelty | string | 763 classes |

| pdf | aff | year | technical_novelty_avg | video | doi | presentation_avg | proceeding | presentation | sess | technical_novelty | arxiv | author | site | keywords | oa | empirical_novelty_avg | poster | openreview | conference | corr_rating_confidence | corr_rating_correctness | project | track | rating_avg | rating | correctness | slides | title | github | authors | correctness_avg | confidence_avg | status | confidence | empirical_novelty |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| null | Institute for Interdisciplinary Information Sciences, Tsinghua University | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2601; None | null | 0 | null | null | null | null | null | Linfeng Zhang, Kaisheng Ma | https://iclr.cc/virtual/2021/poster/2601 | Knowledge Distillation;Object Detection;Teacher-Student Learning;Non-Local Modules;Attention Modules | null | 0 | null | null | iclr | 1 | 0 | null | main | 6.333333 | 6;6;7 | null | https://iclr.cc/virtual/2021/poster/2601 | Improve Object Detection with Feature-based Knowledge Distillation: Towards Accurate and Efficient Detectors | https://github.com/ArchipLab-LinfengZhang/Object-Detection-Knowledge-Distillation-ICLR2021 | null | 0 | 4.333333 | Poster | 4;4;5 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | max affine spline;generative networks;manifold smoothing;dropout;dropconnect;inverse problems;GAN;VAE;multimodal density estimation;elbow method | null | 0 | null | null | iclr | -0.760886 | 0 | null | main | 4.5 | 2;4;4;8 | null | null | Max-Affine Spline Insights Into Deep Generative Networks | null | null | 0 | 2.75 | Withdraw | 4;3;2;2 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | physical simulations;spatio-temporal dynamics;generative adversarial networks;fluids;elasto-plasticity | null | 0 | null | null | iclr | 0.866025 | 0 | null | main | 4 | 3;4;5 | null | null | Frequency-aware Interface Dynamics with Generative Adversarial Networks | null | null | 0 | 3.333333 | Reject | 3;3;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Generalized Mirror Descent;Linear Convergence;Implicit Regularization | null | 0 | null | null | iclr | 0 | 0 | null | main | 4.25 | 3;4;5;5 | null | null | Linear Convergence and Implicit Regularization of Generalized Mirror Descent with Time-Dependent Mirrors | null | null | 0 | 4 | Reject | 4;4;4;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Offline reinforcement learning | null | 0 | null | null | iclr | -0.422577 | 0 | null | main | 4.6 | 2;4;5;6;6 | null | null | Robust Offline Reinforcement Learning from Low-Quality Data | null | null | 0 | 4 | Withdraw | 5;3;4;4;4 | null |
| null | ; University of Oxford | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3035; None | null | 0 | null | null | null | null | null | Alessandro De Palma, Harkirat Singh Behl, Rudy R Bunel, Philip Torr, M. Pawan Kumar | https://iclr.cc/virtual/2021/poster/3035 | Neural Network Verification;Neural Network Bounding;Optimisation for Deep Learning | null | 0 | null | null | iclr | 0.582975 | 0 | null | main | 6.5 | 5;6;6;7;7;8 | null | https://iclr.cc/virtual/2021/poster/3035 | Scaling the Convex Barrier with Active Sets | null | null | 0 | 2.833333 | Poster | 3;1;2;4;2;5 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Evolution Strategy;Circuit Routing;A*;PCB | null | 0 | null | null | iclr | -0.333333 | 0 | null | main | 5.25 | 5;5;5;6 | null | null | Ranking Cost: One-Stage Circuit Routing by Directly Optimizing Global Objective Function | null | null | 0 | 3.25 | Reject | 3;4;3;3 | null |
| null | Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2694; None | null | 0 | null | null | null | null | null | Emilio Parisotto, Ruslan Salakhutdinov | https://iclr.cc/virtual/2021/poster/2694 | Deep Reinforcement Learning;Memory;Transformers;Distillation | null | 0 | null | null | iclr | 0 | 0 | null | main | 6.75 | 5;7;7;8 | null | https://iclr.cc/virtual/2021/poster/2694 | Efficient Transformers in Reinforcement Learning using Actor-Learner Distillation | null | null | 0 | 4 | Poster | 4;4;4;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.478091 | 0 | null | main | 5 | 2;5;6;7 | null | null | Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable? | null | null | 0 | 3.5 | Reject | 5;2;3;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Noisy labels;Deep metric learning;Bayesian inference;Variational inference | null | 0 | null | null | iclr | -0.814345 | 0 | null | main | 4.75 | 3;4;5;7 | null | null | Bayesian Metric Learning for Robust Training of Deep Models under Noisy Labels | null | null | 0 | 3.75 | Reject | 4;5;4;2 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | 0 | 0 | null | main | 5 | 3;4;8 | null | null | Efficiently Troubleshooting Image Segmentation Models with Human-In-The-Loop | null | null | 0 | 4 | Reject | 4;4;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | unsupervised;autoencoders;disentanglement;generative models;representation learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 5.5 | 5;5;6;6 | null | null | Unsupervised Learning of Global Factors in Deep Generative Models | null | null | 0 | 3.5 | Reject | 4;3;3;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Tensor Network;Language Representation;Natural Language Processing;Quantum Machine Learning;Entanglement Entropy | null | 0 | null | null | iclr | 0.258199 | 0 | null | main | 5.5 | 4;5;6;7 | null | null | TextTN: Probabilistic Encoding of Language on Tensor Network | null | null | 0 | 3.25 | Reject | 4;2;2;5 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | continual learning;reinforcement learning;recurrent neural network;deep neural network | null | 0 | null | null | iclr | -0.870388 | 0 | null | main | 5.5 | 4;6;6;6 | null | null | Efficient Architecture Search for Continual Learning | null | null | 0 | 3.75 | Reject | 5;3;4;3 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Compressed Convolutional Neural Network;Tensor Decomposition;Sample Complexity Analysis | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 5.333333 | 5;5;6 | null | null | Rethinking Compressed Convolution Neural Network from a Statistical Perspective | null | null | 0 | 3.333333 | Reject | 3;4;3 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | programming languages;representation learning;contrastive learning;unsupervised learning;self-supervised learning;transfer learning;nlp;pretraining;type inference;summarization | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 5.333333 | 4;6;6 | null | null | Contrastive Code Representation Learning | null | null | 0 | 3.666667 | Reject | 4;4;3 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Safe Reinforcement Learning | null | 0 | null | null | iclr | 0.157243 | 0 | null | main | 5.4 | 4;5;5;6;7 | null | null | Learning Safe Policies with Cost-sensitive Advantage Estimation | null | null | 0 | 3.2 | Reject | 4;2;3;3;4 | null |
| null | Adobe Research; UC Berkeley | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2874; None | null | 0 | null | null | null | null | null | Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, trevor darrell | https://iclr.cc/virtual/2021/poster/2874 | deep learning;unsupervised learning;domain adaptation;self-supervision;robustness | null | 0 | null | null | iclr | -0.866025 | 0 | null | main | 7.333333 | 7;7;8 | null | https://iclr.cc/virtual/2021/poster/2874 | Tent: Fully Test-Time Adaptation by Entropy Minimization | https://github.com/DequanWang/tent | null | 0 | 4 | Spotlight | 5;4;3 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Spiking neural networks;threshold optimization;leak optimization;input encoding;deep convolutional networks | null | 0 | null | null | iclr | 0 | 0 | null | main | 4.666667 | 3;5;6 | null | null | DIET-SNN: A Low-Latency Spiking Neural Network with Direct Input Encoding & Leakage and Threshold Optimization | null | null | 0 | 4 | Reject | 4;4;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Bayesian online learning;few-shot learning;meta-learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 6 | 5;6;6;7 | null | null | Bayesian Online Meta-Learning | null | null | 0 | 3.75 | Reject | 4;4;3;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | constrained clustering;semi-supervised representation learning;generative model;deep learning | null | 0 | null | null | iclr | 1 | 0 | null | main | 4.666667 | 4;5;5 | null | null | A Probabilistic Approach to Constrained Deep Clustering | null | null | 0 | 4.666667 | Reject | 4;5;5 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Computer Vision;Robustness;Common Corruptions;Benchmark | null | 0 | null | null | iclr | -0.426401 | 0 | null | main | 5 | 4;5;5;6 | null | null | Increasing the Coverage and Balance of Robustness Benchmarks by Using Non-Overlapping Corruptions | null | null | 0 | 4.25 | Reject | 5;3;5;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | reinforcement learning;language grounding | null | 0 | null | null | iclr | 0.790569 | 0 | https://www.dropbox.com/s/fnprjrfekbnxxru/code_data.zip?raw=1 | main | 6 | 5;6;6;6;7 | null | null | Grounding Language to Entities for Generalization in Reinforcement Learning | null | null | 0 | 2.6 | Reject | 1;3;3;3;3 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Convolutional neural network;Multiserial convolution filter bank;Analytic expressions | null | 0 | null | null | iclr | 0.102598 | 0 | null | main | 3.75 | 2;4;4;5 | null | null | Unified analytic forms for Convolutional Neural Networks and Wavelet Filter Banks | null | null | 0 | 3.5 | Withdraw | 4;3;2;5 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | graph neural networks;combinatorial optimization;differentiable search;model explanation | null | 0 | null | null | iclr | 0 | 0 | null | main | 5.666667 | 4;6;7 | null | null | A Framework For Differentiable Discovery Of Graph Algorithms | null | null | 0 | 3 | Reject | 3;3;3 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | 0 | 0 | null | main | 0 | null | null | null | Communication Efficient Primal Dual Algorithm for Nonconvex Nonsmooth Distributed Optimization | null | null | 0 | 0 | Withdraw | null | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | deep learning;parallelized training;local learning;compute-efficient learning;hebbian learning;biologically inspired learning | null | 0 | null | null | iclr | -0.592157 | 0 | null | main | 5.5 | 3;4;6;9 | null | null | Parallel Training of Deep Networks with Local Updates | null | null | 0 | 3.75 | Reject | 5;3;4;3 | null |
| null | Bosch Center for Artificial Intelligence, Renningen, Germany; Karlsruhe Institute of Technology, Karlsruhe, Germany; University of Tübingen, Tübingen, Germany; Bosch Center for Artificial Intelligence, Renningen, Germany; Karlsruhe Institute of Technology, Karlsruhe, Germany | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3148; None | null | 0 | null | null | null | null | null | Michael Volpp, Fabian Flürenbrock, Lukas Grossberger, Christian Daniel, Gerhard Neumann | https://iclr.cc/virtual/2021/poster/3148 | Aggregation Methods;Neural Processes;Latent Variable Models;Meta Learning;Multi-task Learning;Deep Sets | null | 0 | null | null | iclr | -0.09759 | 0 | null | main | 6.25 | 6;6;6;7 | null | https://iclr.cc/virtual/2021/poster/3148 | Bayesian Context Aggregation for Neural Processes | null | null | 0 | 3.25 | Poster | 5;4;1;3 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | 0.816497 | 0 | null | main | 5 | 4;5;5;6 | null | null | Visualizing High-Dimensional Trajectories on the Loss-Landscape of ANNs | null | null | 0 | 4.25 | Reject | 4;4;4;5 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Multi-source Domain Adaptation;Transfer learning;Adversarial learning;Information theory | null | 0 | null | null | iclr | -0.866025 | 0 | null | main | 5 | 4;4;5;7 | null | null | A Simple Unified Information Regularization Framework for Multi-Source Domain Adaptation | null | null | 0 | 4 | Reject | 4;5;4;3 | null |
| null | Facebook AI Research; University of Washington; University of Washington, Allen Institute for AI; University of Washington, Facebook AI Research | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2739; None | null | 0 | null | null | null | null | null | Sachin Mehta, Marjan Ghazvininejad, Srini Iyer, Luke Zettlemoyer, Hannaneh Hajishirzi | https://iclr.cc/virtual/2021/poster/2739 | Transformers;Sequence Modeling;Machine Translation;Language Modeling;Representation learning;Efficient Networks | null | 0 | null | null | iclr | 0 | 0 | null | main | 6.25 | 6;6;6;7 | null | https://iclr.cc/virtual/2021/poster/2739 | DeLighT: Deep and Light-weight Transformer | null | null | 0 | 4 | Poster | 4;4;4;4 | null |
| null | Graduate School of Knowledge Service Engineering, KAIST; Graduate School of Artificial Intelligence, KAIST | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3347; None | null | 0 | null | null | null | null | null | Jaehoon Oh, Hyungjun Yoo, ChangHwan Kim, Se-Young Yun | https://iclr.cc/virtual/2021/poster/3347 | null | null | 0 | null | null | iclr | 0 | 0 | null | main | 7 | 7;7;7 | null | https://iclr.cc/virtual/2021/poster/3347 | BOIL: Towards Representation Change for Few-shot Learning | null | null | 0 | 4.333333 | Poster | 5;4;4 | null |
| null | University of Cambridge, UK; Bilkent University, Ankara, Turkey; University of Cambridge, UK; Cambridge Centre for AI in Medicine, UK; The Alan Turing Institute, UK; UCLA, USA | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2780; None | null | 0 | null | null | null | null | null | Alihan Hüyük, Daniel Jarrett, Cem Tekin, Mihaela van der Schaar | https://iclr.cc/virtual/2021/poster/2780 | interpretable policy learning;understanding decision-making | null | 0 | null | null | iclr | 0 | 0 | null | main | 6.666667 | 6;7;7 | null | https://iclr.cc/virtual/2021/poster/2780 | Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning | null | null | 0 | 3 | Poster | 3;3;3 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.870388 | 0 | null | main | 4.75 | 4;4;5;6 | null | null | Improved Techniques for Model Inversion Attacks | null | null | 0 | 3.75 | Withdraw | 4;4;4;3 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Graph Convolutional Networks;Graph Neural Network;Deep Learning;Structured Data;Machine Learning on Graphs | null | 0 | null | null | iclr | 0.333333 | 0 | null | main | 4.75 | 4;5;5;5 | null | null | Polynomial Graph Convolutional Networks | null | null | 0 | 4.25 | Reject | 4;5;4;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -1 | 0 | null | main | 4.666667 | 4;4;6 | null | null | Variance Reduction in Hierarchical Variational Autoencoders | null | null | 0 | 3.666667 | Reject | 4;4;3 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | graph neural networks;graph coloring;combinational problem | null | 0 | null | null | iclr | -0.953463 | 0 | null | main | 4 | 2;3;5;6 | null | null | Rethinking Graph Neural Networks for Graph Coloring | null | null | 0 | 3.75 | Withdraw | 5;4;3;3 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Preference-based Reinforcement Learning;Treatment Recommendation;healthcare | null | 0 | null | null | iclr | 0 | 0 | null | main | 4 | 4;4;4 | null | null | An Examination of Preference-based Reinforcement Learning for Treatment Recommendation | null | null | 0 | 3.666667 | Reject | 4;3;4 | null |
| null | University of California, San Diego, [email protected]; Ohio State University, [email protected]; University of California, San Diego, [email protected] | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2646; None | null | 0 | null | null | null | null | null | Chen Cai, Dingkang Wang, Yusu Wang | https://iclr.cc/virtual/2021/poster/2646 | graph coarsening;graph neural network;Doubly-weighted Laplace operator | null | 0 | null | null | iclr | 0 | 0 | null | main | 6.5 | 6;6;7;7 | null | https://iclr.cc/virtual/2021/poster/2646 | Graph Coarsening with Neural Networks | null | null | 0 | 3.5 | Poster | 4;3;3;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null | iclr | -0.5 | 0 | null | main | 5.666667 | 5;6;6 | null | null | MQTransformer: Multi-Horizon Forecasts with Context Dependent and Feedback-Aware Attention | null | null | 0 | 2.666667 | Reject | 3;3;2 | null |
| null | MIT; Washington U. St. Louis; Google Research | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3253; None | null | 0 | null | null | null | null | null | Atish Agarwala, Abhimanyu Das, Brendan Juba, Rina Panigrahy, Vatsal Sharan, Xin Wang, Qiuyi Zhang | https://iclr.cc/virtual/2021/poster/3253 | deep learning theory;multi-task learning | null | 0 | null | null | iclr | -0.478091 | 0 | null | main | 5.25 | 3;5;6;7 | null | https://iclr.cc/virtual/2021/poster/3253 | One Network Fits All? Modular versus Monolithic Task Formulations in Neural Networks | null | null | 0 | 3 | Poster | 4;2;3;3 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | graph neural networks;GNN;long-range dependencies;deep GNN;relational GNN | null | 0 | null | null | iclr | 0.800641 | 0 | null | main | 4.5 | 2;4;5;7 | null | null | Gated Relational Graph Attention Networks | null | null | 0 | 4.25 | Reject | 4;4;4;5 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Self-supervised Exploration;Multimodal Machine Learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 5 | 4;4;5;7 | null | null | SEMI: Self-supervised Exploration via Multisensory Incongruity | null | null | 0 | 3.25 | Withdraw | 4;2;4;3 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Dynamical Systems;Representation Learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 5.5 | 4;6;6;6 | null | null | Accurately Solving Rod Dynamics with Graph Learning | null | null | 0 | 4 | Reject | 4;4;4;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | multicore;performance;machine learning;word embeddings;word2vec;fasttext | null | 0 | null | null | iclr | -0.662266 | 0 | null | main | 5.25 | 4;5;5;7 | null | null | Faster Training of Word Embeddings | null | null | 0 | 3.25 | Reject | 4;3;3;3 | null |
| null | Princeton University; Purdue University | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2676; None | null | 0 | null | null | null | null | null | Vikash Sehwag, Mung Chiang, Prateek Mittal | https://iclr.cc/virtual/2021/poster/2676 | Outlier detection;Out-of-distribution detection in deep learning;Anomaly detection with deep neural networks;Self-supervised learning | null | 0 | null | null | iclr | -0.57735 | 0 | null | main | 6.25 | 6;6;6;7 | null | https://iclr.cc/virtual/2021/poster/2676 | SSD: A Unified Framework for Self-Supervised Outlier Detection | https://github.com/inspire-group/SSD | null | 0 | 4.5 | Poster | 4;5;5;4 | null |
| null | Department of Computer Science, Psychology, Stanford University; Department of Computer Science, Stanford University; Department of Computer Science, Psychology, and Philosophy, Stanford University | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3245; None | null | 0 | null | null | null | null | null | Mike Wu, Milan Mosse, Chengxu Zhuang, Daniel Yamins, Noah Goodman | https://iclr.cc/virtual/2021/poster/3245 | contrastive learning;hard negative mining;mutual information;lower bound;detection;segmentation;MoCo | null | 0 | null | null | iclr | 0 | 0 | null | main | 5.75 | 5;5;6;7 | null | https://iclr.cc/virtual/2021/poster/3245 | Conditional Negative Sampling for Contrastive Learning of Visual Representations | null | null | 0 | 4 | Poster | 4;4;4;4 | null |
| null | Monash University; Sun Yat-sen University, Dark Matter AI Inc. | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2811; None | null | 0 | null | null | null | null | null | Siyi Hu, Fengda Zhu, Xiaojun Chang, Xiaodan Liang | https://iclr.cc/virtual/2021/poster/2811 | Multi-agent Reinforcement Learning;Transfer Learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 7.333333 | 6;7;9 | null | https://iclr.cc/virtual/2021/poster/2811 | UPDeT: Universal Multi-agent RL via Policy Decoupling with Transformers | https://github.com/hhhusiyi-monash/UPDeT | null | 0 | 4 | Spotlight | 4;4;4 | null |
| null | The Ohio State University; Carnegie Mellon University | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3205; None | null | 0 | null | null | null | null | null | Ziyu Yao, Frank F Xu, Pengcheng Yin, Huan Sun, Graham Neubig | https://iclr.cc/virtual/2021/poster/3205 | Tree-structured Data;Edit;Incremental Tree Transformations;Representation Learning;Imitation Learning;Source Code | null | 0 | null | null | iclr | 0 | 0 | null | main | 6.75 | 5;7;7;8 | null | https://iclr.cc/virtual/2021/poster/3205 | Learning Structural Edits via Incremental Tree Transformations | null | null | 0 | 4 | Poster | 4;4;4;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | online continual learning;catastrophic forgetting;benchmark;language modelling | null | 0 | null | null | iclr | 0 | 0 | null | main | 4.25 | 3;4;4;6 | null | null | Evaluating Online Continual Learning with CALM | null | null | 0 | 4 | Reject | 4;4;4;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Deep Learning Theory;Weight Normalization;Inductive Bias;Gradient Descent | null | 0 | null | null | iclr | -0.522233 | 0 | null | main | 5.75 | 4;5;7;7 | null | null | Inductive Bias of Gradient Descent for Exponentially Weight Normalized Smooth Homogeneous Neural Nets | null | null | 0 | 3.75 | Reject | 5;3;3;4 | null |
| null | Department of Statistics and Data Science, Yale University; Simons Institute for the Theory of Computing, University of California, Berkeley | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2680; None | null | 0 | null | null | null | null | null | Lin Chen, Sheng Xu | https://iclr.cc/virtual/2021/poster/2680 | Neural tangent kernel;Reproducing kernel Hilbert space;Laplace kernel;Singularity analysis | null | 0 | null | null | iclr | 0.688247 | 0 | null | main | 6.75 | 5;7;7;8 | null | https://iclr.cc/virtual/2021/poster/2680 | Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS | null | null | 0 | 3.5 | Poster | 3;4;3;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | deep learning;data imbalance;incremental learning;catastrophic forgetting | null | 0 | null | null | iclr | 0 | 0 | null | main | 0 | null | null | null | Learning to Balance with Incremental Learning | null | null | 0 | 0 | Withdraw | null | null |
| null | Sorbonne Université, CNRS, LIP6, F-75005 Paris, France & Criteo AI Lab, Paris, France; Sorbonne Université, CNRS, LIP6, F-75005 Paris, France | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3233; None | null | 0 | null | null | null | null | null | Jérémie DONA, Jean-Yves Franceschi, sylvain lamprier, patrick gallinari | https://iclr.cc/virtual/2021/poster/3233 | disentanglement;spatiotemporal prediction;representation learning;dynamical systems;separation of variables | null | 0 | null | null | iclr | 0 | 0 | null | main | 6.333333 | 5;7;7 | null | https://iclr.cc/virtual/2021/poster/3233 | PDE-Driven Spatiotemporal Disentanglement | null | null | 0 | 4 | Poster | 4;3;5 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Deep Kernel;Bayesian Learning | null | 0 | null | null | iclr | -0.333333 | 0 | null | main | 6.25 | 5;5;7;8 | null | null | Physics Informed Deep Kernel Learning | null | null | 0 | 3.75 | Reject | 4;4;3;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Adversarial Examples;Robustness;Safety;Fairness | null | 0 | null | null | iclr | 0 | 0 | null | main | 5.25 | 5;5;5;6 | null | null | To be Robust or to be Fair: Towards Fairness in Adversarial Training | null | null | 0 | 4 | Reject | 4;4;4;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | continual learning;reinforcement learning | null | 0 | null | null | iclr | 0 | 0 | null | main | 0 | null | null | null | Continual Reinforcement Learning using Feature Map Attention Loss | null | null | 0 | 0 | Withdraw | null | null |
| null | University of Electronic Science and Technology of China, Chengdu, China; University College London, London, United Kingdom; Beijing National Research Center for Information Science and Technology, Department of Electronic Engineering, Tsinghua University, Beijing 100084, China | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3372; None | null | 0 | null | null | null | null | null | Siyi Liu, Chen Gao, Yihong Chen, Depeng Jin, Yong Li | https://iclr.cc/virtual/2021/poster/3372 | Recommender Systems;Deep Learning;Embedding Size | null | 0 | null | null | iclr | -0.333333 | 0 | null | main | 6.75 | 6;7;7;7 | null | https://iclr.cc/virtual/2021/poster/3372 | Learnable Embedding sizes for Recommender Systems | https://github.com/ssui-liu/learnable-embed-sizes-for-RecSys | null | 0 | 3.75 | Poster | 4;4;4;3 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | point cloud;convolution;irregular sampling;starcraft;nowcasting | null | 0 | null | null | iclr | 0.57735 | 0 | null | main | 4.75 | 4;5;5;5 | null | null | Deep Convolution for Irregularly Sampled Temporal Point Clouds | null | null | 0 | 3.5 | Reject | 3;4;3;4 | null |
| null | Paper under double-blind review | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Model-Based Reinforcement Learning;Deep Exploration;Continuous Visual Control;UCB;Latent Space;Ensembling | null | 0 | null | null | iclr | 0.522233 | 0 | null | main | 5.25 | 4;5;6;6 | null | null | Learning to Plan Optimistically: Uncertainty-Guided Deep Exploration via Latent Model Ensembles | null | null | 0 | 4.25 | Reject | 4;4;5;4 | null |
| null | College of Computer Science, Sichuan University, Chengdu 610065, China; College of Computer Science, Zhejiang University of Technology, Hangzhou 310023, China; College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Grid cell;path integration;spatial cognition;recurrent neural network | null | 0 | null | null | iclr | 0 | 0 | null | main | 0 | null | null | null | Grid cell modeling with mapping representation of self-motion for path integration | null | null | 0 | 0 | Desk Reject | null | null |
| null | Princeton University | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2824; None | null | 0 | null | null | null | null | null | Nikunj Umesh Saunshi, Sadhika Malladi, Sanjeev Arora | https://iclr.cc/virtual/2021/poster/2824 | language models;theory;representation learning;self-supervised learning;unsupervised learning;transfer learning;natural language processing | null | 0 | null | null | iclr | 0.785714 | 0 | null | main | 6.8 | 6;6;7;7;8 | null | https://iclr.cc/virtual/2021/poster/2824 | A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks | null | null | 0 | 3.2 | Poster | 2;3;4;3;4 | null |
| null | Department of Statistics, University of Oxford, United Kingdom | 2021 | 0 | https://iclr.cc/virtual/2021/poster/2902; None | null | 0 | null | null | null | null | null | Soufiane Hayou, Jean-Francois Ton, Arnaud Doucet, Yee Whye Teh | https://iclr.cc/virtual/2021/poster/2902 | Pruning;Initialization;Compression | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 6.333333 | 6;6;7 | null | https://iclr.cc/virtual/2021/poster/2902 | Robust Pruning at Initialization | null | null | 0 | 3.666667 | Poster | 3;4;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Q-learning;episodic MDP;full-feedback;one-sided-feedback;inventory control;inventory | null | 0 | null | null | iclr | 0.426401 | 0 | null | main | 5 | 4;5;5;6 | null | null | Provably More Efficient Q-Learning in the One-Sided-Feedback/Full-Feedback Settings | null | null | 0 | 3.25 | Reject | 2;4;4;3 | null |
| null | KAUST | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3119; None | null | 0 | null | null | null | null | null | Samuel Horváth, Peter Richtarik | https://iclr.cc/virtual/2021/poster/3119 | distributed optimization;communication efficiency | null | 0 | null | null | iclr | 0.845154 | 0 | null | main | 6.75 | 5;6;7;9 | null | https://iclr.cc/virtual/2021/poster/3119 | A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning | null | null | 0 | 3.5 | Poster | 3;3;4;4 | null |
| null | CUNY Graduate Center, ITS, Facebook AI Research; MIT CSAIL; Facebook AI Research | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3292; None | null | 0 | null | null | null | null | null | Jonathan Frankle, David J Schwab, Ari Morcos | https://iclr.cc/virtual/2021/poster/3292 | affine parameters;random features;batchnorm | null | 0 | null | null | iclr | 0 | 0 | null | main | 6.5 | 6;6;6;8 | null | https://iclr.cc/virtual/2021/poster/3292 | Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs | null | null | 0 | 4 | Poster | 3;5;4;4 | null |
| null | Stanford University; Google Brain; UCLA | 2021 | 0 | https://iclr.cc/virtual/2021/poster/3238; None | null | 0 | null | null | null | null | null | Ruiqi Gao, Yang Song, Ben Poole, Yingnian Wu, Durk Kingma | https://iclr.cc/virtual/2021/poster/3238 | energy-based model;EBM;recovery likelihood;generative model;diffusion process;MCMC;Langevin dynamics;HMC | null | 0 | null | null | iclr | 0.5 | 0 | null | main | 6.666667 | 6;7;7 | null | https://iclr.cc/virtual/2021/poster/3238 | Learning Energy-Based Models by Diffusion Recovery Likelihood | https://github.com/ruiqigao/recovery_likelihood | null | 0 | 3.333333 | Poster | 3;3;4 | null |
| null | null | 2021 | 0 | null | null | 0 | null | null | null | null | null | null | null | Task Complexity;Information Pursuit;Deep Generative Models;Information Theory;Variational Autoencoders;Normalizing Flows | null | 0 | null | null | iclr | 0.866025 | 0 | null | main | 5.333333 | 5;5;6 | null | null | Quantifying Task Complexity Through Generalized Information Measures | null | null | 0 | 3 | Reject | 3;1;5 | null |
null |
University of Montreal, Mila, Montreal, Canada; University of Montreal, Mila, CIFAR, Montreal, Canada; MIT-IBM Watson AI Lab, Cambridge, United States; MIT BCS, CBMM, CSAIL, Cambridge, United States; Peking University, Beijing, China
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2974; None
| null | 0 | null | null | null | null | null |
Yuchen Lu, Yikang Shen, Siyuan Zhou, Aaron Courville, Joshua B Tenenbaum, Chuang Gan
|
https://iclr.cc/virtual/2021/poster/2974
|
Task Segmentation;Hierarchical Imitation Learning;Network Inductive Bias
| null | 0 | null | null |
iclr
| 0.870388 | 0 |
https://ordered-memory-rl.github.io/
|
main
| 5.5 |
4;6;6;6
| null |
https://iclr.cc/virtual/2021/poster/2974
|
Learning Task Decomposition with Ordered Memory Policy Network
| null | null | 0 | 3.25 |
Poster
|
2;3;4;4
| null |
null |
University of Oxford; University of Edinburgh; University College London
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3143; None
| null | 0 | null | null | null | null | null |
Yuge Shi, Brooks Paige, Philip Torr, Siddharth N
|
https://iclr.cc/virtual/2021/poster/3143
|
Deep generative model;multi-modal learning;representation learning
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 6 |
5;6;6;7
| null |
https://iclr.cc/virtual/2021/poster/3143
|
Relating by Contrasting: A Data-efficient Framework for Multimodal Generative Models
| null | null | 0 | 4 |
Poster
|
4;4;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Stochastic Backpropagation;Variational Inference;Probabilistic Graphical Models;Deep Learning
| null | 0 | null | null |
iclr
| 0.485071 | 0 | null |
main
| 6.5 |
5;5;6;10
| null | null |
Fourier Stochastic Backpropagation
| null | null | 0 | 3.5 |
Reject
|
4;3;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
combinatorial optimization;linear programs;generalized gradient
| null | 0 | null | null |
iclr
| -0.807781 | 0 | null |
main
| 5.8 |
3;5;6;7;8
| null | null |
Differentiable Combinatorial Losses through Generalized Gradients of Linear Programs
| null | null | 0 | 3.8 |
Reject
|
5;4;3;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
generative modeling;disentangle learning;wasserstein autoencoder
| null | 0 | null | null |
iclr
| 0.471405 | 0 | null |
main
| 6 |
5;5;6;8
| null | null |
Learning disentangled representations with the Wasserstein Autoencoder
| null | null | 0 | 3.75 |
Reject
|
4;3;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Optimal transport;outliers;robustness
| null | 0 | null | null |
iclr
| 0.447214 | 0 | null |
main
| 5.5 |
4;5;6;7
| null | null |
Outlier Robust Optimal Transport
| null | null | 0 | 4.5 |
Reject
|
4;5;4;5
| null |
null |
School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47906
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3172; None
| null | 0 | null | null | null | null | null |
Rui Wang, Xiaoqian Wang, David Inouye
|
https://iclr.cc/virtual/2021/poster/3172
|
Shapley values;Feature Attribution;Interpretable Machine Learning
| null | 0 | null | null |
iclr
| 0.5 | 0 | null |
main
| 6.333333 |
6;6;7
| null |
https://iclr.cc/virtual/2021/poster/3172
|
Shapley Explanation Networks
|
https://github.com/inouye-lab/ShapleyExplanationNetworks
| null | 0 | 3.666667 |
Poster
|
3;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
adversarial machine learning;certifiable defense;patch attack
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4.75 |
4;5;5;5
| null | null |
Certified robustness against physically-realizable patch attack via randomized cropping
| null | null | 0 | 4 |
Reject
|
4;4;4;4
| null |
null |
University of California San Diego, Amazon Web Services; University of California San Diego
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3322; None
| null | 0 | null | null | null | null | null |
Weijian Xu, Yifan Xu, Huaijin Wang, Zhuowen Tu
|
https://iclr.cc/virtual/2021/poster/3322
|
few-shot learning;constellation models
| null | 0 | null | null |
iclr
| -0.333333 | 0 | null |
main
| 5.75 |
5;6;6;6
| null |
https://iclr.cc/virtual/2021/poster/3322
|
Attentional Constellation Nets for Few-Shot Learning
| null | null | 0 | 4.75 |
Poster
|
5;4;5;5
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Cellular Automata;Manifold;Embedding
| null | 0 | null | null |
iclr
| -0.628539 | 0 | null |
main
| 5 |
4;4;5;7
| null | null |
Neural Cellular Automata Manifold
| null | null | 0 | 2.75 |
Withdraw
|
2;4;4;1
| null |
null |
Professorship of Continuum Mechanics, Technical University of Munich
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/2719; None
| null | 0 | null | null | null | null | null |
Sebastian Kaltenbach, PS Koutsourelakis
|
https://iclr.cc/virtual/2021/poster/2719
|
inductive bias;probabilistic generative models;state-space models;model order reduction;slowness;long-term stability
| null | 0 | null | null |
iclr
| -0.218218 | 0 | null |
main
| 6.6 |
6;6;7;7;7
| null |
https://iclr.cc/virtual/2021/poster/2719
|
Physics-aware, probabilistic model order reduction with guaranteed stability
| null | null | 0 | 3.8 |
Poster
|
5;3;4;4;3
| null |
null |
Machine Learning and Robotics Lab, University of Stuttgart, Germany; Learning and Intelligent Systems Group, TU Berlin, Germany; Max Planck Institute for Intelligent Systems, Stuttgart, Germany
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3020; None
| null | 0 | null | null | null | null | null |
Ingmar Schubert, Ozgur Oguz, Marc Toussaint
|
https://iclr.cc/virtual/2021/poster/3020
|
reinforcement learning;reward shaping;plan-based reward shaping;robotics;robotic manipulation
| null | 0 | null | null |
iclr
| -0.088045 | 0 | null |
main
| 5.75 |
3;6;7;7
| null |
https://iclr.cc/virtual/2021/poster/3020
|
Plan-Based Relaxed Reward Shaping for Goal-Directed Tasks
| null | null | 0 | 3.75 |
Poster
|
4;3;4;4
| null |
null |
Department of Computer Science, University of British Columbia, Vancouver, BC, Canada
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3185; None
| null | 0 | null | null | null | null | null |
Huang Fang, Zhenan Fan, Michael Friedlander
|
https://iclr.cc/virtual/2021/poster/3185
|
Optimization;stochastic subgradient method;interpolation;convergence analysis
| null | 0 | null | null |
iclr
| 0.816497 | 0 | null |
main
| 7 |
6;7;7;8
| null |
https://iclr.cc/virtual/2021/poster/3185
|
Fast convergence of stochastic subgradient method under interpolation
| null | null | 0 | 4.25 |
Poster
|
4;4;4;5
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null |
Thomas Adler, Johannes Brandstetter, Michael Widrich, Andreas Mayr, Michael Kopp, David Kreil, Günther Klambauer, Sepp Hochreiter
| null |
cross-domain learning;few-shot learning;Hebbian learning;ensemble learning;domain shift;domain adaptation;representation fusion
| null | 0 | null | null |
iclr
| -0.801784 | 0 | null |
main
| 4.6 |
4;4;4;5;6
| null | null |
Cross-Domain Few-Shot Learning by Representation Fusion
| null | null | 0 | 3.8 |
Reject
|
5;4;4;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
multi-lingual;cross-lingual;data-augmentation;nlp
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5.5 |
5;5;6;6
| null | null |
XLA: A Robust Unsupervised Data Augmentation Framework for Cross-Lingual NLP
| null | null | 0 | 3.5 |
Reject
|
4;3;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Graph Representation learning;Graph Convolutional Network;Graph Fourier transform
| null | 0 | null | null |
iclr
| -0.693375 | 0 | null |
main
| 6 |
4;7;7
| null | null |
Global Node Attentions via Adaptive Spectral Filters
| null | null | 0 | 3.333333 |
Reject
|
5;4;1
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Meta-learning;Few-shot image classification;Transductive few-shot learning
| null | 0 | null | null |
iclr
| -0.57735 | 0 | null |
main
| 5.5 |
5;5;6;6
| null | null |
Improving Few-Shot Visual Classification with Unlabelled Examples
| null | null | 0 | 3.25 |
Reject
|
4;3;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
inductive biases;meta-learning;semi-supervised learning;adversarial learning;representation learning;transductive learning
| null | 0 | null | null |
iclr
| 0.258199 | 0 | null |
main
| 5.5 |
4;5;6;7
| null | null |
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
| null | null | 0 | 2.75 |
Reject
|
3;2;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Differential Privacy;Federated Learning;Communication Efficiency
| null | 0 | null | null |
iclr
| 0.774597 | 0 | null |
main
| 5.5 |
4;5;6;7
| null | null |
D2p-fed: Differentially Private Federated Learning with Efficient Communication
| null | null | 0 | 3.75 |
Reject
|
3;4;4;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
reinforcement learning;factored mdp;factored rl
| null | 0 | null | null |
iclr
| -0.57735 | 0 | null |
main
| 5.75 |
5;6;6;6
| null | null |
FactoredRL: Leveraging Factored Graphs for Deep Reinforcement Learning
| null | null | 0 | 3.5 |
Reject
|
4;4;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| -0.818182 | 0 | null |
main
| 5.25 |
4;5;6;6
| null | null |
Learning Private Representations with Focal Entropy
| null | null | 0 | 3.25 |
Reject
|
4;4;2;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Image classification;supervised learning;contrastive learning
| null | 0 | null | null |
iclr
| -0.57735 | 0 | null |
main
| 4.5 |
4;4;5;5
| null | null |
ImCLR: Implicit Contrastive Learning for Image Classification
| null | null | 0 | 3.75 |
Withdraw
|
4;4;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| -0.866025 | 0 | null |
main
| 3.666667 |
3;3;5
| null | null |
DACT-BERT: Increasing the efficiency and interpretability of BERT by using adaptive computation time.
| null | null | 0 | 4 |
Withdraw
|
5;4;3
| null |
null |
Paper under double-blind review
|
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
hard-label attacks;adversarial machine learning;generalization
| null | 0 | null | null |
iclr
| -0.5 | 0 | null |
main
| 4 |
3;4;5
| null | null |
Hard-label Manifolds: Unexpected advantages of query efficiency for finding on-manifold adversarial examples
| null | null | 0 | 3 |
Reject
|
4;2;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
language generation;non-autoregressive text generation;generative adversarial networks;GANs;latent variable
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 6 |
6;6;6
| null | null |
A Text GAN for Language Generation with Non-Autoregressive Generator
| null | null | 0 | 4 |
Reject
|
4;4;4
| null |
null |
University of Oxford & Improbable; University of Oxford & University of Edinburgh & The Alan Turing Institute; University of Oxford
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3121; None
| null | 0 | null | null | null | null | null |
Tom Joy, Sebastian Schmon, Philip Torr, Siddharth N, Tom Rainforth
|
https://iclr.cc/virtual/2021/poster/3121
|
variational autoencoder;representation learning;deep generative models
| null | 0 | null | null |
iclr
| 0.816497 | 0 | null |
main
| 6 |
5;6;6;7
| null |
https://iclr.cc/virtual/2021/poster/3121
|
Capturing Label Characteristics in VAEs
| null | null | 0 | 4.25 |
Poster
|
4;4;4;5
| null |
null |
KAIST; KAIST, AITRICS
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3317; None
| null | 0 | null | null | null | null | null |
Dong Bok Lee, Dongchan Min, Seanie Lee, Sung Ju Hwang
|
https://iclr.cc/virtual/2021/poster/3317
|
Unsupervised Learning;Meta-Learning;Unsupervised Meta-learning;Variational Autoencoders
| null | 0 | null | null |
iclr
| -0.816497 | 0 | null |
main
| 7.25 |
7;7;7;8
| null |
https://iclr.cc/virtual/2021/poster/3317
|
Meta-GMVAE: Mixture of Gaussian VAE for Unsupervised Meta-Learning
| null | null | 0 | 4 |
Spotlight
|
4;5;4;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null | null | null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 5.25 |
4;5;6;6
| null | null |
Should Ensemble Members Be Calibrated?
| null | null | 0 | 3 |
Reject
|
3;3;3;3
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Semi-supervised Domain Adaptation;Self-Distillation
| null | 0 | null | null |
iclr
| 0 | 0 | null |
main
| 4 |
3;4;5
| null | null |
Pair-based Self-Distillation for Semi-supervised Domain Adaptation
| null | null | 0 | 5 |
Withdraw
|
5;5;5
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
Flow neural network;contrastive induction learning;representation learning;spatio-temporal induction
| null | 0 | null | null |
iclr
| -0.090909 | 0 | null |
main
| 3.25 |
2;3;4;4
| null | null |
Flow Neural Network for Traffic Flow Modelling in IP Networks
| null | null | 0 | 3.25 |
Reject
|
4;2;3;4
| null |
null | null |
2021
| 0 | null | null | 0 | null | null | null | null | null | null | null |
image animation;occlusion;inpainting;gan;augmentation;regularization
| null | 0 | null | null |
iclr
| -0.730297 | 0 | null |
main
| 4 |
2;4;5;5
| null | null |
PriorityCut: Occlusion-aware Regularization for Image Animation
| null | null | 0 | 3.5 |
Reject
|
5;3;2;4
| null |
null |
Stanford University; Princeton University; Rice University; Columbia University
|
2021
| 0 |
https://iclr.cc/virtual/2021/poster/3277; None
| null | 0 | null | null | null | null | null |
Beidi Chen, Zichang Liu, Binghui Peng, Zhaozhuo Xu, Jonathan L Li, Tri Dao, Zhao Song, Anshumali Shrivastava, Christopher Re
|
https://iclr.cc/virtual/2021/poster/3277
|
Large-scale Deep Learning;Large-scale Machine Learning;Efficient Training;Randomized Algorithms
| null | 0 | null | null |
iclr
| 0.333333 | 0 | null |
main
| 7.25 |
7;7;7;8
| null |
https://iclr.cc/virtual/2021/poster/3277
|
MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training
| null | null | 0 | 3.75 |
Oral
|
4;4;3;4
| null |