Each record describes one submission and has the following 36 fields. Every preview row below gives these fields in this order, pipe-separated, with null marking a missing value.

| column | dtype | values |
|---|---|---|
| pdf | string | lengths 49–199, nullable |
| aff | string | lengths 1–1.36k, nullable |
| year | string | 19 classes |
| technical_novelty_avg | float64 | 0–4, nullable |
| video | string | lengths 21–47, nullable |
| doi | string | lengths 31–63, nullable |
| presentation_avg | float64 | 0–4, nullable |
| proceeding | string | lengths 43–129, nullable |
| presentation | string | 796 classes |
| sess | string | 576 classes |
| technical_novelty | string | 700 classes |
| arxiv | string | lengths 10–16, nullable |
| author | string | lengths 1–1.96k, nullable |
| site | string | lengths 37–191, nullable |
| keywords | string | lengths 2–582, nullable |
| oa | string | lengths 86–198, nullable |
| empirical_novelty_avg | float64 | 0–4, nullable |
| poster | string | lengths 57–95, nullable |
| openreview | string | lengths 41–45, nullable |
| conference | string | 11 classes |
| corr_rating_confidence | float64 | -1–1, nullable |
| corr_rating_correctness | float64 | -1–1, nullable |
| project | string | lengths 1–162, nullable |
| track | string | 3 classes |
| rating_avg | float64 | 0–10, nullable |
| rating | string | lengths 1–17, nullable |
| correctness | string | 809 classes |
| slides | string | lengths 32–41, nullable |
| title | string | lengths 2–192, nullable |
| github | string | lengths 3–165, nullable |
| authors | string | lengths 7–161, nullable |
| correctness_avg | float64 | 0–5, nullable |
| confidence_avg | float64 | 0–5, nullable |
| status | string | 22 classes |
| confidence | string | lengths 1–17, nullable |
| empirical_novelty | string | 763 classes |
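A minimal loading sketch with the `datasets` library, assuming the table is published as a Hugging Face dataset; the repo id `<user>/<dataset>` and the `train` split name below are placeholders, not taken from this page:

```python
from datasets import load_dataset

# Placeholder repo id and split name; substitute the actual Hub path of this dataset.
ds = load_dataset("<user>/<dataset>", split="train")

# The columns match the schema above; most fields may be None for a given paper.
print(ds.column_names)
row = ds[0]
print(row["title"], row["status"], row["rating"], row["rating_avg"])
```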
null | null |
2022
| 2 | null | null | 0 | null | null | null |
2;1;3
| null | null | null |
network resilience;neural combinatorial optimization;graph neural networks;reinforcement learning
| null | 1.666667 | null | null |
iclr
| -0.5 | -0.5 | null |
main
| 4 |
3;3;6
|
3;4;3
| null |
Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient
| null | null | 3.333333 | 3.666667 |
Reject
|
5;3;3
|
2;1;2
|
null |
Princeton University, Computer Science Department
|
2022
| 3.25 |
https://iclr.cc/virtual/2022/poster/6778; None
| null | 0 | null | null | null |
3;3;3;4
| null |
Yi Zhang, Arushi Gupta, Nikunj Umesh Saunshi, Sanjeev Arora
|
https://iclr.cc/virtual/2022/poster/6778
|
generalization;generative adversarial network
| null | 3.5 | null |
https://openreview.net/forum?id=eW5R4Cek6y6
|
iclr
| -0.333333 | 1 | null |
main
| 7.25 |
5;8;8;8
|
3;4;4;4
|
https://iclr.cc/virtual/2022/poster/6778
|
On Predicting Generalization using GANs
| null | null | 3.75 | 3.75 |
Spotlight
|
4;4;4;3
|
3;3;4;4
|
null | null |
2022
| 2 | null | null | 0 | null | null | null |
1;2;3;2
| null | null | null |
Deployment Constrained Reinforcement Learning;Deep Reinforcement Learning;Model-based Reinforcement Learning
| null | 2.75 | null | null |
iclr
| -0.816497 | 0.816497 | null |
main
| 4.25 |
3;3;5;6
|
2;3;3;4
| null |
MURO: Deployment Constrained Reinforcement Learning with Model-based Uncertainty Regularized Batch Optimization
| null | null | 3 | 4 |
Withdraw
|
5;4;4;3
|
2;3;3;3
|
null |
University of Amsterdam, QUVA lab; Qualcomm AI Research
|
2022
| 2.8 |
https://iclr.cc/virtual/2022/poster/6272; None
| null | 0 | null | null | null |
2;3;3;3;3
| null |
Phillip Lippe, Taco Cohen, Efstratios Gavves
|
https://iclr.cc/virtual/2022/poster/6272
|
Causal discovery;structure learning
| null | 2.8 | null |
https://openreview.net/forum?id=eYciPrLuUhG
|
iclr
| -0.166667 | 0 | null |
main
| 6.2 |
5;6;6;6;8
|
3;3;3;3;3
|
https://iclr.cc/virtual/2022/poster/6272
|
Efficient Neural Causal Discovery without Acyclicity Constraints
| null | null | 3 | 3.4 |
Poster
|
3;3;4;4;3
|
2;3;2;4;3
|
null | null |
2022
| 1.75 | null | null | 0 | null | null | null |
2;1;2;2
| null | null | null |
Transformer;BERT;self-supervision;compute efficiency;sparsity;convolution;natural language processing
| null | 1.75 | null | null |
iclr
| -0.707107 | 0 | null |
main
| 4 |
3;3;5;5
|
3;3;3;3
| null |
GroupBERT: Enhanced Transformer Architecture with Efficient Grouped Structures
| null | null | 3 | 4 |
Reject
|
5;4;4;3
|
2;1;2;2
|
null | null |
2022
| 2.333333 | null | null | 0 | null | null | null |
2;3;2
| null | null | null |
machine learning;healthcare applications;latent encoding;surgical predictions
| null | 1.333333 | null | null |
iclr
| 0 | 0.188982 | null |
main
| 4.666667 |
3;5;6
|
3;4;3
| null |
Surgical Prediction with Interpretable Latent Representation
| null | null | 3.333333 | 4 |
Reject
|
4;4;4
|
1;0;3
|
null | null |
2022
| 2 | null | null | 0 | null | null | null |
2;2;1;3
| null | null | null |
generalized Gauss-Newton;curvature;second-order methods;Hessian spectrum in deep learning;automatic differentiation
| null | 2.75 | null | null |
iclr
| 0.333333 | -0.333333 | null |
main
| 5.25 |
5;5;5;6
|
3;3;4;3
| null |
ViViT: Curvature access through the generalized Gauss-Newton's low-rank structure
| null | null | 3.25 | 3.5 |
Reject
|
4;2;4;4
|
2;3;3;3
|
null | null |
2022
| 2.333333 | null | null | 0 | null | null | null |
2;3;2
| null | null | null |
neural causal discovery;causal structure learning;active learning;experimental design
| null | 2 | null | null |
iclr
| 1 | 1 | null |
main
| 4.333333 |
3;5;5
|
2;3;3
| null |
Learning Neural Causal Models with Active Interventions
| null | null | 2.666667 | 3.333333 |
Reject
|
2;4;4
|
2;2;2
|
null | null |
2022
| 3 | null | null | 0 | null | null | null |
3;3;3;3
| null | null | null | null | null | 1 | null | null |
iclr
| 0 | 0.57735 | null |
main
| 5.5 |
5;5;6;6
|
3;2;3;3
| null |
On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons
| null | null | 2.75 | 3.5 |
Withdraw
|
3;4;3;4
|
1;3;0;0
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
3;2;2;3
| null | null | null |
hardware-software codesign;optical neural network;regularization
| null | 2 | null | null |
iclr
| 0.333333 | -0.333333 | null |
main
| 5.25 |
5;5;5;6
|
4;3;3;3
| null |
Differentiable Discrete Device-to-System Codesign for Optical Neural Networks via Gumbel-Softmax
| null | null | 3.25 | 3.5 |
Withdraw
|
2;4;4;4
|
2;2;2;2
|
null |
Samsung Electronics; Texas A&M University; Rice University; Samsung Research America
|
2022
| 3.25 |
https://iclr.cc/virtual/2022/poster/7088; None
| null | 0 | null | null | null |
3;3;4;3
| null |
Zhimeng Jiang, Kaixiong Zhou, Zirui Liu, Li Li, Rui Chen, Soo-Hyun Choi, Xia Hu
|
https://iclr.cc/virtual/2022/poster/7088
|
Instance-dependent label noise;posterior transition matrix;statiscally consistent classifier
| null | 2.75 | null |
https://openreview.net/forum?id=ecH2FKaARUp
|
iclr
| 0.662266 | 0.816497 | null |
main
| 5.75 |
5;5;5;8
|
2;3;3;4
|
https://iclr.cc/virtual/2022/poster/7088
|
An Information Fusion Approach to Learning with Instance-Dependent Label Noise
| null | null | 3 | 3.75 |
Poster
|
4;2;4;5
|
2;3;4;2
|
null | null |
2022
| 2.6 | null | null | 0 | null | null | null |
2;2;3;3;3
| null | null | null |
Federated learning;client sampling;bias;convergence rate;distributed optimization;data heterogeneity
| null | 2.2 | null | null |
iclr
| 0.534522 | -0.25 | null |
main
| 4.4 |
3;3;5;5;6
|
4;4;4;3;4
| null |
On the Impact of Client Sampling on Federated Learning Convergence
| null | null | 3.8 | 3.8 |
Reject
|
3;4;4;3;5
|
2;2;2;3;2
|
null |
Yale University; Google Research
|
2022
| 3 |
https://iclr.cc/virtual/2022/poster/6070; None
| null | 0 | null | null | null |
3;3;3;3
| null |
Juntang Zhuang, Boqing Gong, Liangzhe Yuan, Yin Cui, Hartwig Adam, Nicha C Dvornek, sekhar tatikonda, James s Duncan, Ting Liu
|
https://iclr.cc/virtual/2022/poster/6070
|
generalization;sharpness-aware minimization;surrogate gap;deep learning
| null | 3 | null |
https://openreview.net/forum?id=edONMAnhLu-
|
iclr
| 0.333333 | 0 | null |
main
| 6.5 |
6;6;6;8
|
3;3;3;3
|
https://iclr.cc/virtual/2022/poster/6070
|
Surrogate Gap Minimization Improves Sharpness-Aware Training
|
https://sites.google.com/view/gsam-iclr22/home
| null | 3 | 3.75 |
Poster
|
4;4;3;4
|
3;3;3;3
|
null | null |
2022
| 2.333333 | null | null | 0 | null | null | null |
2;2;3
| null | null | null |
Deep Clustering;Learning Prototypes;Topological Representations
| null | 1.333333 | null | null |
iclr
| 0 | 0.5 | null |
main
| 2.333333 |
1;3;3
|
2;2;3
| null |
Shaping latent representations using Self-Organizing Maps with Relevance Learning
| null | null | 2.333333 | 4 |
Withdraw
|
4;5;3
|
0;2;2
|
null |
Texas A&M University; The University of Texas at Austin
|
2022
| 2.8 |
https://iclr.cc/virtual/2022/poster/6741; None
| null | 0 | null | null | null |
2;3;3;3;3
| null |
Wenqing Zheng, Tianlong Chen, Ting-Kuei Hu, Zhangyang Wang
|
https://iclr.cc/virtual/2022/poster/6741
|
Symbolic Regression;Learning To Optimize;Interpretability
| null | 2.2 | null |
https://openreview.net/forum?id=ef0nInZHKIC
|
iclr
| -0.25 | 0.612372 | null |
main
| 5.8 |
5;6;6;6;6
|
2;3;3;2;3
|
https://iclr.cc/virtual/2022/poster/6741
|
Symbolic Learning to Optimize: Towards Interpretability and Scalability
|
https://github.com/VITA-Group/Symbolic-Learning-To-Optimize
| null | 2.6 | 3.8 |
Poster
|
4;3;4;4;4
|
1;3;2;2;3
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
2;3;2;3
| null | null | null |
Multi-dataset;semantic segmentation;contrastive learning
| null | 2.5 | null | null |
iclr
| -0.927173 | 0.760886 | null |
main
| 4.75 |
3;5;5;6
|
2;3;2;4
| null |
Multi-dataset Pretraining: A Unified Model for Semantic Segmentation
| null | null | 2.75 | 4.25 |
Withdraw
|
5;4;4;4
|
2;2;3;3
|
null |
Mila, Université de Montréal; Mila, Université de Montréal, CIFAR Fellow; Mila, CIFAR Fellow, Google Research, Brain Team
|
2022
| 2.75 |
https://iclr.cc/virtual/2022/poster/6512; None
| null | 0 | null | null | null |
3;3;2;3
| null |
Hattie Zhou, Ankit Vani, Hugo Larochelle, Aaron Courville
|
https://iclr.cc/virtual/2022/poster/6512
|
Neural Networks;Generalization;Iterative Training;Compositionality;Iterated Learning
| null | 2.75 | null |
https://openreview.net/forum?id=ei3SY1_zYsE
|
iclr
| 0.333333 | 0.816497 | null |
main
| 7 |
6;6;6;10
|
3;2;3;4
|
https://iclr.cc/virtual/2022/poster/6512
|
Fortuitous Forgetting in Connectionist Networks
| null | null | 3 | 3.75 |
Poster
|
3;4;4;4
|
3;2;2;4
|
null | null |
2022
| 2 | null | null | 0 | null | null | null |
2;3;1
| null | null | null |
neuronal learning;unsupervised learning;calcium imaging;generative adversarial networks;cycle-consistent adversarial networks;explainable AI
| null | 2.666667 | null | null |
iclr
| 1 | -0.5 | null |
main
| 5.333333 |
5;5;6
|
3;4;3
| null |
Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks
| null | null | 3.333333 | 3.666667 |
Reject
|
3;3;5
|
3;2;3
|
null | null |
2022
| 2.25 | null | null | 0 | null | null | null |
2;2;2;3
| null | null | null |
NLP;Transformer;BERT;Position Encodings
| null | 2.75 | null | null |
iclr
| -0.333333 | 0.333333 | null |
main
| 4.5 |
3;5;5;5
|
2;2;3;2
| null |
Analyzing the Implicit Position Encoding Ability of Transformer Decoder
| null | null | 2.25 | 3.75 |
Withdraw
|
4;4;4;3
|
2;3;3;3
|
null |
Alibaba Group; College of Computer Science and Technology, Zhejiang University; School of Software Technology, Zhejiang University; Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies; College of Computer Science and Technology, Zhejiang University; Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies; College of Computer Science and Technology, Zhejiang University; Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies; Hangzhou Innovation Center, Zhejiang University; School of Software Technology, Zhejiang University; Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies
|
2022
| 3.25 |
https://iclr.cc/virtual/2022/poster/6259; None
| null | 0 | null | null | null |
3;3;3;4
| null |
Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, Huajun Chen
|
https://iclr.cc/virtual/2022/poster/6259
|
prompt-tuning;pre-trained language model;few-shot learning
| null | 2.75 | null |
https://openreview.net/forum?id=ek9a0qIafW
|
iclr
| -0.57735 | 1 | null |
main
| 7 |
6;6;8;8
|
3;3;4;4
|
https://iclr.cc/virtual/2022/poster/6259
|
Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
|
https://github.com/zjunlp/DART
| null | 3.5 | 3.75 |
Poster
|
4;4;3;4
|
3;2;3;3
|
null | null |
2022
| 2.2 | null | null | 0 | null | null | null |
2;2;2;2;3
| null | null | null |
Second-Order Methods;Stochastic Optimization;Deep Neural Networks
| null | 2.6 | null | null |
iclr
| -0.968246 | 0.845154 | null |
main
| 4 |
3;3;3;5;6
|
3;2;3;4;4
| null |
SLIM-QN: A Stochastic, Light, Momentumized Quasi-Newton Optimizer for Deep Neural Networks
| null | null | 3.2 | 3.6 |
Reject
|
4;4;4;3;3
|
2;3;3;2;3
|
null | null |
2022
| 2 | null | null | 0 | null | null | null |
2;2;2;2
| null | null | null |
GAN;Transformer;generative models
| null | 2.25 | null | null |
iclr
| 0.899229 | 0.927173 | null |
main
| 4.75 |
3;5;5;6
|
2;3;3;3
| null |
STransGAN: An Empirical Study on Transformer in GANs
| null | null | 2.75 | 4.25 |
Withdraw
|
3;5;4;5
|
2;2;3;2
|
null | null |
2022
| 2 | null | null | 0 | null | null | null |
2;2;2;2
| null | null | null |
Physical Design;Mechanical Design;Generative Modeling;Hamiltonian Monte Carlo
| null | 2.25 | null | null |
iclr
| -0.57735 | 0.57735 | null |
main
| 4 |
3;3;5;5
|
3;2;3;3
| null |
Physical System Design Using Hamiltonian Monte Carlo over Learned Manifolds
| null | null | 2.75 | 3.5 |
Withdraw
|
5;3;3;3
|
2;2;3;2
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
2;2;3;3
| null | null | null | null | null | 0 | null | null |
iclr
| 0.57735 | 0.57735 | null |
main
| 5.5 |
3;3;8;8
|
4;2;4;4
| null |
Invariance in Policy Optimisation and Partial Identifiability in Reward Learning
| null | null | 3.5 | 3.25 |
Reject
|
4;1;4;4
| null |
null | null |
2022
| 1.5 | null | null | 0 | null | null | null |
1;1;2;2
| null | null | null |
Visual Reinforcement Learning;Transfer in Reinforcement Learning;Generalization in Reinforcement Learning
| null | 2 | null | null |
iclr
| -0.816497 | -0.333333 | null |
main
| 3.75 |
3;3;3;6
|
2;2;3;2
| null |
Understanding the Generalization Gap in Visual Reinforcement Learning
| null | null | 2.25 | 4 |
Reject
|
4;4;5;3
|
1;2;3;2
|
null | null |
2022
| 2 | null | null | 0 | null | null | null |
1;2;3
| null | null | null |
image-to-image translation;contrastive learning;federated learning
| null | 2 | null | null |
iclr
| -0.5 | 0.5 | null |
main
| 4 |
3;3;6
|
2;3;3
| null |
Federated Contrastive Learning for Privacy-Preserving Unpaired Image-to-Image Translation
| null | null | 2.666667 | 4.333333 |
Withdraw
|
5;4;4
|
1;2;3
|
null | null |
2022
| 1.5 | null | null | 0 | null | null | null |
1;1;2;2
| null | null | null |
Out-of-Distribution;Adaptive Learning;Contextual Features
| null | 2 | null | null |
iclr
| -0.57735 | 0.447214 | null |
main
| 2 |
1;1;3;3
|
3;1;4;2
| null |
OUT-OF-DISTRIBUTION CLASSIFICATION WITH ADAPTIVE LEARNING OF LOW-LEVEL CONTEXTUAL FEATURES
| null | null | 2.5 | 3.75 |
Withdraw
|
4;4;4;3
|
1;2;2;3
|
null | null |
2022
| 2.333333 | null | null | 0 | null | null | null |
2;3;2
| null | null | null |
optimization;communication compression;natural language processing;language model pre-training
| null | 2.666667 | null | null |
iclr
| -0.5 | 0.5 | null |
main
| 5.333333 |
5;5;6
|
4;3;4
| null |
1-bit LAMB: Communication Efficient Large-Scale Large-Batch Training with LAMB's Convergence Speed
| null | null | 3.666667 | 3.333333 |
Reject
|
4;3;3
|
2;3;3
|
null | null |
2022
| 2.666667 | null | null | 0 | null | null | null |
3;3;2
| null | null | null |
molecular modeling;sequence modeling;conditional sequence modeling;drug discovery
| null | 2.333333 | null | null |
iclr
| -0.5 | 0.866025 | null |
main
| 4.333333 |
3;5;5
|
2;3;4
| null |
C5T5: Controllable Generation of Organic Molecules with Transformers
| null | null | 3 | 3.666667 |
Reject
|
4;4;3
|
2;2;3
|
null | null |
2022
| 2.666667 | null | null | 0 | null | null | null |
2;3;3
| null | null | null |
robustness;imagenet-c;mixup
| null | 2.333333 | null | null |
iclr
| -0.866025 | -0.866025 | null |
main
| 4.333333 |
3;5;5
|
4;2;3
| null |
Robustmix: Improving Robustness by Regularizing the Frequency Bias of Deep Nets
| null | null | 3 | 4 |
Reject
|
5;3;4
|
3;2;2
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
2;2;3;3
| null | null | null |
Artistic Style Transfer;Language-based Image Editing
| null | 2 | null | null |
iclr
| 0 | 0 |
https://ai-sub.github.io/ldist/
|
main
| 4 |
3;3;5;5
|
4;2;3;3
| null |
Language-Driven Image Style Transfer
| null | null | 3 | 4.5 |
Withdraw
|
4;5;5;4
|
2;2;2;2
|
null | null |
2022
| 2.75 | null | null | 0 | null | null | null |
2;3;3;3
| null | null | null |
Second-order optimization;Deep learning;Kernel machines
| null | 1.75 | null | null |
iclr
| -0.57735 | 0.816497 | null |
main
| 3.75 |
3;3;3;6
|
1;2;2;3
| null |
Efficient Second-Order Optimization for Deep Learning with Kernel Machines
| null | null | 2 | 3 |
Withdraw
|
4;4;2;2
|
0;2;2;3
|
null |
Google Research & DeepMind; Google Research
|
2022
| 2 |
https://iclr.cc/virtual/2022/poster/5960; None
| null | 0 | null | null | null |
2;3;2;1
| null |
Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, SHARAN NARANG, Dani Yogatama, Ashish Vaswani, Donald Metzler
|
https://iclr.cc/virtual/2022/poster/5960
|
transformers;attention;deep learning
| null | 3 | null |
https://openreview.net/forum?id=f2OYVDyfIB
|
iclr
| 0.132453 | 0.688247 | null |
main
| 6.25 |
5;6;6;8
|
3;4;3;4
|
https://iclr.cc/virtual/2022/poster/5960
|
Scale Efficiently: Insights from Pretraining and Finetuning Transformers
| null | null | 3.5 | 3.75 |
Poster
|
4;4;3;4
|
3;3;3;3
|
null |
Department of Computer Science, ETH Zurich
|
2022
| 2.75 |
https://iclr.cc/virtual/2022/poster/6934; None
| null | 0 | null | null | null |
3;3;2;3
| null |
Mislav Balunovic, Dimitar I. Dimitrov, Robin Staab, Martin Vechev
|
https://iclr.cc/virtual/2022/poster/6934
|
federated learning;privacy;gradient leakage
| null | 2 | null |
https://openreview.net/forum?id=f2lrIbGx3x7
|
iclr
| 0.471405 | 0.57735 | null |
main
| 6.5 |
6;6;6;8
|
3;4;3;4
|
https://iclr.cc/virtual/2022/poster/6934
|
Bayesian Framework for Gradient Leakage
| null | null | 3.5 | 4 |
Poster
|
4;2;5;5
|
1;2;2;3
|
null | null |
2022
| 2 | null | null | 0 | null | null | null |
2;2;2;2
| null | null | null |
multimodal;capsule networks;self-supervision;computer vision;self-attention;routing
| null | 1.25 | null | null |
iclr
| 1 | 0 | null |
main
| 4.5 |
3;5;5;5
|
3;3;3;3
| null |
Routing with Self-Attention for Multimodal Capsule Networks
| null | null | 3 | 3.75 |
Withdraw
|
3;4;4;4
|
0;3;0;2
|
null | null |
2022
| 1.5 | null | null | 0 | null | null | null |
1;1;2;2
| null | null | null |
service orchestration;manifold distance detection;adversarial example;neural network.
| null | 1.25 | null | null |
iclr
| -0.522233 | 0.870388 | null |
main
| 2.5 |
1;1;3;5
|
2;2;2;3
| null |
Manifold Distance Judge, an Adversarial Samples Defense Strategy Based on Service Orchestration
| null | null | 2.25 | 4.25 |
Reject
|
5;4;4;4
|
2;1;0;2
|
null | null |
2022
| 2.75 | null | null | 0 | null | null | null |
3;3;3;2
| null | null | null | null | null | 2.25 | null | null |
iclr
| 0.333333 | 1 | null |
main
| 5.25 |
3;6;6;6
|
2;3;3;3
| null |
Transfer and Marginalize: Explaining Away Label Noise with Privileged Information
| null | null | 2.75 | 3.25 |
Reject
|
3;4;3;3
|
2;3;2;2
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
2;2;3;3
| null | null | null |
Explainable AI;Interpretable ML;Visual explanation of CNN;Class activation maps;Computer Vision
| null | 2.75 | null | null |
iclr
| 0 | 0.777778 | null |
main
| 4.25 |
3;3;5;6
|
2;2;2;4
| null |
Pixab-CAM: Attend Pixel, not Channel
| null | null | 2.5 | 4 |
Reject
|
4;4;4;4
|
2;3;3;3
|
null | null |
2022
| 2 | null | null | 0 | null | null | null |
2;2;2;2
| null | null | null |
Deep Learning;Optimisation;Step-size selection
| null | 2.5 | null | null |
iclr
| -0.870388 | 1 | null |
main
| 4.5 |
3;5;5;5
|
2;3;3;3
| null |
Faking Interpolation Until You Make It
| null | null | 2.75 | 3.75 |
Withdraw
|
5;3;3;4
|
3;2;3;2
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
2;3;2;3
| null | null | null |
representation learning;reinforcement learning
| null | 2.75 | null | null |
iclr
| -0.727607 | 0.70014 | null |
main
| 5.25 |
3;5;5;8
|
3;4;3;4
| null |
A Free Lunch from the Noise: Provable and Practical Exploration for Representation Learning
| null | null | 3.5 | 3.25 |
Reject
|
4;3;3;3
|
2;3;3;3
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
2;3;2;3
| null | null | null |
Cybersecurity;Cross Project Software Vulnerability Detection;Domain Adaptation;Max-Margin Principle.
| null | 2.25 | null | null |
iclr
| -0.70014 | 0.889297 | null |
main
| 5.25 |
3;5;5;8
|
3;3;3;4
| null |
Cross Project Software Vulnerability Detection via Domain Adaptation and Max-Margin Principle
| null | null | 3.25 | 3.5 |
Withdraw
|
4;3;4;3
|
2;2;2;3
|
null | null |
2022
| 2.25 | null | null | 0 | null | null | null |
1;2;3;3
| null | null | null | null | null | 2.5 | null | null |
iclr
| 0 | 0.57735 | null |
main
| 5.5 |
3;3;8;8
|
4;2;4;4
| null |
Detecting Worst-case Corruptions via Loss Landscape Curvature in Deep Reinforcement Learning
| null | null | 3.5 | 4 |
Reject
|
3;5;4;4
|
1;2;3;4
|
null |
Paper under double-blind review
|
2022
| 1.666667 | null | null | 0 | null | null | null |
2;1;2
| null |
Mohamed Ishmael Belghazi
| null |
Uncertainty quantification;neural networks;benchmark
| null | 2.333333 | null | null |
iclr
| 0 | -0.188982 | null |
main
| 4.666667 |
3;5;6
|
3;2;3
| null |
What classifiers know what they don't know?
|
https://github.com/anonymous-author/UIMNET
| null | 2.666667 | 4 |
Reject
|
4;4;4
|
2;2;3
|
null |
New York University; KAIST; AITRICS
|
2022
| 3 |
https://iclr.cc/virtual/2022/poster/6362; None
| null | 0 | null | null | null |
3;3;3;3
| null |
Jaehong Yoon, Divyam Madaan, Eunho Yang, Sung Ju Hwang
|
https://iclr.cc/virtual/2022/poster/6362
|
Continual Learning
| null | 2.25 | null |
https://openreview.net/forum?id=f9D-5WNG4Nv
|
iclr
| -0.927173 | 0.324443 | null |
main
| 6.25 |
5;6;6;8
|
2;3;4;3
|
https://iclr.cc/virtual/2022/poster/6362
|
Online Coreset Selection for Rehearsal-based Continual Learning
| null | null | 3 | 3.75 |
Poster
|
4;4;4;3
|
3;3;0;3
|
null | null |
2022
| 2.4 | null | null | 0 | null | null | null |
2;2;3;2;3
| null | null | null | null | null | 2.6 | null | null |
iclr
| 0.71875 | 0.801784 | null |
main
| 5.2 |
3;5;5;5;8
|
2;3;3;4;4
| null |
The Needle in the haystack: Out-distribution aware Self-training in an Open-World Setting
| null | null | 3.2 | 3.4 |
Reject
|
2;4;3;4;4
|
2;2;3;3;3
|
null |
The University of Hong Kong; AI Technology Center of Tencent Video; ARC Lab, Tencent PCG; The Chinese University of Hong Kong
|
2022
| 2.75 |
https://iclr.cc/virtual/2022/poster/5898; None
| null | 0 | null | null | null |
3;3;2;3
| null |
Wenqi Shao, Yixiao Ge, Zhaoyang Zhang, XUYUAN XU, Xiaogang Wang, Ying Shan, Ping Luo
|
https://iclr.cc/virtual/2022/poster/5898
|
classification;Normalization;transformer
| null | 3 | null |
https://openreview.net/forum?id=f9MHpAGUyMn
|
iclr
| 0 | 0.707107 | null |
main
| 5.5 |
5;5;6;6
|
3;2;4;3
|
https://iclr.cc/virtual/2022/poster/5898
|
Dynamic Token Normalization improves Vision Transformers
|
https://github.com/wqshao126/DTN
| null | 3 | 4 |
Poster
|
4;4;4;4
|
3;3;3;3
|
null |
Yonsei University, Seoul, South Korea
|
2022
| 3.25 |
https://iclr.cc/virtual/2022/poster/6473; None
| null | 0 | null | null | null |
3;3;4;3
| null |
Jaehoon Lee, Jeon Jinsung, Sheo yon Jhin, Jihyeon Hyeong, Jayoung Kim, Minju Jo, Kook Seungji, Noseong Park
|
https://iclr.cc/virtual/2022/poster/6473
| null | null | 3.5 | null |
https://openreview.net/forum?id=fCG75wd39ze
|
iclr
| 0.333333 | 1 | null |
main
| 7.5 |
6;8;8;8
|
3;4;4;4
|
https://iclr.cc/virtual/2022/poster/6473
|
LORD: Lower-Dimensional Embedding of Log-Signature in Neural Rough Differential Equations
| null | null | 3.75 | 4.25 |
Poster
|
4;5;4;4
|
4;4;4;2
|
null |
KTH Royal Institute of Technology, [email protected]; KTH Royal Institute of Technology, [email protected]; Ericsson AB, [email protected]
|
2022
| 3 |
https://iclr.cc/virtual/2022/poster/6984; None
| null | 0 | null | null | null |
3;3;3;3
| null |
Vien Mai, Jacob Lindbäck, Mikael Johansson
|
https://iclr.cc/virtual/2022/poster/6984
|
Optimal transport;Operator splitting;Douglas-Rachford;ADMM;GPUs
| null | 1.75 | null |
https://openreview.net/forum?id=fCSq8yrDkc
|
iclr
| -0.57735 | 0.57735 | null |
main
| 5.25 |
3;6;6;6
|
3;4;4;3
|
https://iclr.cc/virtual/2022/poster/6984
|
A fast and accurate splitting method for optimal transport: analysis and implementation
| null | null | 3.5 | 3.5 |
Poster
|
4;3;3;4
|
2;3;2;0
|
null | null |
2022
| 2 | null | null | 0 | null | null | null |
3;2;1;2
| null | null | null |
Adversarial training;random mask;regularization;generalization;neural networks
| null | 1.75 | null | null |
iclr
| -1 | -0.333333 |
Not provided
|
main
| 3.5 |
3;3;3;5
|
2;3;2;2
| null |
DropAttack: A Masked Weight Adversarial Training Method to Improve Generalization of Neural Networks
|
Not provided
| null | 2.25 | 4.75 |
Withdraw
|
5;5;5;4
|
2;2;1;2
|
null | null |
2022
| 2.75 | null | null | 0 | null | null | null |
2;3;3;3
| null | null | null |
Q learning;regularization;deep Q learning
| null | 1.75 | null | null |
iclr
| 0 | 0.738549 | null |
main
| 6 |
5;5;6;8
|
3;2;4;4
| null |
Beyond Target Networks: Improving Deep $Q$-learning with Functional Regularization
| null | null | 3.25 | 3.75 |
Reject
|
4;4;3;4
|
2;2;0;3
|
null |
Mila, University of Montreal; Microsoft Research, Montreal; Mila, University of Montreal, Canada CIFAR AI Chair
|
2022
| 2.666667 |
https://iclr.cc/virtual/2022/poster/6123; None
| null | 0 | null | null | null |
3;2;3
| null |
Shawn Tan, Chin-Wei Huang, Alessandro Sordoni, Aaron Courville
|
https://iclr.cc/virtual/2022/poster/6123
|
variational inference;variational bayes;dequantisation;normalizing flows
| null | 2.666667 | null |
https://openreview.net/forum?id=fExcSKdDo_
|
iclr
| 0 | 0 | null |
main
| 6 |
6;6;6
|
4;3;4
|
https://iclr.cc/virtual/2022/poster/6123
|
Learning to Dequantise with Truncated Flows
| null | null | 3.666667 | 3.666667 |
Poster
|
4;3;4
|
2;3;3
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
2;2;3;3
| null | null | null | null | null | 2 | null | null |
iclr
| 0 | 0 | null |
main
| 4 |
3;3;5;5
|
3;2;3;2
| null |
Rethinking Positional Encoding
| null | null | 2.5 | 3.5 |
Withdraw
|
3;4;3;4
|
2;2;2;2
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
2;2;3;3
| null | null | null |
deep neural networks;partial differential equations;solution operator
| null | 2.25 | null | null |
iclr
| -0.471405 | 0.471405 | null |
main
| 5 |
3;5;6;6
|
3;3;3;4
| null |
A framework of deep neural networks via the solution operator of partial differential equations
| null | null | 3.25 | 3.75 |
Reject
|
4;4;3;4
|
2;2;3;2
|
null | null |
2022
| 2.75 | null | null | 0 | null | null | null |
3;2;2;4
| null | null | null | null | null | 2.25 | null | null |
iclr
| -0.187317 | 0.547723 | null |
main
| 6 |
5;5;6;8
|
3;2;1;4
| null |
Decoupled Kernel Neural Processes: Neural Network-Parameterized Stochastic Processes using Explicit Data-driven Kernel
| null | null | 2.5 | 3.75 |
Reject
|
5;4;2;4
|
3;2;3;1
|
null | null |
2022
| 3 | null | null | 0 | null | null | null |
2;3;3;3;3;4
| null | null | null |
Gradient Descent;Adaptive Step Size;Adaptive Learning Rate
| null | 3 | null | null |
iclr
| -0.130466 | 0.360997 | null |
main
| 6.666667 |
3;5;6;8;8;10
|
3;4;4;3;4;4
| null |
Trainable Learning Rate
| null | null | 3.666667 | 4.333333 |
Reject
|
5;5;3;4;4;5
|
2;3;3;3;3;4
|
null |
DeepMind
|
2022
| 3.25 |
https://iclr.cc/virtual/2022/poster/6269; None
| null | 0 | null | null | null |
3;3;3;4
| null |
Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, Fengning Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Henaff, Matthew Botvinick, Andrew Zisserman, Oriol Vinyals, Joao Carreira
|
https://iclr.cc/virtual/2022/poster/6269
|
Perceiver;BERT;natural language processing;optical flow;computer vision;multimodal;GLUE;ImageNet;StarCraft
| null | 3.25 | null |
https://openreview.net/forum?id=fILj7WpI-g
|
iclr
| 0 | 0 | null |
main
| 8 |
8;8;8;8
|
4;3;4;4
|
https://iclr.cc/virtual/2022/poster/6269
|
Perceiver IO: A General Architecture for Structured Inputs & Outputs
| null | null | 3.75 | 3.25 |
Spotlight
|
4;3;3;3
|
3;4;2;4
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
2;3;2;3
| null | null | null |
Positive and Unlabeled Learning;Federated Learning
| null | 2.75 | null | null |
iclr
| -0.57735 | 0.522233 | null |
main
| 4.5 |
3;5;5;5
|
2;3;4;2
| null |
Positive and Unlabeled Federated Learning
| null | null | 2.75 | 3.5 |
Withdraw
|
4;4;3;3
|
2;3;3;3
|
null | null |
2022
| 2.75 | null | null | 0 | null | null | null |
3;2;4;2
| null | null | null |
Graph Neural Networks;Graph Filters
| null | 2.5 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
5;5;5;5
|
3;4;3;4
| null |
Effective Polynomial Filter Adaptation for Graph Neural Networks
|
https://tinyurl.com/PPGNN
| null | 3.5 | 4.25 |
Reject
|
5;4;4;4
|
2;2;3;3
|
null | null |
2022
| 3 | null | null | 0 | null | null | null |
2;4;3
| null |
Anonymous Authors
| null |
metric learning;PDEs;numerical simulation;physical modeling
| null | 3 | null | null |
iclr
| 1 | 0.5 | null |
main
| 6.333333 |
3;8;8
|
3;4;3
| null |
Learning Similarity Metrics for Volumetric Simulations with Multiscale CNNs
| null | null | 3.333333 | 3.666667 |
Reject
|
3;4;4
|
2;4;3
|
null | null |
2022
| 2.4 | null | null | 0 | null | null | null |
1;2;3;3;3
| null | null | null |
computational psychiatric;variational auto-encoder;fMRI analysis
| null | 2.2 | null | null |
iclr
| -0.542139 | 0.907062 | null |
main
| 5.2 |
1;5;6;6;8
|
2;3;3;3;3
| null |
Discovering the neural correlate informed nosological relation among multiple neuropsychiatric disorders through dual utilisation of diagnostic information
| null | null | 2.8 | 3.4 |
Reject
|
5;3;2;3;4
|
1;2;3;3;2
|
null | null |
2022
| 2.333333 | null | null | 0 | null | null | null |
2;2;3
| null | null | null |
Time Series;Mixture-of-Experts;Data Aggregation;Uncertainty Estimation
| null | 2.666667 | null | null |
iclr
| -1 | 0 | null |
main
| 3.666667 |
3;3;5
|
3;3;3
| null |
MECATS: Mixture-of-Experts for Probabilistic Forecasts of Aggregated Time Series
| null | null | 3 | 3.333333 |
Withdraw
|
4;4;2
|
3;2;3
|
null |
Meta Research, Burlingame, CA, USA; Northeastern University, Boston, MA, USA; Santa Clara University, Santa Clara, CA, USA
|
2022
| 2.8 |
https://iclr.cc/virtual/2022/poster/6084; None
| null | 0 | null | null | null |
3;2;3;3;3
| null |
Yue Bai, Huan Wang, Zhiqiang Tao, Kunpeng Li, Yun Fu
|
https://iclr.cc/virtual/2022/poster/6084
|
Dual Lottery Ticket Hypothesis;Sparse Network Training
| null | 2.8 | null |
https://openreview.net/forum?id=fOsN52jn25l
|
iclr
| -0.612372 | 1 | null |
main
| 7.2 |
6;6;8;8;8
|
2;2;3;3;3
|
https://iclr.cc/virtual/2022/poster/6084
|
Dual Lottery Ticket Hypothesis
|
https://github.com/yueb17/DLTH
| null | 2.6 | 4.2 |
Poster
|
4;5;4;4;4
|
3;3;3;2;3
|
null |
Univ. Bordeaux, Bordeaux INP, CNRS, IMB, UMR 5251, F-33400 Talence, France
|
2022
| 2.75 |
https://iclr.cc/virtual/2022/poster/6192; None
| null | 0 | null | null | null |
3;2;2;4
| null |
Samuel Hurault, Arthur Leclaire, Nicolas Papadakis
|
https://iclr.cc/virtual/2022/poster/6192
|
Plug-and-Play;Inverse Problem;Image Restoration;Denoising
| null | 2.75 | null |
https://openreview.net/forum?id=fPhKeld3Okz
|
iclr
| -0.333333 | 0.57735 | null |
main
| 6.5 |
6;6;6;8
|
3;3;4;4
|
https://iclr.cc/virtual/2022/poster/6192
|
Gradient Step Denoiser for convergent Plug-and-Play
| null | null | 3.5 | 4.25 |
Poster
|
4;4;5;4
|
3;3;2;3
|
null |
School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen; Shenzhen Research Institute of Big Data
|
2022
| 3.25 |
https://iclr.cc/virtual/2022/poster/6556; None
| null | 0 | null | null | null |
4;3;3;3
| null |
Tianjian Zhang, Feng Yin, Zhi-Quan Luo
|
https://iclr.cc/virtual/2022/poster/6556
| null | null | 3 | null |
https://openreview.net/forum?id=fQTlgI2qZqE
|
iclr
| 0.333333 | 0 | null |
main
| 6.5 |
6;6;6;8
|
3;3;3;3
|
https://iclr.cc/virtual/2022/poster/6556
|
Fast Generic Interaction Detection for Model Interpretability and Compression
|
https://github.com/zhangtj1996/ParaACE
| null | 3 | 3.75 |
Poster
|
4;3;4;4
|
4;2;2;4
|
null |
Alibaba A.I. Lab; Simon Fraser University
|
2022
| 2.75 |
https://iclr.cc/virtual/2022/poster/6822; None
| null | 0 | null | null | null |
2;3;3;3
| null |
Shitao Tang, Jiahui Zhang, Siyu Zhu, Ping Tan
|
https://iclr.cc/virtual/2022/poster/6822
|
Vision Transformer;Efficient Transformer;Feature matching;Stereo;image classification;detection;3D Vision
| null | 2.25 | null |
https://openreview.net/forum?id=fR-EnKWL_Zb
|
iclr
| 0 | -0.132453 | null |
main
| 6.25 |
5;6;6;8
|
3;4;3;3
|
https://iclr.cc/virtual/2022/poster/6822
|
Quadtree Attention for Vision Transformers
|
https://github.com/Tangshitao/QuadtreeAttention
| null | 3.25 | 4 |
Poster
|
4;4;4;4
|
0;3;3;3
|
null | null |
2022
| 1.5 | null | null | 0 | null | null | null |
2;2;1;1
| null | null | null |
MRI reconstruction;MRI sampling;reinforcement learning;accelerated MRI;replication
| null | 2.25 | null | null |
iclr
| -0.301511 | -0.57735 | null |
main
| 4 |
3;3;5;5
|
3;4;3;3
| null |
On the benefits of deep RL in accelerated MRI sampling
| null | null | 3.25 | 2.75 |
Reject
|
4;2;2;3
|
2;2;3;2
|
null | null |
2022
| 2.25 | null | null | 0 | null | null | null |
1;2;3;3
| null | null | null |
single-step adversarial training;catastrophic overfitting;FGSM;efficient adversarial training;fast adversarial training
| null | 2.5 | null | null |
iclr
| 0.157895 | 0.324443 | null |
main
| 4.75 |
3;5;5;6
|
3;2;3;4
| null |
Towards fast and effective single-step adversarial training
| null | null | 3 | 3.75 |
Reject
|
4;4;2;5
|
2;3;2;3
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
3;2;3;2
| null | null | null |
Continuous classification of time series;Deep learning;Model training
| null | 2.5 | null | null |
iclr
| 0 | 0 | null |
main
| 3 |
3;3;3;3
|
2;3;3;2
| null |
ACCTS: an Adaptive Model Training Policy for Continuous Classification of Time Series
| null | null | 2.5 | 2.75 |
Reject
|
2;3;3;3
|
2;3;2;3
|
null | null |
2022
| 2.333333 | null | null | 0 | null | null | null |
2;2;3
| null | null | null |
Semantic Segmentation;Robustness;Natural Variation
| null | 2.333333 | null | null |
iclr
| 0.5 | 1 | null |
main
| 5.333333 |
5;5;6
|
2;2;3
| null |
Model-Based Robust Adaptive Semantic Segmentation
| null | null | 2.333333 | 3.666667 |
Reject
|
4;3;4
|
2;2;3
|
null | null |
2022
| 2.25 | null | null | 0 | null | null | null |
2;2;3;2
| null | null | null |
Heterogeneous Graphs;Graph Neural Networks;GNN;Equivariance
| null | 2 | null | null |
iclr
| 0 | 0 | null |
main
| 5 |
5;5;5;5
|
3;4;4;3
| null |
Equivariant Heterogeneous Graph Networks
| null | null | 3.5 | 3.5 |
Reject
|
4;4;2;4
|
2;1;3;2
|
null | null |
2022
| 2.25 | null | null | 0 | null | null | null |
2;3;2;2
| null | null | null |
Reinforcement learning;generalization
| null | 2 | null | null |
iclr
| 0 | 0 | null |
main
| 4.5 |
3;5;5;5
|
3;3;3;3
| null |
Disentangling Generalization in Reinforcement Learning
| null | null | 3 | 4 |
Reject
|
4;4;4;4
|
2;2;2;2
|
null |
Microsoft Research at Redmond; Microsoft Cloud + AI
|
2022
| 2.666667 |
https://iclr.cc/virtual/2022/poster/6312; None
| null | 0 | null | null | null |
2;3;3
| null |
Chunyuan Li, Jianwei Yang, Pengchuan Zhang, Mei Gao, Bin Xiao, Xiyang Dai, Lu Yuan, Jianfeng Gao
|
https://iclr.cc/virtual/2022/poster/6312
|
self-supervised learning;vision transformers;non-contrastive region-matching task
| null | 3 | null |
https://openreview.net/forum?id=fVu3o-YUGQK
|
iclr
| -1 | 0 | null |
main
| 7.333333 |
6;8;8
|
4;4;4
|
https://iclr.cc/virtual/2022/poster/6312
|
Efficient Self-supervised Vision Transformers for Representation Learning
|
https://github.com/microsoft/esvit
| null | 4 | 4.333333 |
Poster
|
5;4;4
|
3;3;3
|
null | null |
2022
| 1.5 | null | null | 0 | null | null | null |
2;2;1;1
| null | null | null |
LSTM;Data Aggregation;Time Series
| null | 2 | null | null |
iclr
| 0 | 0 | null |
main
| 3 |
3;3;3;3
|
2;3;3;2
| null |
A Study of Aggregation of Long Time-series Input for LSTM Neural Networks
| null | null | 2.5 | 4 |
Reject
|
4;5;3;4
|
1;2;2;3
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
2;3;2;3
| null | null | null |
Model-based reinforcement leanring;Multi-teacher knowledge distillation;Emsemble learning
| null | 2.5 | null | null |
iclr
| -0.57735 | -0.333333 | null |
main
| 4.5 |
3;5;5;5
|
3;2;3;3
| null |
MOBA: Multi-teacher Model Based Reinforcement Learning
| null | null | 2.75 | 3.5 |
Withdraw
|
4;3;4;3
|
2;2;3;3
|
null |
Duke University, USA; Duke University, USA; King Abdullah University of Science and Technology, Saudi Arabia
|
2022
| 3.5 |
https://iclr.cc/virtual/2022/poster/6859; None
| null | 0 | null | null | null |
3;4;3;4
| null |
Qitong Gao, Dong Wang, Joshua Amason, Siyang Yuan, Chenyang Tao, Ricardo Henao, Majda Hadziahmetovic, Lawrence Carin, Miroslav Pajic
|
https://iclr.cc/virtual/2022/poster/6859
|
Missing Data;Reinforcement Learning;Representation Learning
| null | 2.5 | null |
https://openreview.net/forum?id=fXHl76nO2AZ
|
iclr
| 0 | 0.333333 | null |
main
| 6.5 |
6;6;6;8
|
4;3;4;4
|
https://iclr.cc/virtual/2022/poster/6859
|
Gradient Importance Learning for Incomplete Observations
|
https://github.com/gaoqitong/gradient-importance-learning
| null | 3.75 | 3 |
Poster
|
4;3;2;3
|
3;3;4;0
|
null | null |
2022
| 2.2 | null | null | 0 | null | null | null |
2;2;2;2;3
| null | null | null | null | null | 2.2 | null | null |
iclr
| -0.527046 | -0.583333 | null |
main
| 4.4 |
3;3;5;5;6
|
4;3;3;3;3
| null |
MemREIN: Rein the Domain Shift for Cross-Domain Few-Shot Learning
| null | null | 3.2 | 4 |
Withdraw
|
5;4;3;4;4
|
2;2;2;2;3
|
null | null |
2022
| 2 | null | null | 0 | null | null | null |
2;2;3;1
| null | null | null |
Bioinformatics;Protein function prediction;Machine Learning
| null | 2 | null | null |
iclr
| -0.333333 | 0.333333 | null |
main
| 2.5 |
1;3;3;3
|
2;4;2;2
| null |
An Effective GCN-based Hierarchical Multi-label classification for Protein Function Prediction
| null | null | 2.5 | 4.75 |
Reject
|
5;5;5;4
|
2;2;3;1
|
null | null |
2022
| 2.75 | null | null | 0 | null | null | null |
2;2;3;4
| null | null | null |
Binary Neural Networks;Hardware-Friendly Neural Architecture Design
| null | 1.75 | null | null |
iclr
| 0.756745 | -0.070535 | null |
main
| 4.75 |
3;3;5;8
|
3;3;2;3
| null |
BoolNet: Streamlining Binary Neural Networks Using Binary Feature Maps
| null | null | 2.75 | 3.75 |
Reject
|
4;2;4;5
|
2;2;0;3
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
2;2;3;3
| null | null | null |
deep learning;simulation;optimization;inverse problems;physics
| null | 2.75 | null | null |
iclr
| 0.301511 | 0.57735 | null |
main
| 4.5 |
3;3;6;6
|
2;2;2;4
| null |
Physical Gradients for Deep Learning
| null | null | 2.5 | 3.25 |
Reject
|
2;4;4;3
|
2;2;4;3
|
null |
Paper under double-blind review
|
2022
| 2 | null | null | 0 | null | null | null |
2;2;2;2
| null | null | null |
language emergence;language grounding;compositionality;systematicity;few-shot learning
| null | 2 | null | null |
iclr
| -0.816497 | 1 | null |
main
| 2.5 |
1;3;3;3
|
1;3;3;3
| null |
Meta-Referential Games to Learn Compositional Learning Behaviours
| null | null | 2.5 | 3 |
Reject
|
4;3;3;2
|
2;2;2;2
|
null | null |
2022
| 2 | null | null | 0 | null | null | null |
2;2;2;2
| null | null | null |
Attention;multi-scale;phrase information;sparsity scheme
| null | 2 | null | null |
iclr
| 0 | 0 | null |
main
| 3 |
3;3;3;3
|
3;3;3;2
| null |
Multi-scale fusion self attention mechanism
| null | null | 2.75 | 3.75 |
Reject
|
3;3;4;5
|
2;2;2;2
|
null |
Southeast University, Monash University; Monash University; MILA; Southeast University
|
2022
| 2.25 |
https://iclr.cc/virtual/2022/poster/6154; None
| null | 0 | null | null | null |
2;2;2;3
| null |
Tongtong Wu, Massimo Caccia, Zhuang Li, Yuan-Fang Li, Guilin Qi, Gholamreza Haffari
|
https://iclr.cc/virtual/2022/poster/6154
|
Continual Learning;Pre-trained Language Model
| null | 2.75 | null |
https://openreview.net/forum?id=figzpGMrdD
|
iclr
| 0.800641 | 0 | null |
main
| 5.5 |
3;5;6;8
|
4;3;3;4
|
https://iclr.cc/virtual/2022/poster/6154
|
Pretrained Language Model in Continual Learning: A Comparative Study
| null | null | 3.5 | 4.25 |
Poster
|
4;4;4;5
|
2;2;3;4
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
2;2;3;3
| null | null | null |
Graph Neural Networks;Transformer;Graph Coarsening
| null | 2.5 | null | null |
iclr
| 0.196116 | 0 | null |
main
| 5.5 |
3;5;6;8
|
3;3;3;3
| null |
Coarformer: Transformer for large graph via graph coarsening
| null | null | 3 | 4 |
Reject
|
4;3;5;4
|
2;2;3;3
|
null | null |
2022
| 2 | null | null | 0 | null | null | null |
1;2;3
| null | null | null | null | null | 2.333333 | null | null |
iclr
| -0.802955 | 0.993399 | null |
main
| 3.333333 |
1;3;6
|
1;2;3
| null |
Folded Hamiltonian Monte Carlo for Bayesian Generative Adversarial Networks
| null | null | 2 | 3.333333 |
Reject
|
4;3;3
|
2;2;3
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
3;2;3;2
| null | null | null |
computational microscopy;computational photography;computer vision;deep learning
| null | 2.75 | null | null |
iclr
| 0 | 0 | null |
main
| 6 |
6;6;6;6
|
4;3;3;4
| null |
Programmable 3D snapshot microscopy with Fourier convolutional networks
| null | null | 3.5 | 3.25 |
Reject
|
3;4;3;3
|
3;3;2;3
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
2;2;3;3
| null | null | null |
Explainability;Neural Explainer;Faithfullness;Global;Post-hoc
| null | 2.5 | null | null |
iclr
| -0.57735 | 0 | null |
main
| 4.5 |
3;3;6;6
|
3;3;3;3
| null |
MAGNEx: A Model Agnostic Global Neural Explainer
| null | null | 3 | 3.75 |
Reject
|
4;4;4;3
|
2;2;3;3
|
null |
ShanghaiTech University & Shanghai Engineering Research Center of Intelligent Vision and Imaging & Shanghai Engineering Research Center of Energy Efficient and Custom AI IC; Youtu Lab, Tencent; ShanghaiTech University
|
2022
| 2.333333 |
https://iclr.cc/virtual/2022/poster/6277; None
| null | 0 | null | null | null |
3;1;3
| null |
Dongze Lian, Zehao Yu, Xing Sun, Shenghua Gao
|
https://iclr.cc/virtual/2022/poster/6277
|
Architecture Design;MLP;Classification;Detection;Segmentation
| null | 3 | null |
https://openreview.net/forum?id=fvLLcIYmXb
|
iclr
| -0.5 | 0 | null |
main
| 5.333333 |
5;5;6
|
3;3;3
|
https://iclr.cc/virtual/2022/poster/6277
|
AS-MLP: An Axial Shifted MLP Architecture for Vision
|
https://github.com/svip-lab/AS-MLP
| null | 3 | 4.333333 |
Poster
|
4;5;4
|
4;2;3
|
null | null |
2022
| 2.25 | null | null | 0 | null | null | null |
2;2;2;3
| null |
Hannah Lawrence
| null |
Dictionary learning;generative priors;sparsity;alternating minimization;linear transformation;transfer learning;compression;algorithms
| null | 1.75 | null | null |
iclr
| 0 | 0.555556 | null |
main
| 4.25 |
3;3;5;6
|
2;3;3;3
| null |
Dictionary Learning Under Generative Coefficient Priors with Applications to Compression
| null | null | 2.75 | 4 |
Withdraw
|
4;4;4;4
|
2;2;1;2
|
null | null |
2022
| 2 | null | null | 0 | null | null | null |
2;2;2;2
| null | null | null |
Multi-task RL;Decision Transformer;self-supervised RL;Pretraining
| null | 1.5 | null | null |
iclr
| 0.57735 | 0.57735 | null |
main
| 4 |
3;3;5;5
|
2;3;3;3
| null |
Semi-supervised Offline Reinforcement Learning with Pre-trained Decision Transformers
| null | null | 2.75 | 3.75 |
Reject
|
3;4;4;4
|
2;2;2;0
|
null | null |
2022
| 2.25 | null | null | 0 | null | null | null |
2;2;1;4
| null | null | null | null | null | 2 | null | null |
iclr
| 0 | 0.272166 | null |
main
| 4.25 |
3;3;5;6
|
3;3;2;4
| null |
Improving Fairness via Federated Learning
| null | null | 3 | 4 |
Reject
|
4;4;4;4
|
2;2;0;4
|
null |
Center for Data Science, New York University; Department of Computer Science, University of Maryland; Department of Mathematics, University of Maryland
|
2022
| 2.75 |
https://iclr.cc/virtual/2022/poster/7067; None
| null | 0 | null | null | null |
2;2;3;4
| null |
Liam H Fowl, Jonas Geiping, Wojciech Czaja, Micah Goldblum, Tom Goldstein
|
https://iclr.cc/virtual/2022/poster/7067
|
Privacy;Federated Learning;Gradient Inversion
| null | 2.75 | null |
https://openreview.net/forum?id=fwzUgo0FM9v
|
iclr
| 0.688247 | 0.688247 | null |
main
| 6.25 |
5;6;6;8
|
3;4;3;4
|
https://iclr.cc/virtual/2022/poster/7067
|
Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models
| null | null | 3.5 | 3 |
Poster
|
2;4;2;4
|
2;2;3;4
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
3;2;3;2
| null | null | null |
NLP;disentanglement;unsupervised learning;controllable generation.
| null | 2.75 | null | null |
iclr
| 0 | 0.866025 | null |
main
| 6 |
5;5;6;8
|
2;3;3;4
| null |
Towards Unsupervised Content Disentanglement in Sentence Representations via Syntactic Roles
| null | null | 3 | 3 |
Reject
|
3;3;3;3
|
2;3;3;3
|
null |
Kim Jaechul Graduate School of AI, KAIST, Daejeon, Republic of Korea; School of Computing, KAIST, Daejeon, Republic of Korea
|
2022
| 2.5 |
https://iclr.cc/virtual/2022/poster/6945; None
| null | 0 | null | null | null |
2;3;3;2
| null |
Sunghoon Hong, Deunsol Yoon, Kee-Eung Kim
|
https://iclr.cc/virtual/2022/poster/6945
|
Multitask Reinforcement Learning;Modular Reinforcement Learning;Transfer Learning;Transformer;Structural Embedding
| null | 2.75 | null |
https://openreview.net/forum?id=fy_XRVHqly
|
iclr
| 0.174078 | 0.333333 | null |
main
| 5.75 |
5;6;6;6
|
3;4;3;3
|
https://iclr.cc/virtual/2022/poster/6945
|
Structure-Aware Transformer Policy for Inhomogeneous Multi-Task Reinforcement Learning
| null | null | 3.25 | 3.25 |
Poster
|
3;2;4;4
|
2;3;3;3
|
null | null |
2022
| 1.75 | null | null | 0 | null | null | null |
2;2;1;2
| null | null | null |
Behavior Cloning;Learning from demonstration
| null | 1.5 | null | null |
iclr
| -0.707107 | 0.19245 | null |
main
| 2 |
1;1;3;3
|
1;4;4;2
| null |
Improving Learning from Demonstrations by Learning from Experience
| null | null | 2.75 | 4 |
Withdraw
|
4;5;4;3
|
1;2;1;2
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
2;2;3;3
| null | null | null |
Noisy Labels;Label Correction;Meta-Learning
| null | 2.5 | null | null |
iclr
| -0.863868 | 0 | null |
main
| 4.75 |
3;3;5;8
|
3;3;3;3
| null |
Learning with Noisy Labels by Efficient Transition Matrix Estimation to Combat Label Miscorrection
| null | null | 3 | 4 |
Withdraw
|
5;4;4;3
|
2;2;3;3
|
null |
Department of Brain and Cognitive Sciences, MIT; Center for Brains, Minds and Machines, MIT; McGovern Institute for Brain Research, MIT; Department of Brain and Cognitive Sciences, MIT; Center for Brains, Minds and Machines, MIT; McGovern Institute for Brain Research, MIT; University of Augsburg; Ludwig Maximilian University; Technical University of Munich
|
2022
| 3 |
https://iclr.cc/virtual/2022/poster/6890; None
| null | 0 | null | null | null |
3;3;3;3
| null |
Franziska Geiger, Martin Schrimpf, Tiago Marques, James DiCarlo
|
https://iclr.cc/virtual/2022/poster/6890
|
computational neuroscience;primate visual ventral stream;convolutional neural networks;biologically plausible learning
| null | 3.25 | null |
https://openreview.net/forum?id=g1SzIRLQXMM
|
iclr
| 0 | 0 | null |
main
| 8 |
8;8;8;8
|
3;4;3;4
|
https://iclr.cc/virtual/2022/poster/6890
|
Wiring Up Vision: Minimizing Supervised Synaptic Updates Needed to Produce a Primate Ventral Stream
| null | null | 3.5 | 3.75 |
Spotlight
|
3;4;4;4
|
2;4;4;3
|
null |
Technical University of Munich
|
2022
| 3.333333 |
https://iclr.cc/virtual/2022/poster/5940; None
| null | 0 | null | null | null |
3;3;4
| null |
Daniel Zügner, Bertrand Charpentier, Morgane Ayle, Sascha Geringer, Stephan Günnemann
|
https://iclr.cc/virtual/2022/poster/5940
|
hierarchical clustering;graphs;networks;graph mining;network mining;graph custering
| null | 2 | null |
https://openreview.net/forum?id=g2LCQwG7Of
|
iclr
| -1 | 1 | null |
main
| 6.666667 |
6;6;8
|
3;3;4
|
https://iclr.cc/virtual/2022/poster/5940
|
End-to-End Learning of Probabilistic Hierarchies on Graphs
| null | null | 3.333333 | 3.666667 |
Poster
|
4;4;3
|
3;3;0
|
null | null |
2022
| 2.5 | null | null | 0 | null | null | null |
3;2;2;3
| null | null | null |
unsupervised reinforcement learning;open-ended learning;skill discovery
| null | 1.5 | null | null |
iclr
| -0.870388 | 0 | null |
main
| 3.75 |
3;3;3;6
|
3;3;3;3
| null |
Rewardless Open-Ended Learning (ROEL)
| null | null | 3 | 3.25 |
Reject
|
4;4;3;2
|
2;2;0;2
|
null | null |
2022
| 3 | null | null | 0 | null | null | null |
2;3;3;4
| null | null | null | null | null | 2.25 | null | null |
iclr
| 0 | 0 | null |
main
| 5.25 |
3;5;5;8
|
3;3;3;3
| null |
Multilevel physics informed neural networks (MPINNs)
| null | null | 3 | 4 |
Reject
|
4;4;4;4
|
2;3;3;1
|
null |
Division of Decision and Control Systems, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden; LSC, NCMIS, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China; School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China; Department of Electrical Engineering, University of Notre Dame, IN, USA
|
2022
| 3 |
https://iclr.cc/virtual/2022/poster/6933; None
| null | 0 | null | null | null |
3;3;3
| null |
Ruinan Jin, Yu XING, Xingkang He
|
https://iclr.cc/virtual/2022/poster/6933
|
stochastic gradient descent;adaptive gradient algorithm;asymptotic convergence
| null | 1 | null |
https://openreview.net/forum?id=g5tANwND04i
|
iclr
| 0 | 0 | null |
main
| 6 |
6;6;6
|
4;2;4
|
https://iclr.cc/virtual/2022/poster/6933
|
On the Convergence of mSGD and AdaGrad for Stochastic Optimization
| null | null | 3.333333 | 3 |
Poster
|
3;4;2
|
3;0;0
|
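The per-reviewer fields in the rows above (rating, confidence, correctness, technical_novelty, empirical_novelty) are semicolon-delimited score strings, and the corresponding *_avg columns hold their means. A small parsing sketch using values copied from the first preview row; the helper name `parse_scores` is illustrative only:

```python
def parse_scores(field):
    """Split a semicolon-delimited score string such as '3;3;6' into a list of ints."""
    return [] if field is None else [int(s) for s in field.split(";")]

# Values copied from the first preview row above.
record = {
    "title": "Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient",
    "rating": "3;3;6",
    "confidence": "5;3;3",
    "status": "Reject",
}

ratings = parse_scores(record["rating"])
print(sum(ratings) / len(ratings))  # 4.0, matching the rating_avg column for this row
```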