title (string, length 8-155) | citations_google_scholar (int64, 0-28.9k) | conference (string, 5 classes) | forks (int64, 0-46.3k) | issues (int64, 0-12.2k) | lastModified (string, length 19-26) | repo_url (string, length 26-130) | stars (int64, 0-75.9k) | title_google_scholar (string, length 8-155) | url_google_scholar (string, length 75-206) | watchers (int64, 0-2.77k) | year (int64, 2.02k-2.02k)
---|---|---|---|---|---|---|---|---|---|---|---
TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning | 221 | icml | 12 | 4 | 2023-06-17 03:10:45.844000 | https://github.com/istarjun/TapNet | 52 | Tapnet: Neural network augmented with task-adaptive projection for few-shot learning | https://scholar.google.com/scholar?cluster=12575801957058912486&hl=en&as_sdt=0,5 | 3 | 2019
Towards Accurate Model Selection in Deep Unsupervised Domain Adaptation | 110 | icml | 10 | 2 | 2023-06-17 03:10:46.060000 | https://github.com/thuml/Deep-Embedded-Validation | 58 | Towards accurate model selection in deep unsupervised domain adaptation | https://scholar.google.com/scholar?cluster=2565642679287912484&hl=en&as_sdt=0,39 | 3 | 2019
Position-aware Graph Neural Networks | 386 | icml | 75 | 11 | 2023-06-17 03:10:46.276000 | https://github.com/JiaxuanYou/P-GNN | 367 | Position-aware graph neural networks | https://scholar.google.com/scholar?cluster=2886623965746954945&hl=en&as_sdt=0,5 | 15 | 2019
DAG-GNN: DAG Structure Learning with Graph Neural Networks | 277 | icml | 55 | 21 | 2023-06-17 03:10:46.492000 | https://github.com/fishmoon1234/DAG-GNN | 233 | DAG-GNN: DAG structure learning with graph neural networks | https://scholar.google.com/scholar?cluster=12962909633357312064&hl=en&as_sdt=0,34 | 8 | 2019
Multi-Agent Adversarial Inverse Reinforcement Learning | 90 | icml | 26 | 6 | 2023-06-17 03:10:46.723000 | https://github.com/ermongroup/MA-AIRL | 153 | Multi-agent adversarial inverse reinforcement learning | https://scholar.google.com/scholar?cluster=13913946030309510400&hl=en&as_sdt=0,5 | 16 | 2019
Online Adaptive Principal Component Analysis and Its extensions | 4 | icml | 1 | 0 | 2023-06-17 03:10:46.938000 | https://github.com/yuanx270/online-adaptive-PCA | 7 | Online adaptive principal component analysis and its extensions | https://scholar.google.com/scholar?cluster=11284462216308687300&hl=en&as_sdt=0,34 | 2 | 2019
Bayesian Nonparametric Federated Learning of Neural Networks | 394 | icml | 30 | 2 | 2023-06-17 03:10:47.156000 | https://github.com/IBM/probabilistic-federated-neural-matching | 120 | Bayesian nonparametric federated learning of neural networks | https://scholar.google.com/scholar?cluster=14489502397862024393&hl=en&as_sdt=0,21 | 15 | 2019
Dirichlet Simplex Nest and Geometric Inference | 5 | icml | 1 | 0 | 2023-06-17 03:10:47.371000 | https://github.com/moonfolk/VLAD | 3 | Dirichlet simplex nest and geometric inference | https://scholar.google.com/scholar?cluster=3107204927758089702&hl=en&as_sdt=0,36 | 1 | 2019
Making Convolutional Networks Shift-Invariant Again | 669 | icml | 206 | 14 | 2023-06-17 03:10:47.587000 | https://github.com/adobe/antialiased-cnns | 1,613 | Making convolutional networks shift-invariant again | https://scholar.google.com/scholar?cluster=6405795848737680233&hl=en&as_sdt=0,5 | 39 | 2019
Warm-starting Contextual Bandits: Robustly Combining Supervised and Bandit Feedback | 27 | icml | 2 | 0 | 2023-06-17 03:10:47.802000 | https://github.com/zcc1307/warmcb_scripts | 4 | Warm-starting contextual bandits: Robustly combining supervised and bandit feedback | https://scholar.google.com/scholar?cluster=13381714542277312288&hl=en&as_sdt=0,5 | 2 | 2019
Self-Attention Generative Adversarial Networks | 3,723 | icml | 173 | 17 | 2023-06-17 03:10:48.018000 | https://github.com/brain-research/self-attention-gan | 967 | Self-attention generative adversarial networks | https://scholar.google.com/scholar?cluster=7330853420568873733&hl=en&as_sdt=0,31 | 38 | 2019
LatentGNN: Learning Efficient Non-local Relations for Visual Recognition | 78 | icml | 14 | 2 | 2023-06-17 03:10:48.233000 | https://github.com/latentgnn/LatentGNN-V1-PyTorch | 74 | Latentgnn: Learning efficient non-local relations for visual recognition | https://scholar.google.com/scholar?cluster=7578360606999759452&hl=en&as_sdt=0,5 | 6 | 2019
Bridging Theory and Algorithm for Domain Adaptation | 530 | icml | 27 | 5 | 2023-06-17 03:10:48.449000 | https://github.com/thuml/MDD | 121 | Bridging theory and algorithm for domain adaptation | https://scholar.google.com/scholar?cluster=12036658661059863941&hl=en&as_sdt=0,5 | 5 | 2019
SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning | 234 | icml | 18 | 5 | 2023-06-17 03:10:48.665000 | https://github.com/sharadmv/parasol | 67 | Solar: Deep structured representations for model-based reinforcement learning | https://scholar.google.com/scholar?cluster=3160286257401504607&hl=en&as_sdt=0,33 | 3 | 2019
Theoretically Principled Trade-off between Robustness and Accuracy | 1,779 | icml | 119 | 3 | 2023-06-17 03:10:48.880000 | https://github.com/yaodongyu/TRADES | 474 | Theoretically principled trade-off between robustness and accuracy | https://scholar.google.com/scholar?cluster=3311622924435738798&hl=en&as_sdt=0,5 | 10 | 2019
Interpreting Adversarially Trained Convolutional Neural Networks | 120 | icml | 9 | 0 | 2023-06-17 03:10:49.095000 | https://github.com/PKUAI26/AT-CNN | 62 | Interpreting adversarially trained convolutional neural networks | https://scholar.google.com/scholar?cluster=6664229559742953811&hl=en&as_sdt=0,5 | 7 | 2019
Adaptive Monte Carlo Multiple Testing via Multi-Armed Bandits | 15 | icml | 3 | 0 | 2023-06-17 03:10:49.311000 | https://github.com/martinjzhang/AMT | 4 | Adaptive monte carlo multiple testing via multi-armed bandits | https://scholar.google.com/scholar?cluster=17419761528871683302&hl=en&as_sdt=0,44 | 0 | 2019
Maximum Entropy-Regularized Multi-Goal Reinforcement Learning | 73 | icml | 6 | 1 | 2023-06-17 03:10:49.527000 | https://github.com/ruizhaogit/mep | 21 | Maximum entropy-regularized multi-goal reinforcement learning | https://scholar.google.com/scholar?cluster=12004531622883216435&hl=en&as_sdt=0,5 | 3 | 2019
Stochastic Iterative Hard Thresholding for Graph-structured Sparsity Optimization | 9 | icml | 3 | 0 | 2023-06-17 03:10:49.742000 | https://github.com/baojianzhou/graph-sto-iht | 3 | Stochastic iterative hard thresholding for graph-structured sparsity optimization | https://scholar.google.com/scholar?cluster=4121937272467164287&hl=en&as_sdt=0,26 | 2 | 2019
Transferable Clean-Label Poisoning Attacks on Deep Neural Nets | 227 | icml | 10 | 4 | 2023-06-17 03:10:49.964000 | https://github.com/zhuchen03/ConvexPolytopePosioning | 28 | Transferable clean-label poisoning attacks on deep neural nets | https://scholar.google.com/scholar?cluster=457598797512585014&hl=en&as_sdt=0,5 | 3 | 2019
The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Sharp Minima and Regularization Effects | 197 | icml | 0 | 0 | 2023-06-17 03:10:50.205000 | https://github.com/uuujf/SGDNoise | 11 | The anisotropic noise in stochastic gradient descent: Its behavior of escaping from sharp minima and regularization effects | https://scholar.google.com/scholar?cluster=8530319537943237114&hl=en&as_sdt=0,5 | 2 | 2019
Latent Normalizing Flows for Discrete Sequences | 103 | icml | 15 | 4 | 2023-06-17 03:10:50.420000 | https://github.com/harvardnlp/TextFlow | 113 | Latent normalizing flows for discrete sequences | https://scholar.google.com/scholar?cluster=14468956623112090674&hl=en&as_sdt=0,36 | 11 | 2019
Fast Context Adaptation via Meta-Learning | 342 | icml | 39 | 1 | 2023-06-17 03:10:50.635000 | https://github.com/lmzintgraf/cavia | 126 | Fast context adaptation via meta-learning | https://scholar.google.com/scholar?cluster=731845317332872337&hl=en&as_sdt=0,36 | 6 | 2019
A distributional view on multi-objective policy optimization | 51 | icml | 613 | 69 | 2023-06-17 03:56:43.582000 | https://github.com/deepmind/dm_control | 3,200 | A distributional view on multi-objective policy optimization | https://scholar.google.com/scholar?cluster=8438162900583355554&hl=en&as_sdt=0,11 | 127 | 2020
An Optimistic Perspective on Offline Reinforcement Learning | 366 | icml | 72 | 9 | 2023-06-17 03:56:43.785000 | https://github.com/google-research/batch_rl | 441 | An optimistic perspective on offline reinforcement learning | https://scholar.google.com/scholar?cluster=199235154784983919&hl=en&as_sdt=0,37 | 12 | 2020
LazyIter: A Fast Algorithm for Counting Markov Equivalent DAGs and Designing Experiments | 8 | icml | 0 | 0 | 2023-06-17 03:56:43.987000 | https://github.com/teshnizi/LazyIter | 7 | Lazyiter: a fast algorithm for counting Markov equivalent DAGs and designing experiments | https://scholar.google.com/scholar?cluster=11588857558630683059&hl=en&as_sdt=0,5 | 1 | 2020
Restarted Bayesian Online Change-point Detector achieves Optimal Detection Delay | 16 | icml | 0 | 0 | 2023-06-17 03:56:44.190000 | https://github.com/Ralami1859/Restarted-BOCPD | 2 | Restarted Bayesian online change-point detector achieves optimal detection delay | https://scholar.google.com/scholar?cluster=12357062813763301915&hl=en&as_sdt=0,5 | 2 | 2020
Structural Language Models of Code | 82 | icml | 7 | 6 | 2023-06-17 03:56:44.392000 | https://github.com/tech-srl/slm-code-generation | 75 | Structural language models of code | https://scholar.google.com/scholar?cluster=12400277411486589122&hl=en&as_sdt=0,44 | 11 | 2020
LowFER: Low-rank Bilinear Pooling for Link Prediction | 31 | icml | 5 | 0 | 2023-06-17 03:56:44.595000 | https://github.com/suamin/LowFER | 12 | LowFER: Low-rank bilinear pooling for link prediction | https://scholar.google.com/scholar?cluster=6369643568974944132&hl=en&as_sdt=0,5 | 0 | 2020
Discount Factor as a Regularizer in Reinforcement Learning | 42 | icml | 2 | 1 | 2023-06-17 03:56:44.797000 | https://github.com/ron-amit/Discount_as_Regularizer | 5 | Discount factor as a regularizer in reinforcement learning | https://scholar.google.com/scholar?cluster=4222677586854479535&hl=en&as_sdt=0,33 | 2 | 2020
The Differentiable Cross-Entropy Method | 45 | icml | 10 | 0 | 2023-06-17 03:56:44.998000 | https://github.com/facebookresearch/dcem | 118 | The differentiable cross-entropy method | https://scholar.google.com/scholar?cluster=5207717261153832790&hl=en&as_sdt=0,5 | 9 | 2020
Fairwashing explanations with off-manifold detergent | 73 | icml | 3 | 0 | 2023-06-17 03:56:45.201000 | https://github.com/fairwashing/fairwashing | 10 | Fairwashing explanations with off-manifold detergent | https://scholar.google.com/scholar?cluster=869145400827496969&hl=en&as_sdt=0,14 | 2 | 2020
Online metric algorithms with untrusted predictions | 96 | icml | 1 | 0 | 2023-06-17 03:56:45.403000 | https://github.com/adampolak/mts-with-predictions | 1 | Online metric algorithms with untrusted predictions | https://scholar.google.com/scholar?cluster=8779637967313325541&hl=en&as_sdt=0,24 | 4 | 2020
Invertible generative models for inverse problems: mitigating representation error and dataset bias | 113 | icml | 14 | 3 | 2023-06-17 03:56:45.605000 | https://github.com/CACTuS-AI/GlowIP | 17 | Invertible generative models for inverse problems: mitigating representation error and dataset bias | https://scholar.google.com/scholar?cluster=18360186920065669378&hl=en&as_sdt=0,5 | 5 | 2020
Forecasting Sequential Data Using Consistent Koopman Autoencoders | 78 | icml | 17 | 2 | 2023-06-17 03:56:45.807000 | https://github.com/erichson/koopmanAE | 45 | Forecasting sequential data using consistent koopman autoencoders | https://scholar.google.com/scholar?cluster=604581388291751037&hl=en&as_sdt=0,5 | 4 | 2020
Learning De-biased Representations with Biased Representations | 178 | icml | 28 | 0 | 2023-06-17 03:56:46.008000 | https://github.com/clovaai/rebias | 152 | Learning de-biased representations with biased representations | https://scholar.google.com/scholar?cluster=2454950202861832490&hl=en&as_sdt=0,33 | 7 | 2020
UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training | 281 | icml | 1,868 | 365 | 2023-06-17 03:56:46.210000 | https://github.com/microsoft/unilm | 12,786 | Unilmv2: Pseudo-masked language models for unified language model pre-training | https://scholar.google.com/scholar?cluster=17252701423323416900&hl=en&as_sdt=0,5 | 260 | 2020
Option Discovery in the Absence of Rewards with Manifold Analysis | 5 | icml | 0 | 0 | 2023-06-17 03:56:46.412000 | https://github.com/amitaybar/Diffusion-options | 0 | Option discovery in the absence of rewards with manifold analysis | https://scholar.google.com/scholar?cluster=5097986500723178583&hl=en&as_sdt=0,33 | 1 | 2020
Decoupled Greedy Learning of CNNs | 74 | icml | 4 | 1 | 2023-06-17 03:56:46.614000 | https://github.com/eugenium/DGL | 24 | Decoupled greedy learning of cnns | https://scholar.google.com/scholar?cluster=984410843298404679&hl=en&as_sdt=0,41 | 7 | 2020
Efficient Policy Learning from Surrogate-Loss Classification Reductions | 16 | icml | 1 | 0 | 2023-06-17 03:56:46.816000 | https://github.com/CausalML/ESPRM | 2 | Efficient policy learning from surrogate-loss classification reductions | https://scholar.google.com/scholar?cluster=17482295204063069180&hl=en&as_sdt=0,33 | 3 | 2020
Training Neural Networks for and by Interpolation | 35 | icml | 5 | 0 | 2023-06-17 03:56:47.018000 | https://github.com/oval-group/ali-g | 22 | Training neural networks for and by interpolation | https://scholar.google.com/scholar?cluster=12646838748171851359&hl=en&as_sdt=0,26 | 3 | 2020
Implicit differentiation of Lasso-type models for hyperparameter optimization | 50 | icml | 14 | 20 | 2023-06-17 03:56:47.220000 | https://github.com/QB3/sparse-ho | 37 | Implicit differentiation of lasso-type models for hyperparameter optimization | https://scholar.google.com/scholar?cluster=9364706080727749786&hl=en&as_sdt=0,5 | 6 | 2020
The Boomerang Sampler | 32 | icml | 0 | 0 | 2023-06-17 03:56:47.421000 | https://github.com/jbierkens/ICML-boomerang | 7 | The boomerang sampler | https://scholar.google.com/scholar?cluster=8538965772361697464&hl=en&as_sdt=0,5 | 4 | 2020
Fast Differentiable Sorting and Ranking | 132 | icml | 40 | 11 | 2023-06-17 03:56:47.623000 | https://github.com/google-research/fast-soft-sort | 483 | Fast differentiable sorting and ranking | https://scholar.google.com/scholar?cluster=1601300606865471199&hl=en&as_sdt=0,10 | 15 | 2020
Beyond Signal Propagation: Is Feature Diversity Necessary in Deep Neural Network Initialization? | 12 | icml | 0 | 0 | 2023-06-17 03:56:47.826000 | https://github.com/yanivbl6/BeyondSigProp | 2 | Beyond signal propagation: is feature diversity necessary in deep neural network initialization? | https://scholar.google.com/scholar?cluster=12443428565734084047&hl=en&as_sdt=0,5 | 1 | 2020
Deep Coordination Graphs | 130 | icml | 20 | 9 | 2023-06-17 03:56:48.027000 | https://github.com/wendelinboehmer/dcg | 65 | Deep coordination graphs | https://scholar.google.com/scholar?cluster=8113641514627174064&hl=en&as_sdt=0,23 | 5 | 2020
Lorentz Group Equivariant Neural Network for Particle Physics | 97 | icml | 6 | 0 | 2023-06-17 03:56:48.229000 | https://github.com/fizisist/LorentzGroupNetwork | 40 | Lorentz group equivariant neural network for particle physics | https://scholar.google.com/scholar?cluster=354482020847877812&hl=en&as_sdt=0,33 | 5 | 2020
Proper Network Interpretability Helps Adversarial Robustness in Classification | 49 | icml | 3 | 0 | 2023-06-17 03:56:48.431000 | https://github.com/AkhilanB/Proper-Interpretability | 11 | Proper network interpretability helps adversarial robustness in classification | https://scholar.google.com/scholar?cluster=9035074662025671292&hl=en&as_sdt=0,15 | 4 | 2020
Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks | 106 | icml | 3 | 5 | 2023-06-17 03:56:48.632000 | https://github.com/Pehlevan-Group/NTK_Learning_Curves | 3 | Spectrum dependent learning curves in kernel regression and wide neural networks | https://scholar.google.com/scholar?cluster=3712020461682803664&hl=en&as_sdt=0,5 | 4 | 2020
Latent Variable Modelling with Hyperbolic Normalizing Flows | 40 | icml | 7 | 23 | 2023-06-17 03:56:48.834000 | https://github.com/joeybose/HyperbolicNF | 52 | Latent variable modelling with hyperbolic normalizing flows | https://scholar.google.com/scholar?cluster=16943766719750515886&hl=en&as_sdt=0,15 | 3 | 2020
Preference Modeling with Context-Dependent Salient Features | 9 | icml | 0 | 0 | 2023-06-17 03:56:49.052000 | https://github.com/Amandarg/salient_features | 1 | Preference modeling with context-dependent salient features | https://scholar.google.com/scholar?cluster=14377795205947287878&hl=en&as_sdt=0,14 | 2 | 2020
All in the Exponential Family: Bregman Duality in Thermodynamic Variational Inference | 12 | icml | 0 | 0 | 2023-06-17 03:56:49.254000 | https://github.com/vmasrani/tvo_all_in | 0 | All in the exponential family: Bregman duality in thermodynamic variational inference | https://scholar.google.com/scholar?cluster=6653952944869299139&hl=en&as_sdt=0,47 | 0 | 2020
Estimating the Number and Effect Sizes of Non-null Hypotheses | 8 | icml | 0 | 0 | 2023-06-17 03:56:49.455000 | https://github.com/jenniferbrennan/CountingDiscoveries | 1 | Estimating the number and effect sizes of non-null hypotheses | https://scholar.google.com/scholar?cluster=13761193891605574377&hl=en&as_sdt=0,44 | 1 | 2020
GNN-FiLM: Graph Neural Networks with Feature-wise Linear Modulation | 83 | icml | 229 | 10 | 2023-06-17 03:56:49.658000 | https://github.com/microsoft/tf-gnn-samples | 877 | Gnn-film: Graph neural networks with feature-wise linear modulation | https://scholar.google.com/scholar?cluster=17006226546313472447&hl=en&as_sdt=0,1 | 35 | 2020
TaskNorm: Rethinking Batch Normalization for Meta-Learning | 82 | icml | 22 | 1 | 2023-06-17 03:56:49.860000 | https://github.com/cambridge-mlg/cnaps | 152 | Tasknorm: Rethinking batch normalization for meta-learning | https://scholar.google.com/scholar?cluster=5780176448524951533&hl=en&as_sdt=0,5 | 11 | 2020
Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences | 78 | icml | 4 | 0 | 2023-06-17 03:56:50.083000 | https://github.com/dsbrown1331/bayesianrex | 11 | Safe imitation learning via fast bayesian reward inference from preferences | https://scholar.google.com/scholar?cluster=7057495303121096550&hl=en&as_sdt=0,5 | 3 | 2020
Empirical Study of the Benefits of Overparameterization in Learning Latent Variable Models | 21 | icml | 1 | 0 | 2023-06-17 03:56:50.284000 | https://github.com/clinicalml/overparam | 6 | Empirical study of the benefits of overparameterization in learning latent variable models | https://scholar.google.com/scholar?cluster=18021082651755132236&hl=en&as_sdt=0,5 | 2 | 2020
DeBayes: a Bayesian Method for Debiasing Network Embeddings | 54 | icml | 1 | 1 | 2023-06-17 03:56:50.486000 | https://github.com/aida-ugent/DeBayes | 7 | Debayes: a bayesian method for debiasing network embeddings | https://scholar.google.com/scholar?cluster=12507703931590961178&hl=en&as_sdt=0,47 | 2 | 2020
Online Learned Continual Compression with Adaptive Quantization Modules | 55 | icml | 5 | 0 | 2023-06-17 03:56:50.691000 | https://github.com/pclucas14/adaptive-quantization-modules | 26 | Online learned continual compression with adaptive quantization modules | https://scholar.google.com/scholar?cluster=4962059148023200241&hl=en&as_sdt=0,5 | 4 | 2020
Near-linear time Gaussian process optimization with adaptive batching and resparsification | 19 | icml | 1 | 0 | 2023-06-17 03:56:50.893000 | https://github.com/luigicarratino/batch-bkb | 11 | Near-linear time Gaussian process optimization with adaptive batching and resparsification | https://scholar.google.com/scholar?cluster=9965392032053007731&hl=en&as_sdt=0,44 | 3 | 2020
Poisson Learning: Graph Based Semi-Supervised Learning At Very Low Label Rates | 55 | icml | 19 | 0 | 2023-06-17 03:56:51.095000 | https://github.com/jwcalder/GraphLearning | 62 | Poisson learning: Graph based semi-supervised learning at very low label rates | https://scholar.google.com/scholar?cluster=11788739359346189749&hl=en&as_sdt=0,43 | 3 | 2020
Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills | 85 | icml | 0 | 0 | 2023-06-17 03:56:51.297000 | https://github.com/imatge-upc/edl | 3 | Explore, discover and learn: Unsupervised discovery of state-covering skills | https://scholar.google.com/scholar?cluster=6344383621952136699&hl=en&as_sdt=0,5 | 2 | 2020
Data preprocessing to mitigate bias: A maximum entropy based approach | 29 | icml | 1 | 2 | 2023-06-17 03:56:51.499000 | https://github.com/vijaykeswani/Fair-Max-Entropy-Distributions | 8 | Data preprocessing to mitigate bias: A maximum entropy based approach | https://scholar.google.com/scholar?cluster=1389448522545210547&hl=en&as_sdt=0,10 | 3 | 2020
Concise Explanations of Neural Networks using Adversarial Training | 40 | icml | 1 | 17 | 2023-06-17 03:56:51.701000 | https://github.com/jfc43/advex | 5 | Concise explanations of neural networks using adversarial training | https://scholar.google.com/scholar?cluster=13018632630820208929&hl=en&as_sdt=0,10 | 3 | 2020
Optimizing for the Future in Non-Stationary MDPs | 48 | icml | 0 | 3 | 2023-06-17 03:56:51.903000 | https://github.com/yashchandak/OptFuture_NSMDP | 7 | Optimizing for the future in non-stationary mdps | https://scholar.google.com/scholar?cluster=2732891290707774950&hl=en&as_sdt=0,33 | 2 | 2020
Learning to Simulate and Design for Structural Engineering | 27 | icml | 0 | 0 | 2023-06-17 03:56:52.105000 | https://github.com/AutodeskAILab/LSDSE-Dataset | 7 | Learning to simulate and design for structural engineering | https://scholar.google.com/scholar?cluster=3089482596592308925&hl=en&as_sdt=0,33 | 3 | 2020
Invariant Rationalization | 119 | icml | 4 | 11 | 2023-06-17 03:56:52.306000 | https://github.com/code-terminator/invariant_rationalization | 43 | Invariant rationalization | https://scholar.google.com/scholar?cluster=2718521387879023599&hl=en&as_sdt=0,10 | 4 | 2020
Explainable and Discourse Topic-aware Neural Language Understanding | 5 | icml | 2 | 4 | 2023-06-17 03:56:52.508000 | https://github.com/YatinChaudhary/NCLM | 9 | Explainable and discourse topic-aware neural language understanding | https://scholar.google.com/scholar?cluster=159060864795495099&hl=en&as_sdt=0,31 | 2 | 2020
Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training | 61 | icml | 13 | 0 | 2023-06-17 03:56:52.710000 | https://github.com/TAMU-VITA/Self-PU | 51 | Self-pu: Self boosted and calibrated positive-unlabeled training | https://scholar.google.com/scholar?cluster=10514971696768538295&hl=en&as_sdt=0,5 | 15 | 2020
Graph Optimal Transport for Cross-Domain Alignment | 106 | icml | 21 | 3 | 2023-06-17 03:56:52.912000 | https://github.com/LiqunChen0606/Graph-Optimal-Transport | 131 | Graph optimal transport for cross-domain alignment | https://scholar.google.com/scholar?cluster=13506984443465445309&hl=en&as_sdt=0,5 | 6 | 2020
Stabilizing Differentiable Architecture Search via Perturbation-based Regularization | 141 | icml | 12 | 1 | 2023-06-17 03:56:53.114000 | https://github.com/xiangning-chen/SmoothDARTS | 70 | Stabilizing differentiable architecture search via perturbation-based regularization | https://scholar.google.com/scholar?cluster=16658085005261012709&hl=en&as_sdt=0,34 | 3 | 2020
Convolutional Kernel Networks for Graph-Structured Data | 47 | icml | 9 | 2 | 2023-06-17 03:56:53.316000 | https://github.com/claying/GCKN | 47 | Convolutional kernel networks for graph-structured data | https://scholar.google.com/scholar?cluster=6544343038344215140&hl=en&as_sdt=0,14 | 5 | 2020
A Simple Framework for Contrastive Learning of Visual Representations | 10,491 | icml | 570 | 69 | 2023-06-17 03:56:53.522000 | https://github.com/google-research/simclr | 3,562 | A simple framework for contrastive learning of visual representations | https://scholar.google.com/scholar?cluster=13219652991368821610&hl=en&as_sdt=0,23 | 46 | 2020
Retro*: Learning Retrosynthetic Planning with Neural Guided A* Search | 66 | icml | 20 | 11 | 2023-06-17 03:56:53.723000 | https://github.com/binghong-ml/retro_star | 101 | Retro*: learning retrosynthetic planning with neural guided A* search | https://scholar.google.com/scholar?cluster=6946559653071134529&hl=en&as_sdt=0,5 | 4 | 2020
Differentiable Product Quantization for End-to-End Embedding Compression | 37 | icml | 10 | 3 | 2023-06-17 03:56:53.924000 | https://github.com/chentingpc/dpq_embedding_compression | 52 | Differentiable product quantization for end-to-end embedding compression | https://scholar.google.com/scholar?cluster=15237200124504416658&hl=en&as_sdt=0,34 | 4 | 2020
VFlow: More Expressive Generative Flows with Variational Data Augmentation | 46 | icml | 3 | 0 | 2023-06-17 03:56:54.126000 | https://github.com/thu-ml/vflow | 34 | Vflow: More expressive generative flows with variational data augmentation | https://scholar.google.com/scholar?cluster=3780987304943068813&hl=en&as_sdt=0,26 | 10 | 2020
Generative Pretraining From Pixels | 1,046 | icml | 362 | 13 | 2023-06-17 03:56:54.329000 | https://github.com/openai/image-gpt | 1,909 | Generative pretraining from pixels | https://scholar.google.com/scholar?cluster=7981583694904172555&hl=en&as_sdt=0,5 | 81 | 2020
Simple and Deep Graph Convolutional Networks | 828 | icml | 64 | 12 | 2023-06-17 03:56:54.531000 | https://github.com/chennnM/GCNII | 270 | Simple and deep graph convolutional networks | https://scholar.google.com/scholar?cluster=16283804483876681464&hl=en&as_sdt=0,18 | 6 | 2020
On Breaking Deep Generative Model-based Defenses and Beyond | 5 | icml | 2 | 0 | 2023-06-17 03:56:54.734000 | https://github.com/cyz-ai/attack_DGM | 7 | On breaking deep generative model-based defenses and beyond | https://scholar.google.com/scholar?cluster=13887603208363837628&hl=en&as_sdt=0,39 | 2 | 2020
Automated Synthetic-to-Real Generalization | 63 | icml | 4 | 3 | 2023-06-17 03:56:54.935000 | https://github.com/NVlabs/ASG | 30 | Automated synthetic-to-real generalization | https://scholar.google.com/scholar?cluster=14261788417891163581&hl=en&as_sdt=0,3 | 16 | 2020
CLUB: A Contrastive Log-ratio Upper Bound of Mutual Information | 147 | icml | 35 | 7 | 2023-06-17 03:56:55.137000 | https://github.com/Linear95/CLUB | 226 | Club: A contrastive log-ratio upper bound of mutual information | https://scholar.google.com/scholar?cluster=384230567728582843&hl=en&as_sdt=0,43 | 7 | 2020
Streaming Coresets for Symmetric Tensor Factorization | 9 | icml | 0 | 0 | 2023-06-17 03:56:55.339000 | https://github.com/supratim05/Streaming-Coresets-for-Symmetric-Tensor-Factorization | 0 | Streaming coresets for symmetric tensor factorization | https://scholar.google.com/scholar?cluster=17659645573901290217&hl=en&as_sdt=0,18 | 2 | 2020
Fair Generative Modeling via Weak Supervision | 69 | icml | 7 | 6 | 2023-06-17 03:56:55.540000 | https://github.com/ermongroup/fairgen | 15 | Fair generative modeling via weak supervision | https://scholar.google.com/scholar?cluster=17083056249871731008&hl=en&as_sdt=0,36 | 4 | 2020
Distance Metric Learning with Joint Representation Diversification | 7 | icml | 0 | 0 | 2023-06-17 03:56:55.742000 | https://github.com/YangLin122/JRD | 1 | Distance metric learning with joint representation diversification | https://scholar.google.com/scholar?cluster=1557397264873578069&hl=en&as_sdt=0,33 | 1 | 2020
Estimating Generalization under Distribution Shifts via Domain-Invariant Representations | 32 | icml | 2 | 0 | 2023-06-17 03:56:55.944000 | https://github.com/chingyaoc/estimating-generalization | 21 | Estimating generalization under distribution shifts via domain-invariant representations | https://scholar.google.com/scholar?cluster=2002502648003109319&hl=en&as_sdt=0,10 | 3 | 2020
Boosting Frank-Wolfe by Chasing Gradients | 24 | icml | 2 | 0 | 2023-06-17 03:56:56.145000 | https://github.com/cyrillewcombettes/boostfw | 3 | Boosting Frank-Wolfe by chasing gradients | https://scholar.google.com/scholar?cluster=3076591881269921139&hl=en&as_sdt=0,5 | 2 | 2020
Learnable Group Transform For Time-Series | 14 | icml | 4 | 0 | 2023-06-17 03:56:56.347000 | https://github.com/Koldh/LearnableGroupTransform-TimeSeries | 7 | Learnable group transform for time-series | https://scholar.google.com/scholar?cluster=11923042673090742544&hl=en&as_sdt=0,5 | 3 | 2020
Causal Modeling for Fairness In Dynamical Systems | 42 | icml | 4 | 5 | 2023-06-17 03:56:56.549000 | https://github.com/ecreager/causal-dyna-fair | 8 | Causal modeling for fairness in dynamical systems | https://scholar.google.com/scholar?cluster=12839359629093476958&hl=en&as_sdt=0,44 | 4 | 2020
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack | 307 | icml | 8 | 0 | 2023-06-17 03:56:56.751000 | https://github.com/fra31/fab-attack | 32 | Minimally distorted adversarial examples with a fast adaptive boundary attack | https://scholar.google.com/scholar?cluster=11433432412885384423&hl=en&as_sdt=0,25 | 2 | 2020
Scalable Deep Generative Modeling for Sparse Graphs | 40 | icml | 7,322 | 1,026 | 2023-06-17 03:56:56.953000 | https://github.com/google-research/google-research | 29,791 | Scalable deep generative modeling for sparse graphs | https://scholar.google.com/scholar?cluster=13017453490963979295&hl=en&as_sdt=0,34 | 727 | 2020
Confidence Sets and Hypothesis Testing in a Likelihood-Free Inference Setting | 16 | icml | 1 | 2 | 2023-06-17 03:56:57.155000 | https://github.com/Mr8ND/ACORE-LFI | 9 | Confidence sets and hypothesis testing in a likelihood-free inference setting | https://scholar.google.com/scholar?cluster=14385524652709102879&hl=en&as_sdt=0,5 | 2 | 2020
Adversarial Attacks on Probabilistic Autoregressive Forecasting Models | 20 | icml | 11 | 7 | 2023-06-17 03:56:57.356000 | https://github.com/eth-sri/probabilistic-forecasts-attacks | 29 | Adversarial attacks on probabilistic autoregressive forecasting models | https://scholar.google.com/scholar?cluster=1773916962694787403&hl=en&as_sdt=0,5 | 8 | 2020
Combining Differentiable PDE Solvers and Graph Neural Networks for Fluid Flow Prediction | 116 | icml | 31 | 5 | 2023-06-17 03:56:57.558000 | https://github.com/locuslab/cfd-gcn | 91 | Combining differentiable PDE solvers and graph neural networks for fluid flow prediction | https://scholar.google.com/scholar?cluster=5822388869556870864&hl=en&as_sdt=0,23 | 9 | 2020
Randomly Projected Additive Gaussian Processes for Regression | 24 | icml | 3 | 1 | 2023-06-17 03:56:57.759000 | https://github.com/idelbrid/Randomly-Projected-Additive-GPs | 24 | Randomly projected additive Gaussian processes for regression | https://scholar.google.com/scholar?cluster=11838391975313028153&hl=en&as_sdt=0,5 | 4 | 2020
Non-convex Learning via Replica Exchange Stochastic Gradient MCMC | 27 | icml | 4 | 0 | 2023-06-17 03:56:57.961000 | https://github.com/gaoliyao/Replica_Exchange_Stochastic_Gradient_MCMC | 22 | Non-convex learning via replica exchange stochastic gradient mcmc | https://scholar.google.com/scholar?cluster=6979152849103979749&hl=en&as_sdt=0,5 | 4 | 2020
A Swiss Army Knife for Minimax Optimal Transport | 15 | icml | 1 | 0 | 2023-06-17 03:56:58.163000 | https://github.com/sofiendhouib/minimax_OT | 6 | A swiss army knife for minimax optimal transport | https://scholar.google.com/scholar?cluster=2500404421772704612&hl=en&as_sdt=0,5 | 2 | 2020
Margin-aware Adversarial Domain Adaptation with Optimal Transport | 22 | icml | 3 | 0 | 2023-06-17 03:56:58.365000 | https://github.com/sofiendhouib/MADAOT | 14 | Margin-aware adversarial domain adaptation with optimal transport | https://scholar.google.com/scholar?cluster=5511163225310216545&hl=en&as_sdt=0,1 | 1 | 2020
Growing Adaptive Multi-hyperplane Machines | 1 | icml | 1 | 0 | 2023-06-17 03:56:58.567000 | https://github.com/djurikom/BudgetedSVM | 6 | Growing adaptive multi-hyperplane machines | https://scholar.google.com/scholar?cluster=8685157416945290118&hl=en&as_sdt=0,43 | 1 | 2020
Towards Adaptive Residual Network Training: A Neural-ODE Perspective | 22 | icml | 0 | 0 | 2023-06-17 03:56:58.769000 | https://github.com/shwinshaker/LipGrow | 14 | Towards adaptive residual network training: A neural-ode perspective | https://scholar.google.com/scholar?cluster=790808977072857265&hl=en&as_sdt=0,5 | 4 | 2020
On the Expressivity of Neural Networks for Deep Reinforcement Learning | 21 | icml | 5 | 0 | 2023-06-17 03:56:58.971000 | https://github.com/roosephu/boots | 11 | On the expressivity of neural networks for deep reinforcement learning | https://scholar.google.com/scholar?cluster=10031650459091105952&hl=en&as_sdt=0,10 | 4 | 2020
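
The rows above form a flat table with one record per paper. As a minimal sketch, assuming the table has been exported to a CSV file (the name `icml_papers.csv` below is hypothetical) with the column names from the header, it could be loaded and queried with pandas like this:

```python
# Minimal sketch (not part of the dataset itself): load the table and run a
# couple of simple queries. "icml_papers.csv" is a hypothetical export of the
# rows above; column names follow the schema in the header row.
import pandas as pd

df = pd.read_csv(
    "icml_papers.csv",
    thousands=",",                 # counts such as "10,491" use comma separators
    parse_dates=["lastModified"],
)

# Five most-starred repositories per conference year.
top_starred = (
    df.sort_values("stars", ascending=False)
      .groupby("year")
      .head(5)[["title", "repo_url", "stars", "citations_google_scholar", "year"]]
)
print(top_starred.to_string(index=False))

# Rough relationship between Scholar citations and GitHub activity.
print(df[["citations_google_scholar", "stars", "forks", "watchers"]].corr())
```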