| title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
|---|---|---|---|---|---|---|---|---|---|---|---|
| A Temporal-Difference Approach to Policy Gradient Estimation | 1 | icml | 0 | 0 | 2023-06-17 04:55:39.110000 | https://github.com/samuelepolimi/temporal-difference-gradient | 3 | A Temporal-Difference Approach to Policy Gradient Estimation | https://scholar.google.com/scholar?cluster=12213929390329707477&hl=en&as_sdt=0,40 | 2 | 2022 |
| Nesterov Accelerated Shuffling Gradient Method for Convex Optimization | 5 | icml | 0 | 0 | 2023-06-17 04:55:39.317000 | https://github.com/htt-trangtran/nasg | 0 | Nesterov accelerated shuffling gradient method for convex optimization | https://scholar.google.com/scholar?cluster=14735125807077653853&hl=en&as_sdt=0,5 | 1 | 2022 |
| Tackling covariate shift with node-based Bayesian neural networks | 4 | icml | 0 | 0 | 2023-06-17 04:55:39.523000 | https://github.com/aaltopml/node-bnn-covariate-shift | 6 | Tackling covariate shift with node-based Bayesian neural networks | https://scholar.google.com/scholar?cluster=8088780476336589916&hl=en&as_sdt=0,33 | 7 | 2022 |
| Prototype Based Classification from Hierarchy to Fairness | 1 | icml | 0 | 0 | 2023-06-17 04:55:39.729000 | https://github.com/mycal-tucker/csn | 1 | Prototype Based Classification from Hierarchy to Fairness | https://scholar.google.com/scholar?cluster=11530419927101336822&hl=en&as_sdt=0,5 | 2 | 2022 |
| Path-Gradient Estimators for Continuous Normalizing Flows | 2 | icml | 2 | 0 | 2023-06-17 04:55:39.935000 | https://github.com/lenz3000/ffjord-path | 5 | Path-gradient estimators for continuous normalizing flows | https://scholar.google.com/scholar?cluster=102102598474391702&hl=en&as_sdt=0,33 | 0 | 2022 |
| EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning | 10 | icml | 1 | 0 | 2023-06-17 04:55:40.140000 | https://github.com/amitport/eden-distributed-mean-estimation | 7 | Eden: Communication-efficient and robust distributed mean estimation for federated learning | https://scholar.google.com/scholar?cluster=3209500586717789200&hl=en&as_sdt=0,34 | 2 | 2022 |
| Correlation Clustering via Strong Triadic Closure Labeling: Fast Approximation Algorithms and Practical Lower Bounds | 6 | icml | 1 | 0 | 2023-06-17 04:55:40.346000 | https://github.com/nveldt/fastcc-via-stc | 0 | Correlation Clustering via Strong Triadic Closure Labeling: Fast Approximation Algorithms and Practical Lower Bounds | https://scholar.google.com/scholar?cluster=18023293593694212775&hl=en&as_sdt=0,23 | 1 | 2022 |
| The CLRS Algorithmic Reasoning Benchmark | 15 | icml | 48 | 4 | 2023-06-17 04:55:40.552000 | https://github.com/deepmind/clrs | 304 | The CLRS algorithmic reasoning benchmark | https://scholar.google.com/scholar?cluster=9181302241653376962&hl=en&as_sdt=0,5 | 13 | 2022 |
| Bregman Power k-Means for Clustering Exponential Family Data | 3 | icml | 1 | 0 | 2023-06-17 04:55:40.759000 | https://github.com/avellal14/bregman_power_kmeans | 3 | Bregman power k-means for clustering exponential family data | https://scholar.google.com/scholar?cluster=10416936130963333532&hl=en&as_sdt=0,33 | 2 | 2022 |
| Calibrated Learning to Defer with One-vs-All Classifiers | 8 | icml | 1 | 0 | 2023-06-17 04:55:40.965000 | https://github.com/rajevv/ova-l2d | 1 | Calibrated learning to defer with one-vs-all classifiers | https://scholar.google.com/scholar?cluster=8829480964232923072&hl=en&as_sdt=0,33 | 1 | 2022 |
| Bayesian Nonparametrics for Offline Skill Discovery | 2 | icml | 1 | 0 | 2023-06-17 04:55:41.171000 | https://github.com/layer6ai-labs/bnpo | 4 | Bayesian nonparametrics for offline skill discovery | https://scholar.google.com/scholar?cluster=5074347961003664860&hl=en&as_sdt=0,33 | 4 | 2022 |
| Hermite Polynomial Features for Private Data Generation | 5 | icml | 1 | 1 | 2023-06-17 04:55:41.376000 | https://github.com/parklabml/dp-hp | 3 | Hermite polynomial features for private data generation | https://scholar.google.com/scholar?cluster=16485118791106646859&hl=en&as_sdt=0,31 | 2 | 2022 |
| Multirate Training of Neural Networks | 3 | icml | 3 | 0 | 2023-06-17 04:55:41.583000 | https://github.com/tiffanyvlaar/multiratetrainingofnns | 3 | Multirate training of neural networks | https://scholar.google.com/scholar?cluster=14672109036130949413&hl=en&as_sdt=0,33 | 2 | 2022 |
| Provably Adversarially Robust Nearest Prototype Classifiers | 1 | icml | 0 | 0 | 2023-06-17 04:55:41.788000 | https://github.com/vvoracek/provably-adversarially-robust-nearest-prototype-classifiers | 4 | Provably Adversarially Robust Nearest Prototype Classifiers | https://scholar.google.com/scholar?cluster=12783036933914721155&hl=en&as_sdt=0,21 | 1 | 2022 |
| Towards Evaluating Adaptivity of Model-Based Reinforcement Learning Methods | 4 | icml | 0 | 0 | 2023-06-17 04:55:41.994000 | https://github.com/chandar-lab/LoCA2 | 2 | Towards evaluating adaptivity of model-based reinforcement learning methods | https://scholar.google.com/scholar?cluster=8278156303366460605&hl=en&as_sdt=0,50 | 3 | 2022 |
| Fast Lossless Neural Compression with Integer-Only Discrete Flows | 3 | icml | 2 | 0 | 2023-06-17 04:55:42.200000 | https://github.com/thu-ml/iodf | 15 | Fast Lossless Neural Compression with Integer-Only Discrete Flows | https://scholar.google.com/scholar?cluster=9606476142959964204&hl=en&as_sdt=0,39 | 8 | 2022 |
| Accelerating Shapley Explanation via Contributive Cooperator Selection | 3 | icml | 1 | 0 | 2023-06-17 04:55:42.406000 | https://github.com/guanchuwang/shear | 10 | Accelerating Shapley Explanation via Contributive Cooperator Selection | https://scholar.google.com/scholar?cluster=2493376524235633954&hl=en&as_sdt=0,5 | 2 | 2022 |
| Denoised MDPs: Learning World Models Better Than the World Itself | 11 | icml | 8 | 0 | 2023-06-17 04:55:42.612000 | https://github.com/facebookresearch/denoised_mdp | 118 | Denoised mdps: Learning world models better than the world itself | https://scholar.google.com/scholar?cluster=4094945741122544681&hl=en&as_sdt=0,33 | 138 | 2022 |
| Robust Models Are More Interpretable Because Attributions Look Normal | 5 | icml | 1 | 1 | 2023-06-17 04:55:42.818000 | https://github.com/zifanw/boundary | 6 | Robust models are more interpretable because attributions look normal | https://scholar.google.com/scholar?cluster=14430069598728045155&hl=en&as_sdt=0,5 | 1 | 2022 |
| VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix | 13 | icml | 0 | 1 | 2023-06-17 04:55:43.024000 | https://github.com/ttengwang/vlmixer | 14 | Vlmixer: Unpaired vision-language pre-training via cross-modal cutmix | https://scholar.google.com/scholar?cluster=6137962123845990063&hl=en&as_sdt=0,5 | 6 | 2022 |
| DynaMixer: A Vision MLP Architecture with Dynamic Mixing | 14 | icml | 1 | 1 | 2023-06-17 04:55:43.229000 | https://github.com/ziyuwwang/dynamixer | 19 | Dynamixer: a vision MLP architecture with dynamic mixing | https://scholar.google.com/scholar?cluster=9756910838903336255&hl=en&as_sdt=0,5 | 1 | 2022 |
| Improving Screening Processes via Calibrated Subset Selection | 7 | icml | 2 | 0 | 2023-06-17 04:55:43.445000 | https://github.com/LequnWang/Improve-Screening-via-Calibrated-Subset-Selection | 2 | Improving screening processes via calibrated subset selection | https://scholar.google.com/scholar?cluster=9485317495432772346&hl=en&as_sdt=0,19 | 1 | 2022 |
| What Dense Graph Do You Need for Self-Attention? | 1 | icml | 3 | 0 | 2023-06-17 04:55:43.651000 | https://github.com/yxzwang/normalized-information-payload | 7 | What Dense Graph Do You Need for Self-Attention? | https://scholar.google.com/scholar?cluster=6817431716045479667&hl=en&as_sdt=0,33 | 2 | 2022 |
| Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation | 21 | icml | 0 | 0 | 2023-06-17 04:55:43.858000 | https://github.com/wangwenxiao/FiniteAggregation | 5 | Improved certified defenses against data poisoning with (deterministic) finite aggregation | https://scholar.google.com/scholar?cluster=13385935402210758494&hl=en&as_sdt=0,33 | 1 | 2022 |
| Understanding Gradual Domain Adaptation: Improved Analysis, Optimal Path and Beyond | 8 | icml | 0 | 0 | 2023-06-17 04:55:44.063000 | https://github.com/Haoxiang-Wang/gradual-domain-adaptation | 6 | Understanding gradual domain adaptation: Improved analysis, optimal path and beyond | https://scholar.google.com/scholar?cluster=8368642919883535588&hl=en&as_sdt=0,33 | 3 | 2022 |
| Convergence and Recovery Guarantees of the K-Subspaces Method for Subspace Clustering | 2 | icml | 0 | 0 | 2023-06-17 04:55:44.268000 | https://github.com/peng8wang/icml2022-k-subspaces | 1 | Convergence and recovery guarantees of the k-subspaces method for subspace clustering | https://scholar.google.com/scholar?cluster=4190201275040810423&hl=en&as_sdt=0,15 | 1 | 2022 |
| NP-Match: When Neural Processes meet Semi-Supervised Learning | 11 | icml | 20 | 0 | 2023-06-17 04:55:44.475000 | https://github.com/jianf-wang/np-match | 126 | Np-match: When neural processes meet semi-supervised learning | https://scholar.google.com/scholar?cluster=13863868059773263765&hl=en&as_sdt=0,5 | 14 | 2022 |
| Improving Task-free Continual Learning by Distributionally Robust Memory Evolution | 11 | icml | 0 | 0 | 2023-06-17 04:55:44.680000 | https://github.com/joey-wang123/DRO-Task-free | 10 | Improving task-free continual learning by distributionally robust memory evolution | https://scholar.google.com/scholar?cluster=14894776006626228965&hl=en&as_sdt=0,47 | 1 | 2022 |
| Provable Domain Generalization via Invariant-Feature Subspace Recovery | 10 | icml | 3 | 0 | 2023-06-17 04:55:44.887000 | https://github.com/haoxiang-wang/isr | 15 | Provable domain generalization via invariant-feature subspace recovery | https://scholar.google.com/scholar?cluster=16846223791215545357&hl=en&as_sdt=0,46 | 3 | 2022 |
| ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training | 15 | icml | 5 | 1 | 2023-06-17 04:55:45.093000 | https://github.com/a514514772/progfed | 14 | ProgFed: effective, communication, and computation efficient federated learning by progressive training | https://scholar.google.com/scholar?cluster=14093452975120098193&hl=en&as_sdt=0,5 | 2 | 2022 |
| Approximately Equivariant Networks for Imperfectly Symmetric Dynamics | 25 | icml | 0 | 0 | 2023-06-17 04:55:45.299000 | https://github.com/rose-stl-lab/approximately-equivariant-nets | 7 | Approximately equivariant networks for imperfectly symmetric dynamics | https://scholar.google.com/scholar?cluster=5872423159806810171&hl=en&as_sdt=0,10 | 1 | 2022 |
| Understanding Instance-Level Impact of Fairness Constraints | 6 | icml | 0 | 1 | 2023-06-17 04:55:45.505000 | https://github.com/ucsc-real/fairinfl | 5 | Understanding instance-level impact of fairness constraints | https://scholar.google.com/scholar?cluster=3186856282017277340&hl=en&as_sdt=0,4 | 1 | 2022 |
| Causal Dynamics Learning for Task-Independent State Abstraction | 11 | icml | 4 | 2 | 2023-06-17 04:55:45.711000 | https://github.com/wangzizhao/causaldynamicslearning | 16 | Causal dynamics learning for task-independent state abstraction | https://scholar.google.com/scholar?cluster=7092132108841275612&hl=en&as_sdt=0,33 | 1 | 2022 |
| Generative Coarse-Graining of Molecular Conformations | 14 | icml | 5 | 0 | 2023-06-17 04:55:45.918000 | https://github.com/wwang2/coarsegrainingvae | 22 | Generative coarse-graining of molecular conformations | https://scholar.google.com/scholar?cluster=6589570772523921711&hl=en&as_sdt=0,44 | 4 | 2022 |
| How Powerful are Spectral Graph Neural Networks | 34 | icml | 9 | 0 | 2023-06-17 04:55:46.123000 | https://github.com/graphpku/jacobiconv | 56 | How powerful are spectral graph neural networks | https://scholar.google.com/scholar?cluster=17960766448265380456&hl=en&as_sdt=0,33 | 1 | 2022 |
| Thompson Sampling for Robust Transfer in Multi-Task Bandits | 1 | icml | 0 | 0 | 2023-06-17 04:55:46.329000 | https://github.com/zhiwang123/eps-mpmab-ts | 0 | Thompson Sampling for Robust Transfer in Multi-Task Bandits | https://scholar.google.com/scholar?cluster=9498764153726193190&hl=en&as_sdt=0,15 | 2 | 2022 |
| Removing Batch Normalization Boosts Adversarial Training | 12 | icml | 0 | 1 | 2023-06-17 04:55:46.534000 | https://github.com/amazon-research/normalizer-free-robust-training | 17 | Removing batch normalization boosts adversarial training | https://scholar.google.com/scholar?cluster=4233277386290159249&hl=en&as_sdt=0,39 | 4 | 2022 |
| Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition | 11 | icml | 5 | 5 | 2023-06-17 04:55:46.740000 | https://github.com/amazon-research/long-tailed-ood-detection | 29 | Partial and asymmetric contrastive learning for out-of-distribution detection in long-tailed recognition | https://scholar.google.com/scholar?cluster=14212057730611759763&hl=en&as_sdt=0,33 | 6 | 2022 |
| Certifying Out-of-Domain Generalization for Blackbox Functions | 7 | icml | 0 | 0 | 2023-06-17 04:55:46.946000 | https://github.com/ds3lab/certified-generalization | 2 | Certifying out-of-domain generalization for blackbox functions | https://scholar.google.com/scholar?cluster=5540253257951212310&hl=en&as_sdt=0,5 | 6 | 2022 |
| To Smooth or Not? When Label Smoothing Meets Noisy Labels | 11 | icml | 9 | 2 | 2023-06-17 04:55:47.152000 | https://github.com/ucsc-real/negative-label-smoothing | 75 | To smooth or not? when label smoothing meets noisy labels | https://scholar.google.com/scholar?cluster=18297648993704774023&hl=en&as_sdt=0,5 | 10 | 2022 |
| Mitigating Neural Network Overconfidence with Logit Normalization | 45 | icml | 12 | 3 | 2023-06-17 04:55:47.359000 | https://github.com/hongxin001/logitnorm_ood | 113 | Mitigating neural network overconfidence with logit normalization | https://scholar.google.com/scholar?cluster=3765768230173383060&hl=en&as_sdt=0,19 | 1 | 2022 |
| Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification | 27 | icml | 37 | 0 | 2023-06-17 04:55:47.565000 | https://github.com/JonasGeiping/breaching | 178 | Fishing for user data in large-batch federated learning via gradient magnification | https://scholar.google.com/scholar?cluster=11388041584211331417&hl=en&as_sdt=0,34 | 3 | 2022 |
| Measure Estimation in the Barycentric Coding Model | 2 | icml | 0 | 0 | 2023-06-17 04:55:47.771000 | https://github.com/mattwerenski/bcm | 2 | Measure Estimation in the Barycentric Coding Model | https://scholar.google.com/scholar?cluster=3529680784651732155&hl=en&as_sdt=0,3 | 2 | 2022 |
| COLA: Consistent Learning with Opponent-Learning Awareness | 19 | icml | 0 | 0 | 2023-06-17 04:55:47.977000 | https://github.com/aidandos/cola | 5 | COLA: consistent learning with opponent-learning awareness | https://scholar.google.com/scholar?cluster=14450342073245803366&hl=en&as_sdt=0,33 | 2 | 2022 |
| Easy Variational Inference for Categorical Models via an Independent Binary Approximation | 0 | icml | 0 | 0 | 2023-06-17 04:55:48.184000 | https://github.com/tufts-ml/categorical-from-binary | 2 | Easy Variational Inference for Categorical Models via an Independent Binary Approximation | https://scholar.google.com/scholar?cluster=13180457782658047792&hl=en&as_sdt=0,36 | 4 | 2022 |
| Continual Learning with Guarantees via Weight Interval Constraints | 1 | icml | 0 | 0 | 2023-06-17 04:55:48.390000 | https://github.com/gmum/intercontinet | 2 | Continual Learning with Guarantees via Weight Interval Constraints | https://scholar.google.com/scholar?cluster=12644818321484154250&hl=en&as_sdt=0,33 | 5 | 2022 |
| A Deep Learning Approach for the Segmentation of Electroencephalography Data in Eye Tracking Applications | 0 | icml | 0 | 0 | 2023-06-17 04:55:48.596000 | https://github.com/lu-wo/detrtime | 12 | A Deep Learning Approach for the Segmentation of Electroencephalography Data in Eye Tracking Applications | https://scholar.google.com/scholar?cluster=561665774245262907&hl=en&as_sdt=0,10 | 2 | 2022 |
| Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time | 221 | icml | 21 | 2 | 2023-06-17 04:55:48.820000 | https://github.com/mlfoundations/model-soups | 236 | Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time | https://scholar.google.com/scholar?cluster=16922194924900565989&hl=en&as_sdt=0,5 | 10 | 2022 |
| Structural Entropy Guided Graph Hierarchical Pooling | 11 | icml | 5 | 3 | 2023-06-17 04:55:49.030000 | https://github.com/wu-junran/sep | 20 | Structural entropy guided graph hierarchical pooling | https://scholar.google.com/scholar?cluster=15391796189805731538&hl=en&as_sdt=0,26 | 1 | 2022 |
| Characterizing and Overcoming the Greedy Nature of Learning in Multi-modal Deep Neural Networks | 15 | icml | 1 | 1 | 2023-06-17 04:55:49.236000 | https://github.com/nyukat/greedy_multimodal_learning | 19 | Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks | https://scholar.google.com/scholar?cluster=12235200636315362810&hl=en&as_sdt=0,44 | 2 | 2022 |
| Robust Deep Reinforcement Learning through Bootstrapped Opportunistic Curriculum | 4 | icml | 0 | 0 | 2023-06-17 04:55:49.457000 | https://github.com/jlwu002/bcl | 4 | Robust Deep Reinforcement Learning through Bootstrapped Opportunistic Curriculum | https://scholar.google.com/scholar?cluster=6530213985097280080&hl=en&as_sdt=0,31 | 1 | 2022 |
| Flowformer: Linearizing Transformers with Conservation Flows | 13 | icml | 27 | 0 | 2023-06-17 04:55:49.663000 | https://github.com/thuml/Flowformer | 237 | Flowformer: Linearizing transformers with conservation flows | https://scholar.google.com/scholar?cluster=13534095276250575794&hl=en&as_sdt=0,5 | 8 | 2022 |
| ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning | 29 | icml | 2 | 4 | 2023-06-17 04:55:49.872000 | https://github.com/junxia97/progcl | 32 | Progcl: Rethinking hard negative mining in graph contrastive learning | https://scholar.google.com/scholar?cluster=3134502444981244972&hl=en&as_sdt=0,33 | 1 | 2022 |
| Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations | 18 | icml | 0 | 0 | 2023-06-17 04:55:50.079000 | https://github.com/ryanxhr/dwbc | 25 | Discriminator-weighted offline imitation learning from suboptimal demonstrations | https://scholar.google.com/scholar?cluster=12184701455253705252&hl=en&as_sdt=0,21 | 1 | 2022 |
| Adversarial Attack and Defense for Non-Parametric Two-Sample Tests | 1 | icml | 0 | 0 | 2023-06-17 04:55:50.285000 | https://github.com/godxuxilie/robust-tst | 3 | Adversarial Attack and Defense for Non-Parametric Two-Sample Tests | https://scholar.google.com/scholar?cluster=16006347209208499674&hl=en&as_sdt=0,5 | 2 | 2022 |
| A Theoretical Analysis on Independence-driven Importance Weighting for Covariate-shift Generalization | 4 | icml | 0 | 0 | 2023-06-17 04:55:50.492000 | https://github.com/windxrz/independence-driven-iw | 9 | A Theoretical Analysis on Independence-driven Importance Weighting for Covariate-shift Generalization | https://scholar.google.com/scholar?cluster=14134137266916397351&hl=en&as_sdt=0,5 | 1 | 2022 |
| Langevin Monte Carlo for Contextual Bandits | 6 | icml | 3 | 0 | 2023-06-17 04:55:50.699000 | https://github.com/devzhk/lmcts | 8 | Langevin monte carlo for contextual bandits | https://scholar.google.com/scholar?cluster=17947059462373456392&hl=en&as_sdt=0,5 | 1 | 2022 |
| Diversified Adversarial Attacks based on Conjugate Gradient Method | 6 | icml | 2 | 0 | 2023-06-17 04:55:50.906000 | https://github.com/yamamura-k/ACG | 5 | Diversified Adversarial Attacks based on Conjugate Gradient Method | https://scholar.google.com/scholar?cluster=13855220363786968422&hl=en&as_sdt=0,33 | 2 | 2022 |
| Cycle Representation Learning for Inductive Relation Prediction | 4 | icml | 2 | 2 | 2023-06-17 04:55:51.112000 | https://github.com/pkuyzy/cbgnn | 4 | Cycle Representation Learning for Inductive Relation Prediction | https://scholar.google.com/scholar?cluster=2061126116449549118&hl=en&as_sdt=0,41 | 1 | 2022 |
| Optimally Controllable Perceptual Lossy Compression | 2 | icml | 2 | 1 | 2023-06-17 04:55:51.318000 | https://github.com/zeyuyan/controllable-perceptual-compression | 9 | Optimally Controllable Perceptual Lossy Compression | https://scholar.google.com/scholar?cluster=15214339197144115082&hl=en&as_sdt=0,32 | 3 | 2022 |
| Self-Organized Polynomial-Time Coordination Graphs | 3 | icml | 0 | 0 | 2023-06-17 04:55:51.524000 | https://github.com/yanQval/SOP-CG | 3 | Self-organized polynomial-time coordination graphs | https://scholar.google.com/scholar?cluster=10295867697115976866&hl=en&as_sdt=0,19 | 1 | 2022 |
| Regularizing a Model-based Policy Stationary Distribution to Stabilize Offline Reinforcement Learning | 4 | icml | 0 | 0 | 2023-06-17 04:55:51.730000 | https://github.com/shentao-yang/sdm-gan_icml2022 | 2 | Regularizing a model-based policy stationary distribution to stabilize offline reinforcement learning | https://scholar.google.com/scholar?cluster=1188226225988660555&hl=en&as_sdt=0,33 | 1 | 2022 |
| Does the Data Induce Capacity Control in Deep Learning? | 13 | icml | 0 | 0 | 2023-06-17 04:55:51.935000 | https://github.com/grasp-lyrl/sloppy | 1 | Does the data induce capacity control in deep learning? | https://scholar.google.com/scholar?cluster=884919534291840762&hl=en&as_sdt=0,36 | 0 | 2022 |
| A New Perspective on the Effects of Spectrum in Graph Neural Networks | 5 | icml | 6 | 0 | 2023-06-17 04:55:52.141000 | https://github.com/qslim/gnn-spectrum | 16 | A new perspective on the effects of spectrum in graph neural networks | https://scholar.google.com/scholar?cluster=12355104145181167707&hl=en&as_sdt=0,5 | 1 | 2022 |
| A Study of Face Obfuscation in ImageNet | 90 | icml | 12 | 1 | 2023-06-17 04:55:52.349000 | https://github.com/princetonvisualai/imagenet-face-obfuscation | 40 | A study of face obfuscation in imagenet | https://scholar.google.com/scholar?cluster=18170664845630332563&hl=en&as_sdt=0,33 | 7 | 2022 |
| Improving Out-of-Distribution Robustness via Selective Augmentation | 49 | icml | 5 | 2 | 2023-06-17 04:55:52.555000 | https://github.com/huaxiuyao/LISA | 35 | Improving out-of-distribution robustness via selective augmentation | https://scholar.google.com/scholar?cluster=4894079975600009568&hl=en&as_sdt=0,31 | 1 | 2022 |
| NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework | 22 | icml | 21 | 8 | 2023-06-17 04:55:52.761000 | https://github.com/yaoxingcheng/TLM | 240 | Nlp from scratch without large-scale pretraining: A simple and efficient framework | https://scholar.google.com/scholar?cluster=3254978626719045112&hl=en&as_sdt=0,5 | 5 | 2022 |
| Feature Space Particle Inference for Neural Network Ensembles | 4 | icml | 0 | 0 | 2023-06-17 04:55:52.966000 | https://github.com/densoitlab/featurepi | 4 | Feature space particle inference for neural network ensembles | https://scholar.google.com/scholar?cluster=11870961066098934714&hl=en&as_sdt=0,44 | 3 | 2022 |
| ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks | 5 | icml | 1 | 1 | 2023-06-17 04:55:53.172000 | https://github.com/rice-eic/shiftaddnas | 11 | ShiftAddNAS: Hardware-inspired search for more accurate and efficient neural networks | https://scholar.google.com/scholar?cluster=17026416337828414455&hl=en&as_sdt=0,33 | 2 | 2022 |
| Molecular Representation Learning via Heterogeneous Motif Graph Neural Networks | 7 | icml | 10 | 0 | 2023-06-17 04:55:53.378000 | https://github.com/zhaoningyu1996/hm-gnn | 23 | Molecular representation learning via heterogeneous motif graph neural networks | https://scholar.google.com/scholar?cluster=16142260161361576450&hl=en&as_sdt=0,33 | 2 | 2022 |
| Understanding Robust Overfitting of Adversarial Training and Beyond | 13 | icml | 0 | 1 | 2023-06-17 04:55:53.584000 | https://github.com/chaojianyu/understanding-robust-overfitting | 10 | Understanding robust overfitting of adversarial training and beyond | https://scholar.google.com/scholar?cluster=4696544864566467358&hl=en&as_sdt=0,6 | 1 | 2022 |
| Reachability Constrained Reinforcement Learning | 10 | icml | 2 | 0 | 2023-06-17 04:55:53.791000 | https://github.com/mahaitongdae/Reachability_Constrained_RL | 13 | Reachability constrained reinforcement learning | https://scholar.google.com/scholar?cluster=2404570936990332675&hl=en&as_sdt=0,31 | 3 | 2022 |
| Topology-Aware Network Pruning using Multi-stage Graph Embedding and Reinforcement Learning | 12 | icml | 10 | 1 | 2023-06-17 04:55:53.996000 | https://github.com/yusx-swapp/gnn-rl-model-compression | 36 | Topology-aware network pruning using multi-stage graph embedding and reinforcement learning | https://scholar.google.com/scholar?cluster=9807843131373835884&hl=en&as_sdt=0,47 | 2 | 2022 |
| The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks | 14 | icml | 1 | 0 | 2023-06-17 04:55:54.202000 | https://github.com/yuxwind/cbs | 8 | The combinatorial brain surgeon: Pruning weights that cancel one another in neural networks | https://scholar.google.com/scholar?cluster=2256443788852509146&hl=en&as_sdt=0,11 | 1 | 2022 |
| GraphFM: Improving Large-Scale GNN Training via Feature Momentum | 8 | icml | 239 | 19 | 2023-06-17 04:55:54.407000 | https://github.com/divelab/DIG | 1503 | GraphFM: Improving large-scale GNN training via feature momentum | https://scholar.google.com/scholar?cluster=14093235266162728639&hl=en&as_sdt=0,33 | 33 | 2022 |
| Predicting Out-of-Distribution Error with the Projection Norm | 9 | icml | 0 | 0 | 2023-06-17 04:55:54.613000 | https://github.com/yaodongyu/projnorm | 13 | Predicting out-of-distribution error with the projection norm | https://scholar.google.com/scholar?cluster=14580458746203726066&hl=en&as_sdt=0,14 | 2 | 2022 |
| Robust Task Representations for Offline Meta-Reinforcement Learning via Contrastive Learning | 5 | icml | 3 | 3 | 2023-06-17 04:55:54.819000 | https://github.com/pku-ai-edge/corro | 15 | Robust task representations for offline meta-reinforcement learning via contrastive learning | https://scholar.google.com/scholar?cluster=5539110127380539643&hl=en&as_sdt=0,34 | 0 | 2022 |
| Time Is MattEr: Temporal Self-supervision for Video Transformers | 3 | icml | 4 | 1 | 2023-06-17 04:55:55.024000 | https://github.com/alinlab/temporal-selfsupervision | 26 | Time is matter: Temporal self-supervision for video transformers | https://scholar.google.com/scholar?cluster=10001737047837090145&hl=en&as_sdt=0,33 | 2 | 2022 |
| Pure Noise to the Rescue of Insufficient Data: Improving Imbalanced Classification by Training on Random Noise Images | 5 | icml | 0 | 3 | 2023-06-17 04:55:55.231000 | https://github.com/shiranzada/pure-noise | 9 | Pure noise to the rescue of insufficient data: Improving imbalanced classification by training on random noise images | https://scholar.google.com/scholar?cluster=13535908408356605995&hl=en&as_sdt=0,5 | 2 | 2022 |
| Adaptive Conformal Predictions for Time Series | 28 | icml | 10 | 1 | 2023-06-17 04:55:55.437000 | https://github.com/mzaffran/adaptiveconformalpredictionstimeseries | 30 | Adaptive conformal predictions for time series | https://scholar.google.com/scholar?cluster=6242332424381793143&hl=en&as_sdt=0,33 | 1 | 2022 |
| Multi Resolution Analysis (MRA) for Approximate Self-Attention | 2 | icml | 2 | 0 | 2023-06-17 04:55:55.643000 | https://github.com/mlpen/mra-attention | 6 | Multi Resolution Analysis (MRA) for Approximate Self-Attention | https://scholar.google.com/scholar?cluster=184055539633336213&hl=en&as_sdt=0,44 | 1 | 2022 |
| Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts | 91 | icml | 49 | 15 | 2023-06-17 04:55:55.850000 | https://github.com/zengyan-97/x-vlm | 365 | Multi-grained vision language pre-training: Aligning texts with visual concepts | https://scholar.google.com/scholar?cluster=8119995839638175849&hl=en&as_sdt=0,5 | 5 | 2022 |
| PDE-Based Optimal Strategy for Unconstrained Online Learning | 5 | icml | 0 | 0 | 2023-06-17 04:55:56.056000 | https://github.com/zhiyuzz/icml2022-pde-potential | 0 | PDE-based optimal strategy for unconstrained online learning | https://scholar.google.com/scholar?cluster=2664380085986514830&hl=en&as_sdt=0,44 | 1 | 2022 |
| Revisiting End-to-End Speech-to-Text Translation From Scratch | 11 | icml | 21 | 0 | 2023-06-17 04:55:56.269000 | https://github.com/bzhangGo/zero | 135 | Revisiting end-to-end speech-to-text translation from scratch | https://scholar.google.com/scholar?cluster=1521111115547925534&hl=en&as_sdt=0,34 | 6 | 2022 |
| GALAXY: Graph-based Active Learning at the Extreme | 5 | icml | 0 | 0 | 2023-06-17 04:55:56.476000 | https://github.com/jifanz/GALAXY | 6 | GALAXY: graph-based active learning at the extreme | https://scholar.google.com/scholar?cluster=10022632741658948627&hl=en&as_sdt=0,33 | 1 | 2022 |
| A Langevin-like Sampler for Discrete Distributions | 9 | icml | 3 | 0 | 2023-06-17 04:55:56.682000 | https://github.com/ruqizhang/discrete-langevin | 18 | A Langevin-like sampler for discrete distributions | https://scholar.google.com/scholar?cluster=3541239242626478838&hl=en&as_sdt=0,33 | 3 | 2022 |
| Rich Feature Construction for the Optimization-Generalization Dilemma | 13 | icml | 1 | 1 | 2023-06-17 04:55:56.889000 | https://github.com/tjujianyu/rfc | 8 | Rich feature construction for the optimization-generalization dilemma | https://scholar.google.com/scholar?cluster=4651591858912243934&hl=en&as_sdt=0,33 | 2 | 2022 |
| Generative Flow Networks for Discrete Probabilistic Modeling | 21 | icml | 16 | 0 | 2023-06-17 04:55:57.094000 | https://github.com/zdhnarsil/eb_gfn | 62 | Generative flow networks for discrete probabilistic modeling | https://scholar.google.com/scholar?cluster=5719959167998853445&hl=en&as_sdt=0,43 | 2 | 2022 |
| Neurotoxin: Durable Backdoors in Federated Learning | 19 | icml | 3 | 5 | 2023-06-17 04:55:57.300000 | https://github.com/jhcknzzm/federated-learning-backdoor | 39 | Neurotoxin: Durable backdoors in federated learning | https://scholar.google.com/scholar?cluster=15130248935781363426&hl=en&as_sdt=0,5 | 3 | 2022 |
| Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations | 40 | icml | 4 | 0 | 2023-06-17 04:55:57.506000 | https://github.com/HazyResearch/correct-n-contrast | 14 | Correct-n-contrast: A contrastive approach for improving robustness to spurious correlations | https://scholar.google.com/scholar?cluster=8960959356014477531&hl=en&as_sdt=0,33 | 19 | 2022 |
| Efficient Reinforcement Learning in Block MDPs: A Model-free Representation Learning approach | 25 | icml | 1 | 0 | 2023-06-17 04:55:57.712000 | https://github.com/yudasong/briee | 11 | Efficient reinforcement learning in block mdps: A model-free representation learning approach | https://scholar.google.com/scholar?cluster=10850889224658556483&hl=en&as_sdt=0,33 | 2 | 2022 |
| Set Norm and Equivariant Skip Connections: Putting the Deep in Deep Sets | 0 | icml | 2 | 0 | 2023-06-17 04:55:57.918000 | https://github.com/rajesh-lab/deep_permutation_invariant | 10 | Set Norm and Equivariant Skip Connections: Putting the Deep in Deep Sets | https://scholar.google.com/scholar?cluster=8359318767015654610&hl=en&as_sdt=0,31 | 2 | 2022 |
| Learning to Estimate and Refine Fluid Motion with Physical Dynamics | 6 | icml | 4 | 1 | 2023-06-17 04:55:58.125000 | https://github.com/erizmr/learn-to-estimate-fluid-motion | 11 | Learning to estimate and refine fluid motion with physical dynamics | https://scholar.google.com/scholar?cluster=7117659598027113757&hl=en&as_sdt=0,31 | 2 | 2022 |
| Low-Precision Stochastic Gradient Langevin Dynamics | 2 | icml | 1 | 0 | 2023-06-17 04:55:58.332000 | https://github.com/ruqizhang/low-precision-sgld | 5 | Low-Precision Stochastic Gradient Langevin Dynamics | https://scholar.google.com/scholar?cluster=5250731865302553140&hl=en&as_sdt=0,34 | 2 | 2022 |
| Expression might be enough: representing pressure and demand for reinforcement learning based traffic signal control | 10 | icml | 3 | 0 | 2023-06-17 04:55:58.545000 | https://github.com/LiangZhang1996/Advanced_XLight | 14 | Expression might be enough: representing pressure and demand for reinforcement learning based traffic signal control | https://scholar.google.com/scholar?cluster=995321608406249380&hl=en&as_sdt=0,33 | 1 | 2022 |
| Building Robust Ensembles via Margin Boosting | 8 | icml | 0 | 1 | 2023-06-17 04:55:58.751000 | https://github.com/zdhnarsil/margin-boosting | 7 | Building robust ensembles via margin boosting | https://scholar.google.com/scholar?cluster=13608655782211931186&hl=en&as_sdt=0,47 | 2 | 2022 |
| ROCK: Causal Inference Principles for Reasoning about Commonsense Causality | 2 | icml | 1 | 1 | 2023-06-17 04:55:58.958000 | https://github.com/zjiayao/ccr_rock | 7 | ROCK: Causal Inference Principles for Reasoning about Commonsense Causality | https://scholar.google.com/scholar?cluster=4757630172142505662&hl=en&as_sdt=0,41 | 1 | 2022 |
| PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance | 14 | icml | 4 | 1 | 2023-06-17 04:55:59.168000 | https://github.com/qingruzhang/platon | 21 | Platon: Pruning large transformer models with upper confidence bound of weight importance | https://scholar.google.com/scholar?cluster=17654209064614422018&hl=en&as_sdt=0,34 | 2 | 2022 |
| Learning from Counterfactual Links for Link Prediction | 31 | icml | 6 | 1 | 2023-06-17 04:55:59.378000 | https://github.com/DM2-ND/CFLP | 49 | Learning from counterfactual links for link prediction | https://scholar.google.com/scholar?cluster=12649708640262432051&hl=en&as_sdt=0,33 | 2 | 2022 |
| Certified Robustness Against Natural Language Attacks by Causal Intervention | 4 | icml | 3 | 0 | 2023-06-17 04:55:59.591000 | https://github.com/zhao-ht/convexcertify | 7 | Certified Robustness Against Natural Language Attacks by Causal Intervention | https://scholar.google.com/scholar?cluster=16167491038280669708&hl=en&as_sdt=0,10 | 1 | 2022 |
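Each row of the table pairs paper metadata (title, Google Scholar citations, venue, year) with GitHub repository statistics (forks, issues, stars, watchers, last-modified timestamp). A minimal stdlib sketch of how a row might be represented programmatically; the `PaperRepo` class name is our own, and the two sample rows are copied from the table above:

```python
# Model one table row as a typed record (class name `PaperRepo` is ours;
# the sample rows below are copied verbatim from the table).
from dataclasses import dataclass


@dataclass(frozen=True)
class PaperRepo:
    title: str
    citations_google_scholar: int
    conference: str
    forks: int
    issues: int
    lastModified: str
    repo_url: str
    stars: int
    title_google_scholar: str
    url_google_scholar: str
    watchers: int
    year: int


rows = [
    PaperRepo(
        "The CLRS Algorithmic Reasoning Benchmark", 15, "icml", 48, 4,
        "2023-06-17 04:55:40.552000", "https://github.com/deepmind/clrs", 304,
        "The CLRS algorithmic reasoning benchmark",
        "https://scholar.google.com/scholar?cluster=9181302241653376962&hl=en&as_sdt=0,5",
        13, 2022),
    PaperRepo(
        "Model soups: averaging weights of multiple fine-tuned models "
        "improves accuracy without increasing inference time",
        221, "icml", 21, 2, "2023-06-17 04:55:48.820000",
        "https://github.com/mlfoundations/model-soups", 236,
        "Model soups: averaging weights of multiple fine-tuned models "
        "improves accuracy without increasing inference time",
        "https://scholar.google.com/scholar?cluster=16922194924900565989&hl=en&as_sdt=0,5",
        10, 2022),
]

# Example query: the most-cited entry among the sampled rows.
most_cited = max(rows, key=lambda r: r.citations_google_scholar)
print(most_cited.repo_url)  # → https://github.com/mlfoundations/model-soups
```

Typed records like this make simple queries (sorting by stars, filtering by year or venue) straightforward without pulling in a dataframe library.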