| title (string, len 8-155) | citations_google_scholar (int64, 0-28.9k) | conference (string, 5 classes) | forks (int64, 0-46.3k) | issues (int64, 0-12.2k) | lastModified (string, len 19-26) | repo_url (string, len 26-130) | stars (int64, 0-75.9k) | title_google_scholar (string, len 8-155) | url_google_scholar (string, len 75-206) | watchers (int64, 0-2.77k) | year (int64, 2.02k-2.02k) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Cheap Orthogonal Constraints in Neural Networks: A Simple Parametrization of the Orthogonal and Unitary Group | 155 | icml | 19 | 1 | 2023-06-17 03:10:24.214000 | https://github.com/Lezcano/expRNN | 116 | Cheap orthogonal constraints in neural networks: A simple parametrization of the orthogonal and unitary group | https://scholar.google.com/scholar?cluster=17536814525953471769&hl=en&as_sdt=0,5 | 6 | 2019 |
| Are Generative Classifiers More Robust to Adversarial Attacks? | 85 | icml | 9 | 0 | 2023-06-17 03:10:24.429000 | https://github.com/deepgenerativeclassifier/DeepBayes | 22 | Are generative classifiers more robust to adversarial attacks? | https://scholar.google.com/scholar?cluster=10770378244624939531&hl=en&as_sdt=0,45 | 2 | 2019 |
| LGM-Net: Learning to Generate Matching Networks for Few-Shot Learning | 97 | icml | 24 | 3 | 2023-06-17 03:10:24.645000 | https://github.com/likesiwell/LGM-Net | 84 | LGM-Net: Learning to generate matching networks for few-shot learning | https://scholar.google.com/scholar?cluster=17373853660485197406&hl=en&as_sdt=0,5 | 4 | 2019 |
| NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks | 217 | icml | 14 | 0 | 2023-06-17 03:10:24.859000 | https://github.com/Cold-Winter/Nattack | 46 | Nattack: Learning the distributions of adversarial examples for an improved black-box attack on deep neural networks | https://scholar.google.com/scholar?cluster=1133340624710172210&hl=en&as_sdt=0,5 | 5 | 2019 |
| Bayesian Joint Spike-and-Slab Graphical Lasso | 20 | icml | 2 | 0 | 2023-06-17 03:10:25.075000 | https://github.com/richardli/SSJGL | 7 | Bayesian joint spike-and-slab graphical lasso | https://scholar.google.com/scholar?cluster=11980207298770096957&hl=en&as_sdt=0,48 | 4 | 2019 |
| Adversarial camera stickers: A physical camera-based attack on deep learning systems | 115 | icml | 2 | 0 | 2023-06-17 03:10:25.290000 | https://github.com/yoheikikuta/adversarial-camera-stickers | 8 | Adversarial camera stickers: A physical camera-based attack on deep learning systems | https://scholar.google.com/scholar?cluster=8454184380086098103&hl=en&as_sdt=0,33 | 3 | 2019 |
| Feature-Critic Networks for Heterogeneous Domain Generalization | 200 | icml | 10 | 9 | 2023-06-17 03:10:25.505000 | https://github.com/liyiying/Feature_Critic | 42 | Feature-critic networks for heterogeneous domain generalization | https://scholar.google.com/scholar?cluster=15160705294700481017&hl=en&as_sdt=0,33 | 5 | 2019 |
| Fast and Simple Natural-Gradient Variational Inference with Mixture of Exponential-family Approximations | 45 | icml | 6 | 0 | 2023-06-17 03:10:25.720000 | https://github.com/yorkerlin/VB-MixEF | 15 | Fast and simple natural-gradient variational inference with mixture of exponential-family approximations | https://scholar.google.com/scholar?cluster=9800018690635650774&hl=en&as_sdt=0,5 | 4 | 2019 |
| Acceleration of SVRG and Katyusha X by Inexact Preconditioning | 7 | icml | 3 | 0 | 2023-06-17 03:10:25.934000 | https://github.com/uclaopt/IPSVRG | 8 | Acceleration of svrg and katyusha x by inexact preconditioning | https://scholar.google.com/scholar?cluster=13059368819279986289&hl=en&as_sdt=0,45 | 5 | 2019 |
| Rao-Blackwellized Stochastic Gradients for Discrete Distributions | 33 | icml | 3 | 0 | 2023-06-17 03:10:26.161000 | https://github.com/Runjing-Liu120/RaoBlackwellizedSGD | 22 | Rao-Blackwellized stochastic gradients for discrete distributions | https://scholar.google.com/scholar?cluster=12116217648667930393&hl=en&as_sdt=0,23 | 2 | 2019 |
| Understanding and Accelerating Particle-Based Variational Inference | 75 | icml | 5 | 0 | 2023-06-17 03:10:26.375000 | https://github.com/chang-ml-thu/AWGF | 17 | Understanding and accelerating particle-based variational inference | https://scholar.google.com/scholar?cluster=7410249710967287826&hl=en&as_sdt=0,11 | 2 | 2019 |
| Understanding MCMC Dynamics as Flows on the Wasserstein Space | 21 | icml | 4 | 0 | 2023-06-17 03:10:26.592000 | https://github.com/chang-ml-thu/FGH-flow | 11 | Understanding mcmc dynamics as flows on the wasserstein space | https://scholar.google.com/scholar?cluster=16148000850438563191&hl=en&as_sdt=0,14 | 3 | 2019 |
| Sliced-Wasserstein Flows: Nonparametric Generative Modeling via Optimal Transport and Diffusions | 98 | icml | 5 | 1 | 2023-06-17 03:10:26.808000 | https://github.com/aliutkus/swf | 14 | Sliced-Wasserstein flows: Nonparametric generative modeling via optimal transport and diffusions | https://scholar.google.com/scholar?cluster=7685202431169756099&hl=en&as_sdt=0,5 | 6 | 2019 |
| CoT: Cooperative Training for Generative Modeling of Discrete Data | 24 | icml | 28 | 1 | 2023-06-17 03:10:27.023000 | https://github.com/desire2020/Cooperative-Training | 75 | Cot: Cooperative training for generative modeling of discrete data | https://scholar.google.com/scholar?cluster=4231322493080735140&hl=en&as_sdt=0,10 | 11 | 2019 |
| High-Fidelity Image Generation With Fewer Labels | 145 | icml | 322 | 16 | 2023-06-17 03:10:27.237000 | https://github.com/google/compare_gan | 1,814 | High-fidelity image generation with fewer labels | https://scholar.google.com/scholar?cluster=13622749687496052538&hl=en&as_sdt=0,11 | 52 | 2019 |
| Variational Implicit Processes | 57 | icml | 3 | 0 | 2023-06-17 03:10:27.452000 | https://github.com/LaurantChao/VIP | 8 | Variational implicit processes | https://scholar.google.com/scholar?cluster=11479270094313825180&hl=en&as_sdt=0,6 | 1 | 2019 |
| EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE | 110 | icml | 15 | 0 | 2023-06-17 03:10:27.667000 | https://github.com/microsoft/EDDI | 37 | Eddi: Efficient dynamic discovery of high-value information with partial vae | https://scholar.google.com/scholar?cluster=7932877212524867960&hl=en&as_sdt=0,5 | 8 | 2019 |
| Guided evolutionary strategies: augmenting random search with surrogate gradients | 79 | icml | 25 | 2 | 2023-06-17 03:10:27.882000 | https://github.com/brain-research/guided-evolutionary-strategies | 262 | Guided evolutionary strategies: Augmenting random search with surrogate gradients | https://scholar.google.com/scholar?cluster=13097058951649931158&hl=en&as_sdt=0,5 | 15 | 2019 |
| Adversarial Generation of Time-Frequency Features with application in audio synthesis | 70 | icml | 13 | 5 | 2023-06-17 03:10:28.096000 | https://github.com/tifgan/stftGAN | 105 | Adversarial generation of time-frequency features with application in audio synthesis | https://scholar.google.com/scholar?cluster=7293234438017145749&hl=en&as_sdt=0,5 | 7 | 2019 |
| Decomposing feature-level variation with Covariate Gaussian Process Latent Variable Models | 12 | icml | 4 | 0 | 2023-06-17 03:10:28.311000 | https://github.com/kasparmartens/c-GPLVM | 24 | Decomposing feature-level variation with covariate Gaussian process latent variable models | https://scholar.google.com/scholar?cluster=3291712378520398367&hl=en&as_sdt=0,15 | 2 | 2019 |
| Disentangling Disentanglement in Variational Autoencoders | 232 | icml | 13 | 0 | 2023-06-17 03:10:28.525000 | https://github.com/iffsid/disentangling-disentanglement | 86 | Disentangling disentanglement in variational autoencoders | https://scholar.google.com/scholar?cluster=4865252587822770331&hl=en&as_sdt=0,5 | 15 | 2019 |
| Efficient Amortised Bayesian Inference for Hierarchical and Nonlinear Dynamical Systems | 18 | icml | 18 | 0 | 2023-06-17 03:10:28.739000 | https://github.com/Microsoft/vi-hds | 45 | Efficient amortised Bayesian inference for hierarchical and nonlinear dynamical systems | https://scholar.google.com/scholar?cluster=1247732699292692241&hl=en&as_sdt=0,44 | 8 | 2019 |
| Toward Controlling Discrimination in Online Ad Auctions | 48 | icml | 4 | 0 | 2023-06-17 03:10:28.956000 | https://github.com/AnayMehrotra/Fair-Online-Advertising | 2 | Toward controlling discrimination in online ad auctions | https://scholar.google.com/scholar?cluster=3881350113786532991&hl=en&as_sdt=0,43 | 2 | 2019 |
| Imputing Missing Events in Continuous-Time Event Streams | 31 | icml | 17 | 2 | 2023-06-17 03:10:29.192000 | https://github.com/HMEIatJHU/neural-hawkes-particle-smoothing | 41 | Imputing missing events in continuous-time event streams | https://scholar.google.com/scholar?cluster=8012453208848277577&hl=en&as_sdt=0,14 | 4 | 2019 |
| Ithemal: Accurate, Portable and Fast Basic Block Throughput Estimation using Deep Neural Networks | 125 | icml | 28 | 12 | 2023-06-17 03:10:29.407000 | https://github.com/psg-mit/Ithemal | 139 | Ithemal: Accurate, portable and fast basic block throughput estimation using deep neural networks | https://scholar.google.com/scholar?cluster=6452183013544894818&hl=en&as_sdt=0,33 | 14 | 2019 |
| On Dropout and Nuclear Norm Regularization | 19 | icml | 2 | 0 | 2023-06-17 03:10:29.621000 | https://github.com/r3831/dln_dropout | 3 | On dropout and nuclear norm regularization | https://scholar.google.com/scholar?cluster=2540515501706995243&hl=en&as_sdt=0,40 | 2 | 2019 |
| Flat Metric Minimization with Applications in Generative Modeling | 3 | icml | 4 | 0 | 2023-06-17 03:10:29.836000 | https://github.com/moellenh/flatgan | 18 | Flat metric minimization with applications in generative modeling | https://scholar.google.com/scholar?cluster=16621113036066180234&hl=en&as_sdt=0,33 | 2 | 2019 |
| Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization | 113 | icml | 14 | 0 | 2023-06-17 03:10:30.050000 | https://github.com/snu-mllab/parsimonious-blackbox-attack | 35 | Parsimonious black-box adversarial attacks via efficient combinatorial optimization | https://scholar.google.com/scholar?cluster=16009538798728740698&hl=en&as_sdt=0,5 | 6 | 2019 |
| Relational Pooling for Graph Representations | 185 | icml | 7 | 0 | 2023-06-17 03:10:30.265000 | https://github.com/PurdueMINDS/RelationalPooling | 34 | Relational pooling for graph representations | https://scholar.google.com/scholar?cluster=6145744994249893945&hl=en&as_sdt=0,31 | 6 | 2019 |
| A Wrapped Normal Distribution on Hyperbolic Space for Gradient-Based Learning | 81 | icml | 6 | 0 | 2023-06-17 03:10:30.481000 | https://github.com/pfnet-research/hyperbolic_wrapped_distribution | 28 | A wrapped normal distribution on hyperbolic space for gradient-based learning | https://scholar.google.com/scholar?cluster=11277639546038701066&hl=en&as_sdt=0,47 | 22 | 2019 |
| Dropout as a Structured Shrinkage Prior | 39 | icml | 5 | 0 | 2023-06-17 03:10:30.697000 | https://github.com/enalisnick/dropout_icml2019 | 8 | Dropout as a structured shrinkage prior | https://scholar.google.com/scholar?cluster=16208195687877220296&hl=en&as_sdt=0,44 | 1 | 2019 |
| Zero-Shot Knowledge Distillation in Deep Networks | 182 | icml | 7 | 0 | 2023-06-17 03:10:30.910000 | https://github.com/vcl-iisc/ZSKD | 59 | Zero-shot knowledge distillation in deep networks | https://scholar.google.com/scholar?cluster=6513271489867205724&hl=en&as_sdt=0,23 | 7 | 2019 |
| Safe Grid Search with Optimal Complexity | 41 | icml | 3 | 0 | 2023-06-17 03:10:31.127000 | https://github.com/EugeneNdiaye/safe_grid_search | 7 | Safe grid search with optimal complexity | https://scholar.google.com/scholar?cluster=1378644094816844028&hl=en&as_sdt=0,5 | 5 | 2019 |
| Rotation Invariant Householder Parameterization for Bayesian PCA | 9 | icml | 3 | 10 | 2023-06-17 03:10:31.343000 | https://github.com/RSNirwan/HouseholderBPCA | 13 | Rotation invariant householder parameterization for Bayesian PCA | https://scholar.google.com/scholar?cluster=6089302904183911614&hl=en&as_sdt=0,41 | 5 | 2019 |
| Training Neural Networks with Local Error Signals | 169 | icml | 34 | 3 | 2023-06-17 03:10:31.558000 | https://github.com/anokland/local-loss | 150 | Training neural networks with local error signals | https://scholar.google.com/scholar?cluster=11332176056919584070&hl=en&as_sdt=0,5 | 10 | 2019 |
| Remember and Forget for Experience Replay | 80 | icml | 44 | 6 | 2023-06-17 03:10:31.777000 | https://github.com/cselab/smarties | 103 | Remember and forget for experience replay | https://scholar.google.com/scholar?cluster=13050806613216384530&hl=en&as_sdt=0,5 | 11 | 2019 |
| Learning to Infer Program Sketches | 92 | icml | 10 | 6 | 2023-06-17 03:10:31.992000 | https://github.com/mtensor/neural_sketch | 21 | Learning to infer program sketches | https://scholar.google.com/scholar?cluster=17303764643585588375&hl=en&as_sdt=0,23 | 8 | 2019 |
| Counterfactual Off-Policy Evaluation with Gumbel-Max Structural Causal Models | 118 | icml | 10 | 0 | 2023-06-17 03:10:32.207000 | https://github.com/clinicalml/gumbel-max-scm | 39 | Counterfactual off-policy evaluation with gumbel-max structural causal models | https://scholar.google.com/scholar?cluster=3302653893277553179&hl=en&as_sdt=0,26 | 19 | 2019 |
| Orthogonal Random Forest for Causal Inference | 79 | icml | 614 | 301 | 2023-06-17 03:10:32.422000 | https://github.com/Microsoft/EconML | 3,004 | Orthogonal random forest for causal inference | https://scholar.google.com/scholar?cluster=1871181716543524277&hl=en&as_sdt=0,5 | 70 | 2019 |
| Inferring Heterogeneous Causal Effects in Presence of Spatial Confounding | 8 | icml | 2 | 0 | 2023-06-17 03:10:32.638000 | https://github.com/Muhammad-Osama/Inferring-Heterogeneous-Causal-Effects-in-Presence-of-Spatial-Confounding | 1 | Inferring heterogeneous causal effects in presence of spatial confounding | https://scholar.google.com/scholar?cluster=14041258610340953811&hl=en&as_sdt=0,5 | 0 | 2019 |
| Improving Adversarial Robustness via Promoting Ensemble Diversity | 348 | icml | 13 | 2 | 2023-06-17 03:10:32.853000 | https://github.com/P2333/Adaptive-Diversity-Promoting | 60 | Improving adversarial robustness via promoting ensemble diversity | https://scholar.google.com/scholar?cluster=16568032932303177237&hl=en&as_sdt=0,33 | 3 | 2019 |
| Nonparametric Bayesian Deep Networks with Local Competition | 30 | icml | 1 | 0 | 2023-06-17 03:10:33.067000 | https://github.com/konpanousis/SB-LWTA | 6 | Nonparametric Bayesian deep networks with local competition | https://scholar.google.com/scholar?cluster=6949349876007421452&hl=en&as_sdt=0,5 | 1 | 2019 |
| Deep Residual Output Layers for Neural Language Generation | 7 | icml | 3 | 0 | 2023-06-17 03:10:33.282000 | https://github.com/idiap/drill | 10 | Deep residual output layers for neural language generation | https://scholar.google.com/scholar?cluster=6336276005436023906&hl=en&as_sdt=0,1 | 8 | 2019 |
| Self-Supervised Exploration via Disagreement | 272 | icml | 23 | 3 | 2023-06-17 03:10:33.496000 | https://github.com/pathak22/exploration-by-disagreement | 120 | Self-supervised exploration via disagreement | https://scholar.google.com/scholar?cluster=13780996231531586358&hl=en&as_sdt=0,47 | 4 | 2019 |
| Domain Agnostic Learning with Disentangled Representations | 194 | icml | 28 | 3 | 2023-06-17 03:10:33.710000 | https://github.com/VisionLearningGroup/DAL | 133 | Domain agnostic learning with disentangled representations | https://scholar.google.com/scholar?cluster=10085135045247935679&hl=en&as_sdt=0,33 | 8 | 2019 |
| Temporal Gaussian Mixture Layer for Videos | 77 | icml | 14 | 5 | 2023-06-17 03:10:33.924000 | https://github.com/piergiaj/tgm-icml19 | 99 | Temporal gaussian mixture layer for videos | https://scholar.google.com/scholar?cluster=7515216755463628280&hl=en&as_sdt=0,47 | 5 | 2019 |
| AutoVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss | 343 | icml | 95 | 8 | 2023-06-17 03:10:34.139000 | https://github.com/liusongxiang/StarGAN-Voice-Conversion | 460 | Autovc: Zero-shot voice style transfer with only autoencoder loss | https://scholar.google.com/scholar?cluster=16861313448156905141&hl=en&as_sdt=0,33 | 20 | 2019 |
| On the Spectral Bias of Neural Networks | 653 | icml | 18 | 0 | 2023-06-17 03:10:34.355000 | https://github.com/nasimrahaman/SpectralBias | 87 | On the spectral bias of neural networks | https://scholar.google.com/scholar?cluster=6023723620228240592&hl=en&as_sdt=0,5 | 5 | 2019 |
| Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables | 487 | icml | 116 | 10 | 2023-06-17 03:10:34.571000 | https://github.com/katerakelly/oyster | 422 | Efficient off-policy meta-reinforcement learning via probabilistic context variables | https://scholar.google.com/scholar?cluster=15379570585451726919&hl=en&as_sdt=0,31 | 22 | 2019 |
| Topological Data Analysis of Decision Boundaries with Application to Model Selection | 45 | icml | 5 | 1 | 2023-06-17 03:10:34.786000 | https://github.com/nrkarthikeyan/topology-decision-boundaries | 25 | Topological data analysis of decision boundaries with application to model selection | https://scholar.google.com/scholar?cluster=16310684424372533537&hl=en&as_sdt=0,47 | 3 | 2019 |
| Do ImageNet Classifiers Generalize to ImageNet? | 1,011 | icml | 19 | 3 | 2023-06-17 03:10:35.002000 | https://github.com/modestyachts/ImageNetV2 | 200 | Do imagenet classifiers generalize to imagenet? | https://scholar.google.com/scholar?cluster=9642974458829870490&hl=en&as_sdt=0,5 | 9 | 2019 |
| Separating value functions across time-scales | 20 | icml | 8 | 1 | 2023-06-17 03:10:35.217000 | https://github.com/facebookresearch/td-delta | 17 | Separating value functions across time-scales | https://scholar.google.com/scholar?cluster=4770640199000017982&hl=en&as_sdt=0,23 | 5 | 2019 |
| The Odds are Odd: A Statistical Test for Detecting Adversarial Examples | 162 | icml | 12 | 4 | 2023-06-17 03:10:35.432000 | https://github.com/yk/icml19_public | 21 | The odds are odd: A statistical test for detecting adversarial examples | https://scholar.google.com/scholar?cluster=6673355422445965167&hl=en&as_sdt=0,10 | 2 | 2019 |
| A Contrastive Divergence for Combining Variational Inference and MCMC | 69 | icml | 7 | 0 | 2023-06-17 03:10:35.647000 | https://github.com/franrruiz/vcd_divergence | 27 | A contrastive divergence for combining variational inference and mcmc | https://scholar.google.com/scholar?cluster=10765853948406678619&hl=en&as_sdt=0,43 | 5 | 2019 |
| Plug-and-Play Methods Provably Converge with Properly Trained Denoisers | 243 | icml | 18 | 4 | 2023-06-17 03:10:35.863000 | https://github.com/uclaopt/Provable_Plug_and_Play | 60 | Plug-and-play methods provably converge with properly trained denoisers | https://scholar.google.com/scholar?cluster=11121192984446474149&hl=en&as_sdt=0,5 | 6 | 2019 |
| Deep Gaussian Processes with Importance-Weighted Variational Inference | 43 | icml | 4 | 1 | 2023-06-17 03:10:36.078000 | https://github.com/hughsalimbeni/DGPs_with_IWVI | 36 | Deep Gaussian processes with importance-weighted variational inference | https://scholar.google.com/scholar?cluster=17591045211502754804&hl=en&as_sdt=0,11 | 4 | 2019 |
| Exploration Conscious Reinforcement Learning Revisited | 11 | icml | 4 | 0 | 2023-06-17 03:10:36.293000 | https://github.com/shanlior/ExplorationConsciousRL | 6 | Exploration conscious reinforcement learning revisited | https://scholar.google.com/scholar?cluster=2069086734091208368&hl=en&as_sdt=0,5 | 2 | 2019 |
| Mixture Models for Diverse Machine Translation: Tricks of the Trade | 101 | icml | 5,878 | 1,031 | 2023-06-17 03:10:36.509000 | https://github.com/pytorch/fairseq | 26,482 | Mixture models for diverse machine translation: Tricks of the trade | https://scholar.google.com/scholar?cluster=10713606322116851955&hl=en&as_sdt=0,50 | 411 | 2019 |
| Replica Conditional Sequential Monte Carlo | 2 | icml | 1 | 0 | 2023-06-17 03:10:36.723000 | https://github.com/ayshestopaloff/replicacsmc | 2 | Replica Conditional Sequential Monte Carlo | https://scholar.google.com/scholar?cluster=8937563905514647283&hl=en&as_sdt=0,36 | 0 | 2019 |
| Scalable Training of Inference Networks for Gaussian-Process Models | 19 | icml | 4 | 1 | 2023-06-17 03:10:36.938000 | https://github.com/thjashin/gp-infer-net | 41 | Scalable training of inference networks for gaussian-process models | https://scholar.google.com/scholar?cluster=18315311533765480343&hl=en&as_sdt=0,5 | 4 | 2019 |
| Model-Based Active Exploration | 157 | icml | 16 | 1 | 2023-06-17 03:10:37.167000 | https://github.com/nnaisense/max | 72 | Model-based active exploration | https://scholar.google.com/scholar?cluster=4949040749673510686&hl=en&as_sdt=0,5 | 5 | 2019 |
| First-Order Adversarial Vulnerability of Neural Networks and Input Dimension | 127 | icml | 6 | 0 | 2023-06-17 03:10:37.382000 | https://github.com/facebookresearch/AdversarialAndDimensionality | 16 | First-order adversarial vulnerability of neural networks and input dimension | https://scholar.google.com/scholar?cluster=577929050796401765&hl=en&as_sdt=0,36 | 4 | 2019 |
| Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation | 42 | icml | 3 | 0 | 2023-06-17 03:10:37.597000 | https://github.com/singlasahil14/CASO | 13 | Understanding impacts of high-order loss approximations and features in deep learning interpretation | https://scholar.google.com/scholar?cluster=17624808507201697872&hl=en&as_sdt=0,22 | 3 | 2019 |
| GEOMetrics: Exploiting Geometric Structure for Graph-Encoded Objects | 90 | icml | 12 | 1 | 2023-06-17 03:10:37.814000 | https://github.com/EdwardSmith1884/GEOMetrics | 117 | Geometrics: Exploiting geometric structure for graph-encoded objects | https://scholar.google.com/scholar?cluster=15300382945837912303&hl=en&as_sdt=0,5 | 9 | 2019 |
| The Evolved Transformer | 420 | icml | 3,290 | 589 | 2023-06-17 03:10:38.029000 | https://github.com/tensorflow/tensor2tensor | 13,764 | The evolved transformer | https://scholar.google.com/scholar?cluster=12069106626021161148&hl=en&as_sdt=0,38 | 461 | 2019 |
| QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning | 540 | icml | 15 | 3 | 2023-06-17 03:10:38.244000 | https://github.com/Sonkyunghwan/QTRAN | 64 | Qtran: Learning to factorize with transformation for cooperative multi-agent reinforcement learning | https://scholar.google.com/scholar?cluster=8081563128599106489&hl=en&as_sdt=0,44 | 1 | 2019 |
| Revisiting the Softmax Bellman Operator: New Benefits and New Perspective | 46 | icml | 3 | 0 | 2023-06-17 03:10:38.461000 | https://github.com/zhao-song/Softmax-DQN | 6 | Revisiting the softmax bellman operator: New benefits and new perspective | https://scholar.google.com/scholar?cluster=12009633864988483522&hl=en&as_sdt=0,39 | 1 | 2019 |
| MASS: Masked Sequence to Sequence Pre-training for Language Generation | 910 | icml | 209 | 67 | 2023-06-17 03:10:38.676000 | https://github.com/microsoft/MASS | 1,103 | Mass: Masked sequence to sequence pre-training for language generation | https://scholar.google.com/scholar?cluster=9265562426073523323&hl=en&as_sdt=0,26 | 37 | 2019 |
| Compressing Gradient Optimizers via Count-Sketches | 29 | icml | 13 | 0 | 2023-06-17 03:10:38.891000 | https://github.com/rdspring1/Count-Sketch-Optimizers | 26 | Compressing gradient optimizers via count-sketches | https://scholar.google.com/scholar?cluster=1104222702149426557&hl=en&as_sdt=0,5 | 4 | 2019 |
| BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning | 192 | icml | 25 | 1 | 2023-06-17 03:10:39.107000 | https://github.com/AsaCooperStickland/Bert-n-Pals | 74 | Bert and pals: Projected attention layers for efficient adaptation in multi-task learning | https://scholar.google.com/scholar?cluster=3136454913064441910&hl=en&as_sdt=0,38 | 3 | 2019 |
| Provably Efficient Imitation Learning from Observation Alone | 82 | icml | 5 | 0 | 2023-06-17 03:10:39.322000 | https://github.com/wensun/Imitation-Learning-from-Observation | 20 | Provably efficient imitation learning from observation alone | https://scholar.google.com/scholar?cluster=12068954688266237988&hl=en&as_sdt=0,5 | 3 | 2019 |
| Hyperbolic Disk Embeddings for Directed Acyclic Graphs | 41 | icml | 5 | 0 | 2023-06-17 03:10:39.537000 | https://github.com/lapras-inc/disk-embedding | 17 | Hyperbolic disk embeddings for directed acyclic graphs | https://scholar.google.com/scholar?cluster=15999788633415414766&hl=en&as_sdt=0,34 | 18 | 2019 |
| Equivariant Transformer Networks | 67 | icml | 7 | 1 | 2023-06-17 03:10:39.760000 | https://github.com/stanford-futuredata/equivariant-transformers | 82 | Equivariant transformer networks | https://scholar.google.com/scholar?cluster=740882376854558881&hl=en&as_sdt=0,36 | 10 | 2019 |
| Correlated Variational Auto-Encoders | 19 | icml | 4 | 0 | 2023-06-17 03:10:39.976000 | https://github.com/datang1992/Correlated-VAEs | 14 | Correlated variational auto-encoders | https://scholar.google.com/scholar?cluster=14520356175099829641&hl=en&as_sdt=0,33 | 5 | 2019 |
| The Variational Predictive Natural Gradient | 3 | icml | 1 | 0 | 2023-06-17 03:10:40.191000 | https://github.com/datang1992/VPNG | 8 | The variational predictive natural gradient | https://scholar.google.com/scholar?cluster=6073859204913275725&hl=en&as_sdt=0,47 | 2 | 2019 |
| Adaptive Neural Trees | 151 | icml | 22 | 2 | 2023-06-17 03:10:40.406000 | https://github.com/rtanno21609/AdaptiveNeuralTrees | 140 | Adaptive neural trees | https://scholar.google.com/scholar?cluster=10252139245277017232&hl=en&as_sdt=0,20 | 8 | 2019 |
| Combating Label Noise in Deep Learning using Abstention | 146 | icml | 9 | 6 | 2023-06-17 03:10:40.621000 | https://github.com/thulas/dac-label-noise | 56 | Combating label noise in deep learning using abstention | https://scholar.google.com/scholar?cluster=13352196764325122860&hl=en&as_sdt=0,5 | 5 | 2019 |
| ELF OpenGo: an analysis and open reimplementation of AlphaZero | 101 | icml | 577 | 44 | 2023-06-17 03:10:40.836000 | https://github.com/pytorch/ELF | 3,316 | Elf opengo: An analysis and open reimplementation of alphazero | https://scholar.google.com/scholar?cluster=9736512126040760893&hl=en&as_sdt=0,5 | 191 | 2019 |
| Metropolis-Hastings Generative Adversarial Networks | 85 | icml | 24 | 4 | 2023-06-17 03:10:41.051000 | https://github.com/uber-research/metropolis-hastings-gans | 112 | Metropolis-hastings generative adversarial networks | https://scholar.google.com/scholar?cluster=18080915212804537296&hl=en&as_sdt=0,26 | 7 | 2019 |
| Model Comparison for Semantic Grouping | 1 | icml | 4 | 0 | 2023-06-17 03:10:41.266000 | https://github.com/Babylonpartners/MCSG | 8 | Model comparison for semantic grouping | https://scholar.google.com/scholar?cluster=18345833118099808380&hl=en&as_sdt=0,5 | 10 | 2019 |
| Manifold Mixup: Better Representations by Interpolating Hidden States | 889 | icml | 65 | 8 | 2023-06-17 03:10:41.482000 | https://github.com/vikasverma1077/manifold_mixup | 457 | Manifold mixup: Better representations by interpolating hidden states | https://scholar.google.com/scholar?cluster=5005853392111011711&hl=en&as_sdt=0,15 | 12 | 2019 |
| Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation | 60 | icml | 6 | 2 | 2023-06-17 03:10:41.697000 | https://github.com/RuohanW/RED | 28 | Random expert distillation: Imitation learning via expert policy support estimation | https://scholar.google.com/scholar?cluster=2838461363780817206&hl=en&as_sdt=0,44 | 2 | 2019 |
| Improving Neural Language Modeling via Adversarial Training | 93 | icml | 3 | 3 | 2023-06-17 03:10:41.913000 | https://github.com/ChengyueGongR/advsoft | 40 | Improving neural language modeling via adversarial training | https://scholar.google.com/scholar?cluster=13673209609848344447&hl=en&as_sdt=0,39 | 3 | 2019 |
| EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis | 84 | icml | 18 | 1 | 2023-06-17 03:10:42.127000 | https://github.com/alecwangcq/EigenDamage-Pytorch | 108 | Eigendamage: Structured pruning in the kronecker-factored eigenbasis | https://scholar.google.com/scholar?cluster=15048467937573583684&hl=en&as_sdt=0,48 | 5 | 2019 |
| Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions | 66 | icml | 2 | 1 | 2023-06-17 03:10:42.342000 | https://github.com/ustunb/ctfdist | 10 | Repairing without retraining: Avoiding disparate impact with counterfactual distributions | https://scholar.google.com/scholar?cluster=16561986856093629430&hl=en&as_sdt=0,5 | 5 | 2019 |
| Non-Monotonic Sequential Text Generation | 105 | icml | 11 | 2 | 2023-06-17 03:10:42.557000 | https://github.com/wellecks/nonmonotonic_text | 73 | Non-monotonic sequential text generation | https://scholar.google.com/scholar?cluster=16018486661840997659&hl=en&as_sdt=0,5 | 7 | 2019 |
| Learning deep kernels for exponential family densities | 70 | icml | 2 | 0 | 2023-06-17 03:10:42.791000 | https://github.com/kevin-w-li/deep-kexpfam | 22 | Learning deep kernels for exponential family densities | https://scholar.google.com/scholar?cluster=18438114656627425154&hl=en&as_sdt=0,43 | 3 | 2019 |
| Partially Exchangeable Networks and Architectures for Learning Summary Statistics in Approximate Bayesian Computation | 28 | icml | 1 | 0 | 2023-06-17 03:10:43.006000 | https://github.com/SamuelWiqvist/PENs-and-ABC | 5 | Partially exchangeable networks and architectures for learning summary statistics in approximate Bayesian computation | https://scholar.google.com/scholar?cluster=16942332521272083058&hl=en&as_sdt=0,44 | 5 | 2019 |
| Wasserstein Adversarial Examples via Projected Sinkhorn Iterations | 197 | icml | 13 | 1 | 2023-06-17 03:10:43.223000 | https://github.com/locuslab/projected_sinkhorn | 86 | Wasserstein adversarial examples via projected sinkhorn iterations | https://scholar.google.com/scholar?cluster=4087808921541648707&hl=en&as_sdt=0,33 | 7 | 2019 |
| Learning a Compressed Sensing Measurement Matrix via Gradient Unrolling | 55 | icml | 5 | 1 | 2023-06-17 03:10:43.439000 | https://github.com/wushanshan/L1AE | 18 | Learning a compressed sensing measurement matrix via gradient unrolling | https://scholar.google.com/scholar?cluster=7047806265254435189&hl=en&as_sdt=0,5 | 4 | 2019 |
| Simplifying Graph Convolutional Networks | 2,063 | icml | 146 | 1 | 2023-06-17 03:10:43.654000 | https://github.com/Tiiiger/SGC | 766 | Simplifying graph convolutional networks | https://scholar.google.com/scholar?cluster=17348071344751182786&hl=en&as_sdt=0,23 | 19 | 2019 |
| Zeno: Distributed Stochastic Gradient Descent with Suspicion-based Fault-tolerance | 158 | icml | 5 | 0 | 2023-06-17 03:10:43.870000 | https://github.com/xcgoner/icml2019_zeno | 13 | Zeno: Distributed stochastic gradient descent with suspicion-based fault-tolerance | https://scholar.google.com/scholar?cluster=10331500453771682409&hl=en&as_sdt=0,14 | 2 | 2019 |
| Differentiable Linearized ADMM | 53 | icml | 9 | 0 | 2023-06-17 03:10:44.085000 | https://github.com/zzs1994/D-LADMM | 27 | Differentiable linearized ADMM | https://scholar.google.com/scholar?cluster=7429496083508800871&hl=en&as_sdt=0,41 | 4 | 2019 |
| Gromov-Wasserstein Learning for Graph Matching and Node Embedding | 181 | icml | 17 | 0 | 2023-06-17 03:10:44.301000 | https://github.com/HongtengXu/gwl | 63 | Gromov-wasserstein learning for graph matching and node embedding | https://scholar.google.com/scholar?cluster=17323824579705471287&hl=en&as_sdt=0,10 | 5 | 2019 |
| Supervised Hierarchical Clustering with Exponential Linkage | 27 | icml | 6 | 0 | 2023-06-17 03:10:44.517000 | https://github.com/iesl/expLinkage | 9 | Supervised hierarchical clustering with exponential linkage | https://scholar.google.com/scholar?cluster=14591272843062718088&hl=en&as_sdt=0,5 | 12 | 2019 |
| Learning to Prove Theorems via Interacting with Proof Assistants | 79 | icml | 46 | 0 | 2023-06-17 03:10:44.731000 | https://github.com/princeton-vl/CoqGym | 319 | Learning to prove theorems via interacting with proof assistants | https://scholar.google.com/scholar?cluster=14925207938076962028&hl=en&as_sdt=0,4 | 17 | 2019 |
| ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation | 145 | icml | 10 | 0 | 2023-06-17 03:10:44.946000 | https://github.com/YyzHarry/ME-Net | 51 | Me-net: Towards effective adversarial robustness with matrix estimation | https://scholar.google.com/scholar?cluster=15543482510654180189&hl=en&as_sdt=0,34 | 3 | 2019 |
| Hierarchically Structured Meta-learning | 197 | icml | 13 | 1 | 2023-06-17 03:10:45.197000 | https://github.com/huaxiuyao/HSML | 48 | Hierarchically structured meta-learning | https://scholar.google.com/scholar?cluster=3487980416117206371&hl=en&as_sdt=0,31 | 5 | 2019 |
| Rademacher Complexity for Adversarially Robust Generalization | 234 | icml | 1 | 0 | 2023-06-17 03:10:45.412000 | https://github.com/dongyin92/adversarially-robust-generalization | 9 | Rademacher complexity for adversarially robust generalization | https://scholar.google.com/scholar?cluster=3771850404643054723&hl=en&as_sdt=0,5 | 1 | 2019 |
| ARSM: Augment-REINFORCE-Swap-Merge Estimator for Gradient Backpropagation Through Categorical Variables | 24 | icml | 10 | 0 | 2023-06-17 03:10:45.628000 | https://github.com/ARM-gradient/ARSM | 18 | ARSM: Augment-REINFORCE-swap-merge estimator for gradient backpropagation through categorical variables | https://scholar.google.com/scholar?cluster=18117321206953712314&hl=en&as_sdt=0,5 | 1 | 2019 |
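
For anyone working with this table programmatically, the sketch below shows one way to load and query it with pandas. It is a minimal illustration, not part of the dataset itself: the file name `icml2019_repos.csv` is a hypothetical placeholder for a local export, and only the column names from the schema above are assumed.

```python
import pandas as pd

# Load the table; "icml2019_repos.csv" is a hypothetical placeholder for
# a local export of this dataset. thousands="," handles counts rendered
# with separators, such as "1,814".
df = pd.read_csv("icml2019_repos.csv", thousands=",")

# Parse the lastModified column into proper timestamps.
df["lastModified"] = pd.to_datetime(df["lastModified"])

# Example query: the ten most-starred repositories together with their
# Google Scholar citation counts.
top_starred = (
    df.sort_values("stars", ascending=False)
      .loc[:, ["title", "repo_url", "stars", "citations_google_scholar"]]
      .head(10)
)
print(top_starred.to_string(index=False))
```

Sorting by `citations_google_scholar` instead would surface the most-cited papers rather than the most-starred repositories.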