| title (string, length 8-155) | citations_google_scholar (int64, 0-28.9k) | conference (string, 5 classes) | forks (int64, 0-46.3k) | issues (int64, 0-12.2k) | lastModified (string, length 19-26) | repo_url (string, length 26-130) | stars (int64, 0-75.9k) | title_google_scholar (string, length 8-155) | url_google_scholar (string, length 75-206) | watchers (int64, 0-2.77k) | year (int64, 2.02k-2.02k) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Interpretable, Multidimensional, Multimodal Anomaly Detection with Negative Sampling for Detection of Device Failure | 45 | icml | 21 | 1 | 2023-06-17 03:57:39.829000 | https://github.com/google/madi | 62 | Interpretable, multidimensional, multimodal anomaly detection with negative sampling for detection of device failure | https://scholar.google.com/scholar?cluster=3739930474828740815&hl=en&as_sdt=0,33 | 10 | 2020 |
| Multiclass Neural Network Minimization via Tropical Newton Polytope Approximation | 10 | icml | 0 | 1 | 2023-06-17 03:57:40.031000 | https://github.com/GeorgiosSmyrnis/multiclass_minimization_icml2020 | 1 | Multiclass neural network minimization via tropical newton polytope approximation | https://scholar.google.com/scholar?cluster=2547708256108168456&hl=en&as_sdt=0,31 | 2 | 2020 |
| Bridging the Gap Between f-GANs and Wasserstein GANs | 36 | icml | 4 | 0 | 2023-06-17 03:57:40.234000 | https://github.com/ermongroup/f-wgan | 14 | Bridging the gap between f-gans and wasserstein gans | https://scholar.google.com/scholar?cluster=15572821134317773979&hl=en&as_sdt=0,44 | 6 | 2020 |
| Hypernetwork approach to generating point clouds | 25 | icml | 4 | 1 | 2023-06-17 03:57:40.435000 | https://github.com/gmum/3d-point-clouds-HyperCloud | 26 | Hypernetwork approach to generating point clouds | https://scholar.google.com/scholar?cluster=1381462816428622645&hl=en&as_sdt=0,10 | 7 | 2020 |
| Which Tasks Should Be Learned Together in Multi-task Learning? | 333 | icml | 13 | 7 | 2023-06-17 03:57:40.637000 | https://github.com/tstandley/taskgrouping | 89 | Which tasks should be learned together in multi-task learning? | https://scholar.google.com/scholar?cluster=11792880914150945674&hl=en&as_sdt=0,5 | 2 | 2020 |
| Learning Discrete Structured Representations by Adversarially Maximizing Mutual Information | 8 | icml | 1 | 0 | 2023-06-17 03:57:40.839000 | https://github.com/karlstratos/ammi | 11 | Learning discrete structured representations by adversarially maximizing mutual information | https://scholar.google.com/scholar?cluster=10269620235757517949&hl=en&as_sdt=0,10 | 2 | 2020 |
| Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks | 101 | icml | 0 | 0 | 2023-06-17 03:57:41.041000 | https://github.com/davidstutz/icml2020-confidence-calibrated-adversarial-training | 9 | Confidence-calibrated adversarial training: Generalizing to unseen attacks | https://scholar.google.com/scholar?cluster=14154958119332735093&hl=en&as_sdt=0,5 | 4 | 2020 |
| Adaptive Estimator Selection for Off-Policy Evaluation | 23 | icml | 2 | 0 | 2023-06-17 03:57:41.249000 | https://github.com/VowpalWabbit/slope-experiments | 3 | Adaptive estimator selection for off-policy evaluation | https://scholar.google.com/scholar?cluster=578911518697866009&hl=en&as_sdt=0,49 | 4 | 2020 |
| Multi-Agent Routing Value Iteration Network | 33 | icml | 14 | 0 | 2023-06-17 03:57:41.451000 | https://github.com/uber/MARVIN | 50 | Multi-agent routing value iteration network | https://scholar.google.com/scholar?cluster=16960600258669760447&hl=en&as_sdt=0,5 | 5 | 2020 |
| Distinguishing Cause from Effect Using Quantiles: Bivariate Quantile Causal Discovery | 18 | icml | 2 | 0 | 2023-06-17 03:57:41.652000 | https://github.com/tagas/bQCD | 2 | Distinguishing cause from effect using quantiles: Bivariate quantile causal discovery | https://scholar.google.com/scholar?cluster=15617920136874649205&hl=en&as_sdt=0,5 | 1 | 2020 |
| DropNet: Reducing Neural Network Complexity via Iterative Pruning | 25 | icml | 7 | 0 | 2023-06-17 03:57:41.854000 | https://github.com/tanchongmin/DropNet | 14 | Dropnet: Reducing neural network complexity via iterative pruning | https://scholar.google.com/scholar?cluster=5847979658470311835&hl=en&as_sdt=0,5 | 1 | 2020 |
| Clinician-in-the-Loop Decision Making: Reinforcement Learning with Near-Optimal Set-Valued Policies | 13 | icml | 3 | 0 | 2023-06-17 03:57:42.056000 | https://github.com/MLD3/RL-Set-Valued-Policy | 12 | Clinician-in-the-loop decision making: Reinforcement learning with near-optimal set-valued policies | https://scholar.google.com/scholar?cluster=2625470057202017453&hl=en&as_sdt=0,5 | 2 | 2020 |
| Variational Imitation Learning with Diverse-quality Demonstrations | 26 | icml | 3 | 0 | 2023-06-17 03:57:42.258000 | https://github.com/voot-t/vild_code | 13 | Variational imitation learning with diverse-quality demonstrations | https://scholar.google.com/scholar?cluster=17459982405311544718&hl=en&as_sdt=0,5 | 2 | 2020 |
| Inductive Relation Prediction by Subgraph Reasoning | 213 | icml | 50 | 9 | 2023-06-17 03:57:42.460000 | https://github.com/kkteru/grail | 166 | Inductive relation prediction by subgraph reasoning | https://scholar.google.com/scholar?cluster=14042316464156946923&hl=en&as_sdt=0,33 | 4 | 2020 |
| Few-shot Domain Adaptation by Causal Mechanism Transfer | 71 | icml | 13 | 41 | 2023-06-17 03:57:42.662000 | https://github.com/takeshi-teshima/few-shot-domain-adaptation-by-causal-mechanism-transfer | 34 | Few-shot domain adaptation by causal mechanism transfer | https://scholar.google.com/scholar?cluster=15173839596303603057&hl=en&as_sdt=0,5 | 3 | 2020 |
| Convolutional dictionary learning based auto-encoders for natural exponential-family distributions | 22 | icml | 1 | 0 | 2023-06-17 03:57:42.864000 | https://github.com/ds2p/dea | 2 | Convolutional dictionary learning based auto-encoders for natural exponential-family distributions | https://scholar.google.com/scholar?cluster=17717998361857407154&hl=en&as_sdt=0,47 | 3 | 2020 |
| Choice Set Optimization Under Discrete Choice Models of Group Decisions | 6 | icml | 1 | 0 | 2023-06-17 03:57:43.086000 | https://github.com/tomlinsonk/choice-set-opt | 9 | Choice set optimization under discrete choice models of group decisions | https://scholar.google.com/scholar?cluster=9509628446146574324&hl=en&as_sdt=0,5 | 5 | 2020 |
| TrajectoryNet: A Dynamic Optimal Transport Network for Modeling Cellular Dynamics | 69 | icml | 12 | 6 | 2023-06-17 03:57:43.288000 | https://github.com/KrishnaswamyLab/TrajectoryNet | 72 | Trajectorynet: A dynamic optimal transport network for modeling cellular dynamics | https://scholar.google.com/scholar?cluster=13927969516648778690&hl=en&as_sdt=0,33 | 8 | 2020 |
| Bayesian Learning from Sequential Data using Gaussian Processes with Signature Covariances | 29 | icml | 9 | 3 | 2023-06-17 03:57:43.490000 | https://github.com/tgcsaba/GPSig | 37 | Bayesian learning from sequential data using gaussian processes with signature covariances | https://scholar.google.com/scholar?cluster=5665279431482036771&hl=en&as_sdt=0,33 | 3 | 2020 |
| Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations | 75 | icml | 5 | 0 | 2023-06-17 03:57:43.693000 | https://github.com/ftramer/Excessive-Invariance | 25 | Fundamental tradeoffs between invariance and sensitivity to adversarial perturbations | https://scholar.google.com/scholar?cluster=12838198146332206865&hl=en&as_sdt=0,47 | 6 | 2020 |
| Bayesian Differential Privacy for Machine Learning | 58 | icml | 4 | 0 | 2023-06-17 03:57:43.895000 | https://github.com/AlekseiTriastcyn/bayesian-differential-privacy | 16 | Bayesian differential privacy for machine learning | https://scholar.google.com/scholar?cluster=2037504457051740866&hl=en&as_sdt=0,5 | 2 | 2020 |
| Single Point Transductive Prediction | 2 | icml | 0 | 0 | 2023-06-17 03:57:44.098000 | https://github.com/nileshtrip/SPTransducPredCode | 3 | Single point transductive prediction | https://scholar.google.com/scholar?cluster=4391877212575021385&hl=en&as_sdt=0,36 | 2 | 2020 |
| From ImageNet to Image Classification: Contextualizing Progress on Benchmarks | 111 | icml | 2 | 0 | 2023-06-17 03:57:44.299000 | https://github.com/MadryLab/ImageNetMultiLabel | 28 | From imagenet to image classification: Contextualizing progress on benchmarks | https://scholar.google.com/scholar?cluster=17622651192510371827&hl=en&as_sdt=0,5 | 9 | 2020 |
| Approximating Stacked and Bidirectional Recurrent Architectures with the Delayed Recurrent Neural Network | 11 | icml | 0 | 0 | 2023-06-17 03:57:44.502000 | https://github.com/TuKo/dRNN | 5 | Approximating stacked and bidirectional recurrent architectures with the delayed recurrent neural network | https://scholar.google.com/scholar?cluster=1436978091908679295&hl=en&as_sdt=0,14 | 3 | 2020 |
| Uncertainty Estimation Using a Single Deep Deterministic Neural Network | 304 | icml | 32 | 2 | 2023-06-17 03:57:44.703000 | https://github.com/y0ast/deterministic-uncertainty-quantification | 239 | Uncertainty estimation using a single deep deterministic neural network | https://scholar.google.com/scholar?cluster=16222536793080297152&hl=en&as_sdt=0,32 | 7 | 2020 |
| Born-Again Tree Ensembles | 50 | icml | 5 | 6 | 2023-06-17 03:57:44.937000 | https://github.com/vidalt/BA-Trees | 56 | Born-again tree ensembles | https://scholar.google.com/scholar?cluster=16560127278940498393&hl=en&as_sdt=0,5 | 4 | 2020 |
| New Oracle-Efficient Algorithms for Private Synthetic Data Release | 45 | icml | 2 | 0 | 2023-06-17 03:57:45.141000 | https://github.com/giusevtr/fem | 7 | New oracle-efficient algorithms for private synthetic data release | https://scholar.google.com/scholar?cluster=18163576365323257065&hl=en&as_sdt=0,36 | 2 | 2020 |
| Unsupervised Discovery of Interpretable Directions in the GAN Latent Space | 275 | icml | 53 | 16 | 2023-06-17 03:57:45.343000 | https://github.com/anvoynov/GANLatentDiscovery | 406 | Unsupervised discovery of interpretable directions in the gan latent space | https://scholar.google.com/scholar?cluster=13408893088338762457&hl=en&as_sdt=0,5 | 10 | 2020 |
| Safe Reinforcement Learning in Constrained Markov Decision Processes | 87 | icml | 8 | 0 | 2023-06-17 03:57:45.552000 | https://github.com/akifumi-wachi-4/safe_near_optimal_mdp | 38 | Safe reinforcement learning in constrained Markov decision processes | https://scholar.google.com/scholar?cluster=13376476556539351032&hl=en&as_sdt=0,44 | 1 | 2020 |
| Towards Accurate Post-training Network Quantization via Bit-Split and Stitching | 76 | icml | 7 | 0 | 2023-06-17 03:57:45.755000 | https://github.com/PeisongWang/BitSplit | 38 | Towards accurate post-training network quantization via bit-split and stitching | https://scholar.google.com/scholar?cluster=958273940309910649&hl=en&as_sdt=0,5 | 2 | 2020 |
| ROMA: Multi-Agent Reinforcement Learning with Emergent Roles | 137 | icml | 32 | 14 | 2023-06-17 03:57:45.958000 | https://github.com/TonghanWang/ROMA | 136 | Roma: Multi-agent reinforcement learning with emergent roles | https://scholar.google.com/scholar?cluster=10158010923788252116&hl=en&as_sdt=0,5 | 4 | 2020 |
| Continuously Indexed Domain Adaptation | 77 | icml | 18 | 3 | 2023-06-17 03:57:46.161000 | https://github.com/hehaodele/CIDA | 108 | Continuously indexed domain adaptation | https://scholar.google.com/scholar?cluster=3441708260891083426&hl=en&as_sdt=0,33 | 6 | 2020 |
| Frustratingly Simple Few-Shot Object Detection | 306 | icml | 215 | 56 | 2023-06-17 03:57:46.362000 | https://github.com/ucbdrive/few-shot-object-detection | 961 | Frustratingly simple few-shot object detection | https://scholar.google.com/scholar?cluster=13847197306360708920&hl=en&as_sdt=0,5 | 28 | 2020 |
| Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere | 946 | icml | 34 | 0 | 2023-06-17 03:57:46.578000 | https://github.com/SsnL/align_uniform | 354 | Understanding contrastive representation learning through alignment and uniformity on the hypersphere | https://scholar.google.com/scholar?cluster=5122266742982340747&hl=en&as_sdt=0,3 | 11 | 2020 |
| Enhanced POET: Open-ended Reinforcement Learning through Unbounded Invention of Learning Challenges and their Solutions | 73 | icml | 51 | 5 | 2023-06-17 03:57:46.781000 | https://github.com/uber-research/poet | 233 | Enhanced poet: Open-ended reinforcement learning through unbounded invention of learning challenges and their solutions | https://scholar.google.com/scholar?cluster=17583648324422024748&hl=en&as_sdt=0,44 | 15 | 2020 |
| Haar Graph Pooling | 62 | icml | 5 | 6 | 2023-06-17 03:57:46.983000 | https://github.com/YuGuangWang/HaarPool | 9 | Haar graph pooling | https://scholar.google.com/scholar?cluster=196487871230108211&hl=en&as_sdt=0,34 | 2 | 2020 |
| Deep Streaming Label Learning | 29 | icml | 2 | 1 | 2023-06-17 03:57:47.187000 | https://github.com/DSLLcode/DSLL | 5 | Deep streaming label learning | https://scholar.google.com/scholar?cluster=13962185185630699460&hl=en&as_sdt=0,5 | 1 | 2020 |
| BoXHED: Boosted eXact Hazard Estimator with Dynamic covariates | 7 | icml | 0 | 0 | 2023-06-17 03:57:47.389000 | https://github.com/BoXHED/BoXHED1.0 | 6 | BoXHED: Boosted eXact hazard estimator with dynamic covariates | https://scholar.google.com/scholar?cluster=4269847056654945250&hl=en&as_sdt=0,3 | 1 | 2020 |
| Optimizing Data Usage via Differentiable Rewards | 41 | icml | 0 | 0 | 2023-06-17 03:57:47.591000 | https://github.com/cindyxinyiwang/DataSelection | 2 | Optimizing data usage via differentiable rewards | https://scholar.google.com/scholar?cluster=4407582239871274683&hl=en&as_sdt=0,11 | 1 | 2020 |
| Loss Function Search for Face Recognition | 45 | icml | 8 | 5 | 2023-06-17 03:57:47.794000 | https://github.com/tiandunx/loss_function_search | 37 | Loss function search for face recognition | https://scholar.google.com/scholar?cluster=4661570129688704480&hl=en&as_sdt=0,31 | 3 | 2020 |
| Striving for Simplicity and Performance in Off-Policy DRL: Output Normalization and Non-Uniform Sampling | 20 | icml | 6 | 2 | 2023-06-17 03:57:47.996000 | https://github.com/AutumnWu/Streamlined-Off-Policy-Learning | 18 | Striving for simplicity and performance in off-policy DRL: Output normalization and non-uniform sampling | https://scholar.google.com/scholar?cluster=11197578875286418478&hl=en&as_sdt=0,5 | 4 | 2020 |
| Thompson Sampling via Local Uncertainty | 16 | icml | 2 | 1 | 2023-06-17 03:57:48.199000 | https://github.com/Zhendong-Wang/Thompson-Sampling-via-Local-Uncertainty | 3 | Thompson sampling via local uncertainty | https://scholar.google.com/scholar?cluster=15106467344904481899&hl=en&as_sdt=0,10 | 1 | 2020 |
| The Implicit and Explicit Regularization Effects of Dropout | 91 | icml | 2 | 0 | 2023-06-17 03:57:48.400000 | https://github.com/cwein3/dropout-analytical | 4 | The implicit and explicit regularization effects of dropout | https://scholar.google.com/scholar?cluster=7315580872864689276&hl=en&as_sdt=0,44 | 2 | 2020 |
| How Good is the Bayes Posterior in Deep Neural Networks Really? | 274 | icml | 7,322 | 1,026 | 2023-06-17 03:57:48.601000 | https://github.com/google-research/google-research | 29,791 | How good is the bayes posterior in deep neural networks really? | https://scholar.google.com/scholar?cluster=11185773961293705941&hl=en&as_sdt=0,36 | 727 | 2020 |
| State Space Expectation Propagation: Efficient Inference Schemes for Temporal Gaussian Processes | 12 | icml | 12 | 2 | 2023-06-17 03:57:48.804000 | https://github.com/AaltoML/kalman-jax | 86 | State space expectation propagation: Efficient inference schemes for temporal Gaussian processes | https://scholar.google.com/scholar?cluster=3634962580178312612&hl=en&as_sdt=0,5 | 10 | 2020 |
| Efficiently sampling functions from Gaussian process posteriors | 107 | icml | 16 | 0 | 2023-06-17 03:57:49.006000 | https://github.com/j-wilson/GPflowSampling | 57 | Efficiently sampling functions from Gaussian process posteriors | https://scholar.google.com/scholar?cluster=15698699983460471132&hl=en&as_sdt=0,39 | 3 | 2020 |
| Obtaining Adjustable Regularization for Free via Iterate Averaging | 4 | icml | 1 | 0 | 2023-06-17 03:57:49.208000 | https://github.com/uuujf/IterAvg | 3 | Obtaining adjustable regularization for free via iterate averaging | https://scholar.google.com/scholar?cluster=8907876046676470481&hl=en&as_sdt=0,23 | 1 | 2020 |
| DeltaGrad: Rapid retraining of machine learning models | 94 | icml | 1 | 1 | 2023-06-17 03:57:49.410000 | https://github.com/thuwuyinjun/DeltaGrad | 19 | Deltagrad: Rapid retraining of machine learning models | https://scholar.google.com/scholar?cluster=5989632010826923243&hl=en&as_sdt=0,5 | 1 | 2020 |
| On the Noisy Gradient Descent that Generalizes as SGD | 66 | icml | 2 | 0 | 2023-06-17 03:57:49.612000 | https://github.com/uuujf/MultiNoise | 4 | On the noisy gradient descent that generalizes as sgd | https://scholar.google.com/scholar?cluster=7998772173539396288&hl=en&as_sdt=0,5 | 2 | 2020 |
| Stronger and Faster Wasserstein Adversarial Attacks | 18 | icml | 9 | 1 | 2023-06-17 03:57:49.813000 | https://github.com/watml/fast-wasserstein-adversarial | 21 | Stronger and faster wasserstein adversarial attacks | https://scholar.google.com/scholar?cluster=5877536134148697532&hl=en&as_sdt=0,31 | 5 | 2020 |
| On the Generalization Effects of Linear Transformations in Data Augmentation | 57 | icml | 6 | 3 | 2023-06-17 03:57:50.016000 | https://github.com/SenWu/dauphin | 28 | On the generalization effects of linear transformations in data augmentation | https://scholar.google.com/scholar?cluster=18304073580439494047&hl=en&as_sdt=0,5 | 5 | 2020 |
| Generative Flows with Matrix Exponential | 4 | icml | 0 | 0 | 2023-06-17 03:57:50.218000 | https://github.com/changyi7231/MEF | 10 | Generative flows with matrix exponential | https://scholar.google.com/scholar?cluster=5544738884567808407&hl=en&as_sdt=0,5 | 1 | 2020 |
| Maximum-and-Concatenation Networks | 1 | icml | 0 | 0 | 2023-06-17 03:57:50.422000 | https://github.com/XingyuXie/Maximum-and-Concatenation-Networks | 3 | Maximum-and-concatenation networks | https://scholar.google.com/scholar?cluster=6894098060248560789&hl=en&as_sdt=0,24 | 3 | 2020 |
| Zeno++: Robust Fully Asynchronous SGD | 74 | icml | 2 | 0 | 2023-06-17 03:57:50.623000 | https://github.com/xcgoner/iclr2020_zeno_async | 11 | Zeno++: Robust fully asynchronous SGD | https://scholar.google.com/scholar?cluster=6498141081528459239&hl=en&as_sdt=0,44 | 3 | 2020 |
| On Variational Learning of Controllable Representations for Text without Supervision | 42 | icml | 7 | 2 | 2023-06-17 03:57:50.825000 | https://github.com/BorealisAI/CP-VAE | 26 | On variational learning of controllable representations for text without supervision | https://scholar.google.com/scholar?cluster=2089630781496630830&hl=en&as_sdt=0,7 | 5 | 2020 |
| Class-Weighted Classification: Trade-offs and Robust Approaches | 27 | icml | 1 | 0 | 2023-06-17 03:57:51.027000 | https://github.com/neilzxu/robust_weighted_classification | 6 | Class-weighted classification: Trade-offs and robust approaches | https://scholar.google.com/scholar?cluster=11254113557179327347&hl=en&as_sdt=0,33 | 3 | 2020 |
| Learning Autoencoders with Relational Regularization | 42 | icml | 5 | 1 | 2023-06-17 03:57:51.230000 | https://github.com/HongtengXu/Relational-AutoEncoders | 39 | Learning autoencoders with relational regularization | https://scholar.google.com/scholar?cluster=12327328629265717488&hl=en&as_sdt=0,5 | 3 | 2020 |
| Prediction-Guided Multi-Objective Reinforcement Learning for Continuous Robot Control | 61 | icml | 22 | 2 | 2023-06-17 03:57:51.434000 | https://github.com/mit-gfx/PGMORL | 75 | Prediction-guided multi-objective reinforcement learning for continuous robot control | https://scholar.google.com/scholar?cluster=7336223321111703903&hl=en&as_sdt=0,21 | 18 | 2020 |
| MetaFun: Meta-Learning with Iterative Functional Updates | 53 | icml | 1 | 0 | 2023-06-17 03:57:51.637000 | https://github.com/jinxu06/metafun-tensorflow | 15 | Metafun: Meta-learning with iterative functional updates | https://scholar.google.com/scholar?cluster=4986964761080027704&hl=en&as_sdt=0,5 | 3 | 2020 |
| Amortized Finite Element Analysis for Fast PDE-Constrained Optimization | 29 | icml | 3 | 1 | 2023-06-17 03:57:51.839000 | https://github.com/tianjuxue/AmorFEA | 10 | Amortized finite element analysis for fast pde-constrained optimization | https://scholar.google.com/scholar?cluster=14411842717926650131&hl=en&as_sdt=0,44 | 3 | 2020 |
| Feature Selection using Stochastic Gates | 83 | icml | 20 | 4 | 2023-06-17 03:57:52.041000 | https://github.com/runopti/stg | 74 | Feature selection using stochastic gates | https://scholar.google.com/scholar?cluster=3895875359750859329&hl=en&as_sdt=0,34 | 4 | 2020 |
| Energy-Based Processes for Exchangeable Data | 8 | icml | 7,322 | 1,026 | 2023-06-17 03:57:52.244000 | https://github.com/google-research/google-research | 29,791 | Energy-based processes for exchangeable data | https://scholar.google.com/scholar?cluster=11717820488260195326&hl=en&as_sdt=0,5 | 727 | 2020 |
| Randomized Smoothing of All Shapes and Sizes | 141 | icml | 6 | 1 | 2023-06-17 03:57:52.446000 | https://github.com/tonyduan/rs4a | 48 | Randomized smoothing of all shapes and sizes | https://scholar.google.com/scholar?cluster=4321255830555154678&hl=en&as_sdt=0,21 | 2 | 2020 |
| Improving Molecular Design by Stochastic Iterative Target Augmentation | 14 | icml | 4 | 0 | 2023-06-17 03:57:52.648000 | https://github.com/yangkevin2/icml2020-stochastic-iterative-target-augmentation | 8 | Improving molecular design by stochastic iterative target augmentation | https://scholar.google.com/scholar?cluster=13262578872318506866&hl=en&as_sdt=0,5 | 3 | 2020 |
| Multi-Agent Determinantal Q-Learning | 60 | icml | 7 | 12 | 2023-06-17 03:57:52.850000 | https://github.com/QDPP-GitHub/QDPP | 40 | Multi-agent determinantal q-learning | https://scholar.google.com/scholar?cluster=15130986787127087305&hl=en&as_sdt=0,33 | 2 | 2020 |
| Rethinking Bias-Variance Trade-off for Generalization of Neural Networks | 135 | icml | 7 | 2 | 2023-06-17 03:57:53.052000 | https://github.com/yaodongyu/Rethink-BiasVariance-Tradeoff | 51 | Rethinking bias-variance trade-off for generalization of neural networks | https://scholar.google.com/scholar?cluster=7345683172232852767&hl=en&as_sdt=0,25 | 4 | 2020 |
| Unsupervised Transfer Learning for Spatiotemporal Predictive Networks | 20 | icml | 4 | 1 | 2023-06-17 03:57:53.254000 | https://github.com/thuml/transferable-memory | 20 | Unsupervised transfer learning for spatiotemporal predictive networks | https://scholar.google.com/scholar?cluster=11334443058124456085&hl=en&as_sdt=0,21 | 4 | 2020 |
| Pretrained Generalized Autoregressive Model with Adaptive Probabilistic Label Clusters for Extreme Multi-label Text Classification | 30 | icml | 2 | 3 | 2023-06-17 03:57:53.457000 | https://github.com/huiyegit/APLC_XLNet | 14 | Pretrained generalized autoregressive model with adaptive probabilistic label clusters for extreme multi-label text classification | https://scholar.google.com/scholar?cluster=11309810770103233080&hl=en&as_sdt=0,5 | 1 | 2020 |
| Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection | 81 | icml | 7 | 1 | 2023-06-17 03:57:53.660000 | https://github.com/lushleaf/Network-Pruning-Greedy-Forward-Selection | 20 | Good subnetworks provably exist: Pruning via greedy forward selection | https://scholar.google.com/scholar?cluster=9077539701453917687&hl=en&as_sdt=0,5 | 2 | 2020 |
| Data Valuation using Reinforcement Learning | 109 | icml | 7,322 | 1,026 | 2023-06-17 03:57:53.862000 | https://github.com/google-research/google-research | 29,791 | Data valuation using reinforcement learning | https://scholar.google.com/scholar?cluster=12792068149668296468&hl=en&as_sdt=0,5 | 727 | 2020 |
| XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning | 40 | icml | 8 | 2 | 2023-06-17 03:57:54.063000 | https://github.com/EdwinKim3069/XtarNet | 27 | Xtarnet: Learning to extract task-adaptive representation for incremental few-shot learning | https://scholar.google.com/scholar?cluster=14540039022540446073&hl=en&as_sdt=0,5 | 3 | 2020 |
| When Does Self-Supervision Help Graph Convolutional Networks? | 161 | icml | 26 | 0 | 2023-06-17 03:57:54.266000 | https://github.com/Shen-Lab/SS-GCNs | 105 | When does self-supervision help graph convolutional networks? | https://scholar.google.com/scholar?cluster=8359089573172587095&hl=en&as_sdt=0,33 | 4 | 2020 |
| Graph Structure of Neural Networks | 108 | icml | 33 | 0 | 2023-06-17 03:57:54.469000 | https://github.com/facebookresearch/graph2nn | 142 | Graph structure of neural networks | https://scholar.google.com/scholar?cluster=4649234253279793186&hl=en&as_sdt=0,5 | 15 | 2020 |
| Intrinsic Reward Driven Imitation Learning via Generative Model | 33 | icml | 4 | 0 | 2023-06-17 03:57:54.671000 | https://github.com/xingruiyu/GIRIL | 12 | Intrinsic reward driven imitation learning via generative model | https://scholar.google.com/scholar?cluster=3469994683333919574&hl=en&as_sdt=0,16 | 3 | 2020 |
| Graph Convolutional Network for Recommendation with Low-pass Collaborative Filters | 63 | icml | 22 | 5 | 2023-06-17 03:57:54.873000 | https://github.com/Wenhui-Yu/LCFN | 67 | Graph convolutional network for recommendation with low-pass collaborative filters | https://scholar.google.com/scholar?cluster=1889227241401545976&hl=en&as_sdt=0,44 | 1 | 2020 |
| Training Deep Energy-Based Models with f-Divergence Minimization | 34 | icml | 6 | 4 | 2023-06-17 03:57:55.093000 | https://github.com/ermongroup/f-EBM | 35 | Training deep energy-based models with f-divergence minimization | https://scholar.google.com/scholar?cluster=2539049001962282394&hl=en&as_sdt=0,45 | 7 | 2020 |
| Graph Random Neural Features for Distance-Preserving Graph Representations | 11 | icml | 0 | 0 | 2023-06-17 03:57:55.295000 | https://github.com/dzambon/graph-random-neural-features | 6 | Graph random neural features for distance-preserving graph representations | https://scholar.google.com/scholar?cluster=2137393059005426125&hl=en&as_sdt=0,34 | 2 | 2020 |
| Scaling up Hybrid Probabilistic Inference with Logical and Arithmetic Constraints via Message Passing | 9 | icml | 0 | 0 | 2023-06-17 03:57:55.497000 | https://github.com/UCLA-StarAI/mpwmi | 4 | Scaling up hybrid probabilistic inference with logical and arithmetic constraints via message passing | https://scholar.google.com/scholar?cluster=11266053605918005936&hl=en&as_sdt=0,5 | 5 | 2020 |
| Learning Calibratable Policies using Programmatic Style-Consistency | 12 | icml | 3 | 0 | 2023-06-17 03:57:55.702000 | https://github.com/ezhan94/calibratable-style-consistency | 7 | Learning calibratable policies using programmatic style-consistency | https://scholar.google.com/scholar?cluster=14384068625001787252&hl=en&as_sdt=0,14 | 3 | 2020 |
| Robustness to Programmable String Transformations via Augmented Abstract Training | 12 | icml | 1 | 0 | 2023-06-17 03:57:55.905000 | https://github.com/ForeverZyh/A3T | 2 | Robustness to programmable string transformations via augmented abstract training | https://scholar.google.com/scholar?cluster=8464081788378179758&hl=en&as_sdt=0,5 | 2 | 2020 |
| Mix-n-Match: Ensemble and Compositional Methods for Uncertainty Calibration in Deep Learning | 119 | icml | 4 | 2 | 2023-06-17 03:57:56.107000 | https://github.com/zhang64-llnl/Mix-n-Match-Calibration | 28 | Mix-n-match: Ensemble and compositional methods for uncertainty calibration in deep learning | https://scholar.google.com/scholar?cluster=11733441465519935785&hl=en&as_sdt=0,5 | 4 | 2020 |
| Self-Attentive Hawkes Process | 135 | icml | 13 | 4 | 2023-06-17 03:57:56.310000 | https://github.com/QiangAIResearcher/sahp_repo | 41 | Self-attentive Hawkes process | https://scholar.google.com/scholar?cluster=10015751221024050727&hl=en&as_sdt=0,47 | 2 | 2020 |
| GradientDICE: Rethinking Generalized Offline Estimation of Stationary Values | 69 | icml | 658 | 6 | 2023-06-17 03:57:56.512000 | https://github.com/ShangtongZhang/DeepRL | 2,943 | Gradientdice: Rethinking generalized offline estimation of stationary values | https://scholar.google.com/scholar?cluster=13399124962585883315&hl=en&as_sdt=0,5 | 93 | 2020 |
| Provably Convergent Two-Timescale Off-Policy Actor-Critic with Function Approximation | 39 | icml | 658 | 6 | 2023-06-17 03:57:56.714000 | https://github.com/ShangtongZhang/DeepRL | 2,943 | Provably convergent two-timescale off-policy actor-critic with function approximation | https://scholar.google.com/scholar?cluster=13566441396966994806&hl=en&as_sdt=0,44 | 93 | 2020 |
| Invariant Causal Prediction for Block MDPs | 82 | icml | 9 | 0 | 2023-06-17 03:57:56.916000 | https://github.com/facebookresearch/icp-block-mdp | 43 | Invariant causal prediction for block mdps | https://scholar.google.com/scholar?cluster=18252595177085256687&hl=en&as_sdt=0,5 | 8 | 2020 |
| CAUSE: Learning Granger Causality from Event Sequences using Attribution Methods | 28 | icml | 8 | 4 | 2023-06-17 03:57:57.119000 | https://github.com/razhangwei/CAUSE | 22 | Cause: Learning granger causality from event sequences using attribution methods | https://scholar.google.com/scholar?cluster=1620742205028282603&hl=en&as_sdt=0,5 | 1 | 2020 |
| Perceptual Generative Autoencoders | 28 | icml | 1 | 0 | 2023-06-17 03:57:57.321000 | https://github.com/zj10/PGA | 23 | Perceptual generative autoencoders | https://scholar.google.com/scholar?cluster=8244017166037108075&hl=en&as_sdt=0,5 | 2 | 2020 |
| PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization | 1,245 | icml | 309 | 101 | 2023-06-17 03:57:57.524000 | https://github.com/google-research/pegasus | 1,505 | Pegasus: Pre-training with extracted gap-sentences for abstractive summarization | https://scholar.google.com/scholar?cluster=6497734628006555281&hl=en&as_sdt=0,23 | 49 | 2020 |
| On Leveraging Pretrained GANs for Generation with Limited Data | 65 | icml | 6 | 2 | 2023-06-17 03:57:57.726000 | https://github.com/MiaoyunZhao/GANTransferLimitedData | 59 | On leveraging pretrained GANs for generation with limited data | https://scholar.google.com/scholar?cluster=16391058196447072580&hl=en&as_sdt=0,10 | 3 | 2020 |
| Feature Quantization Improves GAN Training | 33 | icml | 30 | 6 | 2023-06-17 03:57:57.930000 | https://github.com/YangNaruto/FQ-GAN | 169 | Feature quantization improves gan training | https://scholar.google.com/scholar?cluster=18271199409635968326&hl=en&as_sdt=0,31 | 11 | 2020 |
| Sharp Composition Bounds for Gaussian Differential Privacy via Edgeworth Expansion | 11 | icml | 1 | 0 | 2023-06-17 03:57:58.132000 | https://github.com/enosair/gdp-edgeworth | 1 | Sharp composition bounds for Gaussian differential privacy via edgeworth expansion | https://scholar.google.com/scholar?cluster=9890314862207483858&hl=en&as_sdt=0,33 | 2 | 2020 |
| Error-Bounded Correction of Noisy Labels | 76 | icml | 5 | 3 | 2023-06-17 03:57:58.334000 | https://github.com/pingqingsheng/LRT | 15 | Error-bounded correction of noisy labels | https://scholar.google.com/scholar?cluster=16003512579511208211&hl=en&as_sdt=0,33 | 2 | 2020 |
| MoNet3D: Towards Accurate Monocular 3D Object Localization in Real Time | 11 | icml | 6 | 3 | 2023-06-17 03:57:58.536000 | https://github.com/CQUlearningsystemgroup/YicongPeng | 35 | Monet3d: Towards accurate monocular 3d object localization in real time | https://scholar.google.com/scholar?cluster=16905032404731743832&hl=en&as_sdt=0,11 | 6 | 2020 |
| Nonparametric Score Estimators | 20 | icml | 1 | 0 | 2023-06-17 03:57:58.738000 | https://github.com/miskcoo/kscore | 34 | Nonparametric score estimators | https://scholar.google.com/scholar?cluster=497538758665413874&hl=en&as_sdt=0,14 | 5 | 2020 |
| Robust Outlier Arm Identification | 2 | icml | 0 | 0 | 2023-06-17 03:57:58.941000 | https://github.com/yinglunz/ROAI_ICML2020 | 1 | Robust outlier arm identification | https://scholar.google.com/scholar?cluster=11900711973456670658&hl=en&as_sdt=0,11 | 1 | 2020 |
| Causal Effect Estimation and Optimal Dose Suggestions in Mobile Health | 9 | icml | 1 | 0 | 2023-06-17 03:57:59.144000 | https://github.com/lz2379/Mhealth | 1 | Causal effect estimation and optimal dose suggestions in mobile health | https://scholar.google.com/scholar?cluster=15932963727789756281&hl=en&as_sdt=0,39 | 1 | 2020 |
| Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization | 21 | icml | 4 | 1 | 2023-06-17 03:57:59.346000 | https://github.com/schzhu/learning-adversarially-robust-representations | 20 | Learning adversarially robust representations via worst-case mutual information maximization | https://scholar.google.com/scholar?cluster=16073902151794610018&hl=en&as_sdt=0,5 | 4 | 2020 |
| Laplacian Regularized Few-Shot Learning | 123 | icml | 8 | 2 | 2023-06-17 03:57:59.547000 | https://github.com/imtiazziko/LaplacianShot | 76 | Laplacian regularized few-shot learning | https://scholar.google.com/scholar?cluster=1752522898167620276&hl=en&as_sdt=0,5 | 4 | 2020 |
| Transformer Hawkes Process | 153 | icml | 43 | 14 | 2023-06-17 03:57:59.749000 | https://github.com/SimiaoZuo/Transformer-Hawkes-Process | 129 | Transformer hawkes process | https://scholar.google.com/scholar?cluster=16348815210194084709&hl=en&as_sdt=0,33 | 7 | 2020 |
| Massively Parallel and Asynchronous Tsetlin Machine Architecture Supporting Almost Constant-Time Scaling | 33 | icml | 2 | 2 | 2023-06-17 04:13:07.614000 | https://github.com/cair/PyTsetlinMachineCUDA | 37 | Massively parallel and asynchronous tsetlin machine architecture supporting almost constant-time scaling | https://scholar.google.com/scholar?cluster=14399815899714278833&hl=en&as_sdt=0,5 | 8 | 2021 |
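
The rows above are only a static preview. A minimal sketch of loading and querying the full split with the Hugging Face `datasets` library is shown below; the dataset identifier `user/icml-paper-repos` is a placeholder (substitute this dataset's actual path), and the column names and types follow the schema in the table header.

```python
# Minimal sketch, assuming the `datasets` library is installed and the dataset
# is published under the placeholder id "user/icml-paper-repos".
from datasets import load_dataset

ds = load_dataset("user/icml-paper-repos", split="train")  # placeholder id

# Keep ICML 2020 papers whose linked repository has at least 100 stars.
popular = ds.filter(
    lambda row: row["conference"] == "icml"
    and row["year"] == 2020
    and row["stars"] >= 100
)

# Sort by Google Scholar citations, most-cited first, and print a few columns.
top = popular.sort("citations_google_scholar", reverse=True)
for row in top.select(range(min(5, len(top)))):
    print(row["title"], row["citations_google_scholar"], row["repo_url"])
```

The same filtering could equally be done after `ds.to_pandas()` if you prefer DataFrame operations; the `filter`/`sort` calls above just avoid materializing the whole table in memory.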