| title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Mixed Cross Entropy Loss for Neural Machine Translation | 5 | icml | 1 | 0 | 2023-06-17 04:13:48.677000 | https://github.com/haorannlp/mix | 17 | Mixed cross entropy loss for neural machine translation | https://scholar.google.com/scholar?cluster=16791533551271975512&hl=en&as_sdt=0,5 | 1 | 2021 |
| Distributionally Robust Optimization with Markovian Data | 6 | icml | 0 | 0 | 2023-06-17 04:13:48.878000 | https://github.com/mkvdro/DRO_Markov | 2 | Distributionally robust optimization with Markovian data | https://scholar.google.com/scholar?cluster=13967502296963435329&hl=en&as_sdt=0,5 | 1 | 2021 |
| Communication-Efficient Distributed SVD via Local Power Iterations | 10 | icml | 0 | 0 | 2023-06-17 04:13:49.080000 | https://github.com/lx10077/LocalPower | 0 | Communication-efficient distributed SVD via local power iterations | https://scholar.google.com/scholar?cluster=1741371435444323515&hl=en&as_sdt=0,5 | 1 | 2021 |
| FILTRA: Rethinking Steerable CNN by Filter Transform | 2 | icml | 1 | 0 | 2023-06-17 04:13:49.284000 | https://github.com/prclibo/filtra | 7 | Filtra: Rethinking steerable CNN by filter transform | https://scholar.google.com/scholar?cluster=12773800134537729615&hl=en&as_sdt=0,5 | 1 | 2021 |
| TeraPipe: Token-Level Pipeline Parallelism for Training Large-Scale Language Models | 40 | icml | 3 | 5 | 2023-06-17 04:13:49.486000 | https://github.com/zhuohan123/terapipe | 45 | Terapipe: Token-level pipeline parallelism for training large-scale language models | https://scholar.google.com/scholar?cluster=9109745061137409325&hl=en&as_sdt=0,6 | 3 | 2021 |
| Towards Understanding and Mitigating Social Biases in Language Models | 123 | icml | 8 | 0 | 2023-06-17 04:13:49.689000 | https://github.com/pliang279/LM_bias | 48 | Towards understanding and mitigating social biases in language models | https://scholar.google.com/scholar?cluster=16764320017418997560&hl=en&as_sdt=0,5 | 4 | 2021 |
| Information Obfuscation of Graph Neural Networks | 27 | icml | 7 | 2 | 2023-06-17 04:13:49.891000 | https://github.com/liaopeiyuan/GAL | 35 | Information obfuscation of graph neural networks | https://scholar.google.com/scholar?cluster=17996715912972296815&hl=en&as_sdt=0,5 | 5 | 2021 |
| Guided Exploration with Proximal Policy Optimization using a Single Demonstration | 7 | icml | 1 | 0 | 2023-06-17 04:13:50.094000 | https://github.com/compsciencelab/ppo_D | 10 | Guided exploration with proximal policy optimization using a single demonstration | https://scholar.google.com/scholar?cluster=1058578842192260735&hl=en&as_sdt=0,26 | 2 | 2021 |
| Debiasing a First-order Heuristic for Approximate Bi-level Optimization | 3 | icml | 1 | 0 | 2023-06-17 04:13:50.297000 | https://github.com/xingyousong/ufom | 3 | Debiasing a first-order heuristic for approximate bi-level optimization | https://scholar.google.com/scholar?cluster=11037305189679806516&hl=en&as_sdt=0,50 | 2 | 2021 |
| Making transport more robust and interpretable by moving data through a small number of anchor points | 13 | icml | 3 | 1 | 2023-06-17 04:13:50.498000 | https://github.com/nerdslab/latentOT | 13 | Making transport more robust and interpretable by moving data through a small number of anchor points | https://scholar.google.com/scholar?cluster=14045713528225441550&hl=en&as_sdt=0,47 | 2 | 2021 |
| Straight to the Gradient: Learning to Use Novel Tokens for Neural Text Generation | 8 | icml | 4 | 0 | 2023-06-17 04:13:50.700000 | https://github.com/shawnlimn/ScaleGrad | 13 | Straight to the gradient: Learning to use novel tokens for neural text generation | https://scholar.google.com/scholar?cluster=743520526432802506&hl=en&as_sdt=0,36 | 1 | 2021 |
| Quasi-global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data | 57 | icml | 3 | 1 | 2023-06-17 04:13:50.903000 | https://github.com/epfml/quasi-global-momentum | 7 | Quasi-global momentum: Accelerating decentralized deep learning on heterogeneous data | https://scholar.google.com/scholar?cluster=11090813795485624273&hl=en&as_sdt=0,5 | 5 | 2021 |
| Learning by Turning: Neural Architecture Aware Optimisation | 11 | icml | 6 | 1 | 2023-06-17 04:13:51.106000 | https://github.com/jxbz/nero | 19 | Learning by turning: Neural architecture aware optimisation | https://scholar.google.com/scholar?cluster=9218227008920600415&hl=en&as_sdt=0,15 | 3 | 2021 |
| Just Train Twice: Improving Group Robustness without Training Group Information | 213 | icml | 14 | 1 | 2023-06-17 04:13:51.308000 | https://github.com/anniesch/jtt | 58 | Just train twice: Improving group robustness without training group information | https://scholar.google.com/scholar?cluster=13173846618257909762&hl=en&as_sdt=0,5 | 1 | 2021 |
| Event Outlier Detection in Continuous Time | 7 | icml | 0 | 0 | 2023-06-17 04:13:51.520000 | https://github.com/siqil/CPPOD | 9 | Event outlier detection in continuous time | https://scholar.google.com/scholar?cluster=11315185602040849494&hl=en&as_sdt=0,7 | 1 | 2021 |
| Heterogeneous Risk Minimization | 58 | icml | 8 | 0 | 2023-06-17 04:13:51.733000 | https://github.com/ljsthu/hrm | 19 | Heterogeneous risk minimization | https://scholar.google.com/scholar?cluster=12299879840182415633&hl=en&as_sdt=0,5 | 1 | 2021 |
| Elastic Graph Neural Networks | 70 | icml | 7 | 0 | 2023-06-17 04:13:51.936000 | https://github.com/lxiaorui/ElasticGNN | 35 | Elastic graph neural networks | https://scholar.google.com/scholar?cluster=7978714464929950404&hl=en&as_sdt=0,5 | 4 | 2021 |
| Coach-Player Multi-agent Reinforcement Learning for Dynamic Team Composition | 22 | icml | 5 | 1 | 2023-06-17 04:13:52.138000 | https://github.com/cranial-xix/marl-copa | 13 | Coach-player multi-agent reinforcement learning for dynamic team composition | https://scholar.google.com/scholar?cluster=16222834590436839078&hl=en&as_sdt=0,47 | 3 | 2021 |
| Selfish Sparse RNN Training | 29 | icml | 3 | 3 | 2023-06-17 04:13:52.340000 | https://github.com/Shiweiliuiiiiiii/Selfish-RNN | 10 | Selfish sparse rnn training | https://scholar.google.com/scholar?cluster=14857851775115975297&hl=en&as_sdt=0,5 | 1 | 2021 |
| Leveraging Public Data for Practical Private Query Release | 36 | icml | 0 | 0 | 2023-06-17 04:13:52.542000 | https://github.com/terranceliu/pmw-pub | 3 | Leveraging public data for practical private query release | https://scholar.google.com/scholar?cluster=10819180564771632569&hl=en&as_sdt=0,22 | 2 | 2021 |
| Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training | 56 | icml | 6 | 0 | 2023-06-17 04:13:52.744000 | https://github.com/Shiweiliuiiiiiii/In-Time-Over-Parameterization | 38 | Do we actually need dense over-parameterization? in-time over-parameterization in sparse training | https://scholar.google.com/scholar?cluster=17950677328551432354&hl=en&as_sdt=0,5 | 2 | 2021 |
| Group Fisher Pruning for Practical Network Compression | 66 | icml | 12 | 5 | 2023-06-17 04:13:52.947000 | https://github.com/jshilong/FisherPruning | 138 | Group fisher pruning for practical network compression | https://scholar.google.com/scholar?cluster=7436704720048829343&hl=en&as_sdt=0,44 | 6 | 2021 |
| Relative Positional Encoding for Transformers with Linear Complexity | 22 | icml | 7 | 3 | 2023-06-17 04:13:53.150000 | https://github.com/aliutkus/spe | 58 | Relative positional encoding for transformers with linear complexity | https://scholar.google.com/scholar?cluster=16520451235518396778&hl=en&as_sdt=0,37 | 4 | 2021 |
| Symmetric Spaces for Graph Embeddings: A Finsler-Riemannian Approach | 8 | icml | 7 | 0 | 2023-06-17 04:13:53.352000 | https://github.com/fedelopez77/sympa | 25 | Symmetric spaces for graph embeddings: A finsler-riemannian approach | https://scholar.google.com/scholar?cluster=12337649232069613673&hl=en&as_sdt=0,33 | 2 | 2021 |
| Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification | 9 | icml | 0 | 0 | 2023-06-17 04:13:53.555000 | https://github.com/leishida/Um-Classification | 5 | Binary classification from multiple unlabeled datasets via surrogate set classification | https://scholar.google.com/scholar?cluster=8249584082478727878&hl=en&as_sdt=0,7 | 1 | 2021 |
| Meta-Cal: Well-controlled Post-hoc Calibration by Ranking | 15 | icml | 2 | 0 | 2023-06-17 04:13:53.758000 | https://github.com/maxc01/metacal | 6 | Meta-cal: Well-controlled post-hoc calibration by ranking | https://scholar.google.com/scholar?cluster=4779443102063826651&hl=en&as_sdt=0,14 | 2 | 2021 |
| Local Algorithms for Finding Densely Connected Clusters | 5 | icml | 3 | 0 | 2023-06-17 04:13:53.960000 | https://github.com/pmacg/local-densely-connected-clusters | 4 | Local algorithms for finding densely connected clusters | https://scholar.google.com/scholar?cluster=2599205940153817748&hl=en&as_sdt=0,25 | 1 | 2021 |
| Learning to Generate Noise for Multi-Attack Robustness | 11 | icml | 2 | 1 | 2023-06-17 04:13:54.163000 | https://github.com/divyam3897/MNG_AC | 8 | Learning to generate noise for multi-attack robustness | https://scholar.google.com/scholar?cluster=10029031126071377800&hl=en&as_sdt=0,5 | 1 | 2021 |
| Domain Generalization using Causal Matching | 153 | icml | 30 | 11 | 2023-06-17 04:13:54.365000 | https://github.com/microsoft/robustdg | 159 | Domain generalization using causal matching | https://scholar.google.com/scholar?cluster=7680827305765663856&hl=en&as_sdt=0,5 | 10 | 2021 |
| Nonparametric Hamiltonian Monte Carlo | 5 | icml | 2 | 1 | 2023-06-17 04:13:54.567000 | https://github.com/fzaiser/nonparametric-hmc | 12 | Nonparametric Hamiltonian Monte Carlo | https://scholar.google.com/scholar?cluster=15980590487021793124&hl=en&as_sdt=0,26 | 1 | 2021 |
| KO codes: inventing nonlinear encoding and decoding for reliable wireless communication via deep-learning | 21 | icml | 8 | 1 | 2023-06-17 04:13:54.771000 | https://github.com/deepcomm/kocodes | 12 | Ko codes: inventing nonlinear encoding and decoding for reliable wireless communication via deep-learning | https://scholar.google.com/scholar?cluster=6409739785381196000&hl=en&as_sdt=0,31 | 4 | 2021 |
| Inverse Constrained Reinforcement Learning | 24 | icml | 3 | 2 | 2023-06-17 04:13:54.974000 | https://github.com/shehryar-malik/icrl | 13 | Inverse constrained reinforcement learning | https://scholar.google.com/scholar?cluster=6882447057123293006&hl=en&as_sdt=0,5 | 2 | 2021 |
| A Sampling-Based Method for Tensor Ring Decomposition | 20 | icml | 1 | 0 | 2023-06-17 04:13:55.177000 | https://github.com/OsmanMalik/tr-als-sampled | 6 | A sampling-based method for tensor ring decomposition | https://scholar.google.com/scholar?cluster=9925150278480736841&hl=en&as_sdt=0,10 | 2 | 2021 |
| Multi-Agent Training beyond Zero-Sum with Correlated Equilibrium Meta-Solvers | 18 | icml | 820 | 36 | 2023-06-17 04:13:55.380000 | https://github.com/deepmind/open_spiel | 3,698 | Multi-agent training beyond zero-sum with correlated equilibrium meta-solvers | https://scholar.google.com/scholar?cluster=13991149676180937828&hl=en&as_sdt=0,9 | 106 | 2021 |
| Neural Architecture Search without Training | 214 | icml | 58 | 7 | 2023-06-17 04:13:55.583000 | https://github.com/BayesWatch/nas-without-training | 432 | Neural architecture search without training | https://scholar.google.com/scholar?cluster=12821590639566718193&hl=en&as_sdt=0,34 | 15 | 2021 |
| UCB Momentum Q-learning: Correcting the bias without forgetting | 28 | icml | 1 | 1 | 2023-06-17 04:13:55.791000 | https://github.com/omardrwch/ucbmq_code | 2 | UCB Momentum Q-learning: Correcting the bias without forgetting | https://scholar.google.com/scholar?cluster=13418224994694979040&hl=en&as_sdt=0,5 | 2 | 2021 |
| An Integer Linear Programming Framework for Mining Constraints from Data | 4 | icml | 0 | 0 | 2023-06-17 04:13:55.994000 | https://github.com/uclanlp/ILPLearning | 2 | An Integer Linear Programming Framework for Mining Constraints from Data | https://scholar.google.com/scholar?cluster=15134580706124032020&hl=en&as_sdt=0,26 | 7 | 2021 |
| Signatured Deep Fictitious Play for Mean Field Games with Common Noise | 18 | icml | 2 | 0 | 2023-06-17 04:13:56.196000 | https://github.com/mmin0/SigDFP | 1 | Signatured deep fictitious play for mean field games with common noise | https://scholar.google.com/scholar?cluster=5737626410689821885&hl=en&as_sdt=0,5 | 2 | 2021 |
| Meta-StyleSpeech: Multi-Speaker Adaptive Text-to-Speech Generation | 67 | icml | 37 | 10 | 2023-06-17 04:13:56.405000 | https://github.com/KevinMIN95/StyleSpeech | 196 | Meta-stylespeech: Multi-speaker adaptive text-to-speech generation | https://scholar.google.com/scholar?cluster=9200152829644981336&hl=en&as_sdt=0,33 | 8 | 2021 |
| Offline Meta-Reinforcement Learning with Advantage Weighting | 57 | icml | 9 | 1 | 2023-06-17 04:13:56.607000 | https://github.com/eric-mitchell/macaw | 34 | Offline meta-reinforcement learning with advantage weighting | https://scholar.google.com/scholar?cluster=17977945892617234025&hl=en&as_sdt=0,47 | 2 | 2021 |
| The Power of Log-Sum-Exp: Sequential Density Ratio Matrix Estimation for Speed-Accuracy Optimization | 2 | icml | 1 | 0 | 2023-06-17 04:13:56.810000 | https://github.com/TaikiMiyagawa/MSPRT-TANDEM | 3 | The Power of Log-Sum-Exp: Sequential Density Ratio Matrix Estimation for Speed-Accuracy Optimization | https://scholar.google.com/scholar?cluster=8968954885886250341&hl=en&as_sdt=0,14 | 2 | 2021 |
| Efficient Deviation Types and Learning for Hindsight Rationality in Extensive-Form Games | 19 | icml | 2 | 0 | 2023-06-17 04:13:57.013000 | https://github.com/dmorrill10/hr_edl_experiments | 3 | Efficient deviation types and learning for hindsight rationality in extensive-form games | https://scholar.google.com/scholar?cluster=2350651197115820142&hl=en&as_sdt=0,33 | 2 | 2021 |
| Connecting Interpretability and Robustness in Decision Trees through Separation | 16 | icml | 0 | 0 | 2023-06-17 04:13:57.215000 | https://github.com/yangarbiter/interpretable-robust-trees | 12 | Connecting interpretability and robustness in decision trees through separation | https://scholar.google.com/scholar?cluster=2331497214666374393&hl=en&as_sdt=0,37 | 3 | 2021 |
| Oblivious Sketching for Logistic Regression | 10 | icml | 1 | 0 | 2023-06-17 04:13:57.418000 | https://github.com/cxan96/oblivious-sketching-logreg | 3 | Oblivious sketching for logistic regression | https://scholar.google.com/scholar?cluster=16316892732322711108&hl=en&as_sdt=0,5 | 1 | 2021 |
| HardCoRe-NAS: Hard Constrained diffeRentiable Neural Architecture Search | 23 | icml | 5 | 3 | 2023-06-17 04:13:57.621000 | https://github.com/Alibaba-MIIL/HardCoReNAS | 30 | Hardcore-nas: Hard constrained differentiable neural architecture search | https://scholar.google.com/scholar?cluster=12851686551366341896&hl=en&as_sdt=0,5 | 6 | 2021 |
| Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information | 17 | icml | 6 | 0 | 2023-06-17 04:13:57.824000 | https://github.com/willieneis/bayesian-algorithm-execution | 40 | Bayesian algorithm execution: Estimating computable properties of black-box functions using mutual information | https://scholar.google.com/scholar?cluster=10668214102939988393&hl=en&as_sdt=0,18 | 6 | 2021 |
| Incentivizing Compliance with Algorithmic Instruments | 2 | icml | 0 | 0 | 2023-06-17 04:13:58.026000 | https://github.com/DanielNgo207/Incentivizing-Compliance-with-Algorithmic-Instruments | 0 | Incentivizing compliance with algorithmic instruments | https://scholar.google.com/scholar?cluster=8032953671879607459&hl=en&as_sdt=0,39 | 1 | 2021 |
| Cross-model Back-translated Distillation for Unsupervised Machine Translation | 7 | icml | 3 | 1 | 2023-06-17 04:13:58.228000 | https://github.com/nxphi47/multiagent_crosstranslate | 4 | Cross-model back-translated distillation for unsupervised machine translation | https://scholar.google.com/scholar?cluster=12269896059746732525&hl=en&as_sdt=0,10 | 1 | 2021 |
| Optimal Transport Kernels for Sequential and Parallel Neural Architecture Search | 24 | icml | 3 | 0 | 2023-06-17 04:13:58.430000 | https://github.com/ntienvu/TW_NAS | 4 | Optimal transport kernels for sequential and parallel neural architecture search | https://scholar.google.com/scholar?cluster=12662732608463413645&hl=en&as_sdt=0,5 | 2 | 2021 |
| Interactive Learning from Activity Description | 20 | icml | 0 | 0 | 2023-06-17 04:13:58.633000 | https://github.com/khanhptnk/iliad | 6 | Interactive learning from activity description | https://scholar.google.com/scholar?cluster=6188595152759271430&hl=en&as_sdt=0,47 | 2 | 2021 |
| Data Augmentation for Meta-Learning | 57 | icml | 5 | 0 | 2023-06-17 04:13:58.836000 | https://github.com/RenkunNi/MetaAug | 25 | Data augmentation for meta-learning | https://scholar.google.com/scholar?cluster=2872867843367483483&hl=en&as_sdt=0,5 | 1 | 2021 |
| Improved Denoising Diffusion Probabilistic Models | 754 | icml | 332 | 68 | 2023-06-17 04:13:59.039000 | https://github.com/openai/improved-diffusion | 1,891 | Improved denoising diffusion probabilistic models | https://scholar.google.com/scholar?cluster=2227179395488568184&hl=en&as_sdt=0,5 | 99 | 2021 |
| AdaXpert: Adapting Neural Architecture for Growing Data | 8 | icml | 2 | 2 | 2023-06-17 04:13:59.242000 | https://github.com/mr-eggplant/adaxpert0 | 13 | Adaxpert: Adapting neural architecture for growing data | https://scholar.google.com/scholar?cluster=1668694704547918132&hl=en&as_sdt=0,37 | 1 | 2021 |
| WGAN with an Infinitely Wide Generator Has No Spurious Stationary Points | 2 | icml | 0 | 0 | 2023-06-17 04:13:59.445000 | https://github.com/sehyunkwon/Infinite-WGAN | 3 | Wgan with an infinitely wide generator has no spurious stationary points | https://scholar.google.com/scholar?cluster=2540862355442244934&hl=en&as_sdt=0,5 | 2 | 2021 |
| Accuracy, Interpretability, and Differential Privacy via Explainable Boosting | 16 | icml | 678 | 58 | 2023-06-17 04:13:59.648000 | https://github.com/interpretml/interpret | 5,546 | Accuracy, interpretability, and differential privacy via explainable boosting | https://scholar.google.com/scholar?cluster=3909488782505274678&hl=en&as_sdt=0,32 | 142 | 2021 |
| Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes | 35 | icml | 6 | 2 | 2023-06-17 04:13:59.850000 | https://github.com/LaurenceA/bayesfunc | 12 | Global inducing point variational posteriors for bayesian neural networks and deep gaussian processes | https://scholar.google.com/scholar?cluster=8024621603786330099&hl=en&as_sdt=0,14 | 3 | 2021 |
| Regularizing towards Causal Invariance: Linear Models with Proxies | 18 | icml | 2 | 0 | 2023-06-17 04:14:00.054000 | https://github.com/clinicalml/proxy-anchor-regression | 9 | Regularizing towards causal invariance: Linear models with proxies | https://scholar.google.com/scholar?cluster=5547608297314715512&hl=en&as_sdt=0,33 | 9 | 2021 |
| RNN with Particle Flow for Probabilistic Spatio-temporal Forecasting | 15 | icml | 4 | 0 | 2023-06-17 04:14:00.257000 | https://github.com/networkslab/rnn_flow | 18 | RNN with particle flow for probabilistic spatio-temporal forecasting | https://scholar.google.com/scholar?cluster=16256105255072962985&hl=en&as_sdt=0,43 | 4 | 2021 |
| Latent Space Energy-Based Model of Symbol-Vector Coupling for Text Generation and Classification | 17 | icml | 3 | 1 | 2023-06-17 04:14:00.460000 | https://github.com/bpucla/ibebm | 8 | Latent space energy-based model of symbol-vector coupling for text generation and classification | https://scholar.google.com/scholar?cluster=18132333076288060504&hl=en&as_sdt=0,5 | 2 | 2021 |
| Unsupervised Representation Learning via Neural Activation Coding | 2 | icml | 1 | 0 | 2023-06-17 04:14:00.663000 | https://github.com/yookoon/nac | 5 | Unsupervised Representation Learning via Neural Activation Coding | https://scholar.google.com/scholar?cluster=3527585526812184622&hl=en&as_sdt=0,22 | 2 | 2021 |
| Optimal Counterfactual Explanations in Tree Ensembles | 30 | icml | 5 | 0 | 2023-06-17 04:14:00.866000 | https://github.com/vidalt/OCEAN | 16 | Optimal counterfactual explanations in tree ensembles | https://scholar.google.com/scholar?cluster=1410339152566950271&hl=en&as_sdt=0,23 | 2 | 2021 |
| CombOptNet: Fit the Right NP-Hard Problem by Learning Integer Programming Constraints | 32 | icml | 10 | 1 | 2023-06-17 04:14:01.083000 | https://github.com/martius-lab/CombOptNet | 67 | Comboptnet: Fit the right np-hard problem by learning integer programming constraints | https://scholar.google.com/scholar?cluster=13237034191144507355&hl=en&as_sdt=0,11 | 4 | 2021 |
| How could Neural Networks understand Programs? | 37 | icml | 14 | 3 | 2023-06-17 04:14:01.286000 | https://github.com/pdlan/OSCAR | 116 | How could neural networks understand programs? | https://scholar.google.com/scholar?cluster=16362826083131548815&hl=en&as_sdt=0,44 | 4 | 2021 |
| Rissanen Data Analysis: Examining Dataset Characteristics via Description Length | 13 | icml | 0 | 0 | 2023-06-17 04:14:01.489000 | https://github.com/ethanjperez/rda | 33 | Rissanen data analysis: Examining dataset characteristics via description length | https://scholar.google.com/scholar?cluster=5428264289372921149&hl=en&as_sdt=0,33 | 1 | 2021 |
| Megaverse: Simulating Embodied Agents at One Million Experiences per Second | 12 | icml | 19 | 4 | 2023-06-17 04:14:01.691000 | https://github.com/alex-petrenko/megaverse | 201 | Megaverse: Simulating embodied agents at one million experiences per second | https://scholar.google.com/scholar?cluster=3066110392358323524&hl=en&as_sdt=0,3 | 8 | 2021 |
| Towards Practical Mean Bounds for Small Samples | 4 | icml | 1 | 0 | 2023-06-17 04:14:01.894000 | https://github.com/myphan9/small_sample_mean_bounds | 2 | Towards practical mean bounds for small samples | https://scholar.google.com/scholar?cluster=108164015875257038&hl=en&as_sdt=0,5 | 2 | 2021 |
| GeomCA: Geometric Evaluation of Data Representations | 8 | icml | 2 | 0 | 2023-06-17 04:14:02.098000 | https://github.com/petrapoklukar/GeomCA | 10 | Geomca: Geometric evaluation of data representations | https://scholar.google.com/scholar?cluster=1763637443737261657&hl=en&as_sdt=0,5 | 1 | 2021 |
| Grad-TTS: A Diffusion Probabilistic Model for Text-to-Speech | 167 | icml | 93 | 15 | 2023-06-17 04:14:02.300000 | https://github.com/huawei-noah/Speech-Backbones | 396 | Grad-tts: A diffusion probabilistic model for text-to-speech | https://scholar.google.com/scholar?cluster=6905767521784147251&hl=en&as_sdt=0,5 | 26 | 2021 |
| Bias-Free Scalable Gaussian Processes via Randomized Truncations | 13 | icml | 0 | 0 | 2023-06-17 04:14:02.503000 | https://github.com/cunningham-lab/RTGPS | 7 | Bias-free scalable gaussian processes via randomized truncations | https://scholar.google.com/scholar?cluster=5236118263143002712&hl=en&as_sdt=0,5 | 4 | 2021 |
| Dense for the Price of Sparse: Improved Performance of Sparsely Initialized Networks via a Subspace Offset | 5 | icml | 0 | 0 | 2023-06-17 04:14:02.705000 | https://github.com/IlanPrice/DCTpS | 12 | Dense for the price of sparse: Improved performance of sparsely initialized networks via a subspace offset | https://scholar.google.com/scholar?cluster=17879749331929716913&hl=en&as_sdt=0,36 | 1 | 2021 |
| Neural Transformation Learning for Deep Anomaly Detection Beyond Images | 54 | icml | 11 | 0 | 2023-06-17 04:14:02.908000 | https://github.com/boschresearch/NeuTraL-AD | 35 | Neural transformation learning for deep anomaly detection beyond images | https://scholar.google.com/scholar?cluster=1292087033558963213&hl=en&as_sdt=0,5 | 4 | 2021 |
| Optimization Planning for 3D ConvNets | 8 | icml | 0 | 0 | 2023-06-17 04:14:03.111000 | https://github.com/zhaofanqiu/optimization-planning-for-3d-convnets | 2 | Optimization planning for 3d convnets | https://scholar.google.com/scholar?cluster=17965785653886460675&hl=en&as_sdt=0,15 | 2 | 2021 |
| Learning Transferable Visual Models From Natural Language Supervision | 5,987 | icml | 2,336 | 151 | 2023-06-17 04:14:03.314000 | https://github.com/openai/CLIP | 15,759 | Learning transferable visual models from natural language supervision | https://scholar.google.com/scholar?cluster=15031020161691567042&hl=en&as_sdt=0,48 | 268 | 2021 |
| A General Framework For Detecting Anomalous Inputs to DNN Classifiers | 21 | icml | 3 | 3 | 2023-06-17 04:14:03.516000 | https://github.com/jayaram-r/adversarial-detection | 16 | A general framework for detecting anomalous inputs to dnn classifiers | https://scholar.google.com/scholar?cluster=7846344670241873650&hl=en&as_sdt=0,5 | 5 | 2021 |
| Towards Open Ad Hoc Teamwork Using Graph-based Policy Learning | 27 | icml | 6 | 0 | 2023-06-17 04:14:03.730000 | https://github.com/uoe-agents/GPL | 25 | Towards open ad hoc teamwork using graph-based policy learning | https://scholar.google.com/scholar?cluster=13446293545265914898&hl=en&as_sdt=0,5 | 4 | 2021 |
| Decoupling Value and Policy for Generalization in Reinforcement Learning | 62 | icml | 13 | 0 | 2023-06-17 04:14:03.932000 | https://github.com/rraileanu/idaac | 52 | Decoupling value and policy for generalization in reinforcement learning | https://scholar.google.com/scholar?cluster=12990450966698605101&hl=en&as_sdt=0,5 | 3 | 2021 |
| Differentially Private Sliced Wasserstein Distance | 9 | icml | 3 | 2 | 2023-06-17 04:14:04.136000 | https://github.com/arakotom/dp_swd | 5 | Differentially private sliced wasserstein distance | https://scholar.google.com/scholar?cluster=11153564524741628543&hl=en&as_sdt=0,33 | 1 | 2021 |
| Zero-Shot Text-to-Image Generation | 1,797 | icml | 1,904 | 65 | 2023-06-17 04:14:04.338000 | https://github.com/openai/DALL-E | 10,322 | Zero-shot text-to-image generation | https://scholar.google.com/scholar?cluster=18428055834209091582&hl=en&as_sdt=0,5 | 230 | 2021 |
| Autoregressive Denoising Diffusion Models for Multivariate Probabilistic Time Series Forecasting | 78 | icml | 168 | 63 | 2023-06-17 04:14:04.541000 | https://github.com/zalandoresearch/pytorch-ts | 1,006 | Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting | https://scholar.google.com/scholar?cluster=11453532699552258037&hl=en&as_sdt=0,14 | 24 | 2021 |
| Implicit Regularization in Tensor Factorization | 26 | icml | 0 | 0 | 2023-06-17 04:14:04.744000 | https://github.com/noamrazin/imp_reg_in_tf | 3 | Implicit regularization in tensor factorization | https://scholar.google.com/scholar?cluster=4594323532805369080&hl=en&as_sdt=0,5 | 2 | 2021 |
| Align, then memorise: the dynamics of learning with feedback alignment | 17 | icml | 2 | 0 | 2023-06-17 04:14:04.947000 | https://github.com/sdascoli/dfa-dynamics | 8 | Align, then memorise: the dynamics of learning with feedback alignment | https://scholar.google.com/scholar?cluster=10115011183031848291&hl=en&as_sdt=0,11 | 3 | 2021 |
| Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed | 42 | icml | 0 | 0 | 2023-06-17 04:14:05.150000 | https://github.com/mariaref/rfvs2lnn_GMM_online | 5 | Classifying high-dimensional gaussian mixtures: Where kernel methods fail and neural networks succeed | https://scholar.google.com/scholar?cluster=2175676811548405487&hl=en&as_sdt=0,5 | 2 | 2021 |
| Solving high-dimensional parabolic PDEs using the tensor train format | 32 | icml | 6 | 1 | 2023-06-17 04:14:05.353000 | https://github.com/lorenzrichter/PDE-backward-solver | 11 | Solving high-dimensional parabolic PDEs using the tensor train format | https://scholar.google.com/scholar?cluster=11792660313798176886&hl=en&as_sdt=0,5 | 2 | 2021 |
| Principled Simplicial Neural Networks for Trajectory Prediction | 37 | icml | 2 | 0 | 2023-06-17 04:14:05.555000 | https://github.com/nglaze00/SCoNe_GCN | 9 | Principled simplicial neural networks for trajectory prediction | https://scholar.google.com/scholar?cluster=4466528152103096087&hl=en&as_sdt=0,4 | 2 | 2021 |
| Representation Matters: Assessing the Importance of Subgroup Allocations in Training Data | 13 | icml | 1 | 0 | 2023-06-17 04:14:05.757000 | https://github.com/estherrolf/representation-matters | 3 | Representation matters: Assessing the importance of subgroup allocations in training data | https://scholar.google.com/scholar?cluster=9213574703320829677&hl=en&as_sdt=0,11 | 3 | 2021 |
| TeachMyAgent: a Benchmark for Automatic Curriculum Learning in Deep RL | 14 | icml | 4 | 2 | 2023-06-17 04:14:05.960000 | https://github.com/flowersteam/TeachMyAgent | 56 | Teachmyagent: a benchmark for automatic curriculum learning in deep rl | https://scholar.google.com/scholar?cluster=11016662361926634008&hl=en&as_sdt=0,5 | 9 | 2021 |
| Discretization Drift in Two-Player Games | 6 | icml | 2,436 | 170 | 2023-06-17 04:14:06.162000 | https://github.com/deepmind/deepmind-research | 11,905 | Discretization drift in two-player games | https://scholar.google.com/scholar?cluster=5098459478601130257&hl=en&as_sdt=0,5 | 336 | 2021 |
| Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement | 8 | icml | 1 | 1 | 2023-06-17 04:14:06.364000 | https://github.com/dtak/hierarchical-disentanglement | 5 | Benchmarks, algorithms, and metrics for hierarchical disentanglement | https://scholar.google.com/scholar?cluster=9234964175960458338&hl=en&as_sdt=0,5 | 2 | 2021 |
| PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees | 80 | icml | 12 | 4 | 2023-06-17 04:14:06.566000 | https://github.com/jonasrothfuss/meta_learning_pacoh | 23 | PACOH: Bayes-optimal meta-learning with PAC-guarantees | https://scholar.google.com/scholar?cluster=12050746952935759142&hl=en&as_sdt=0,30 | 5 | 2021 |
| Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding | 20 | icml | 0 | 0 | 2023-06-17 04:14:06.768000 | https://github.com/ryoungj/mcbits | 13 | Improving lossless compression rates via monte carlo bits-back coding | https://scholar.google.com/scholar?cluster=1052321349567422387&hl=en&as_sdt=0,5 | 2 | 2021 |
| On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes | 1 | icml | 1 | 0 | 2023-06-17 04:14:06.971000 | https://github.com/timrudner/snr_issues_in_deep_gps | 5 | On signal-to-noise ratio issues in variational inference for deep Gaussian processes | https://scholar.google.com/scholar?cluster=16244183498083641614&hl=en&as_sdt=0,16 | 3 | 2021 |
| Tilting the playing field: Dynamical loss functions for machine learning | 13 | icml | 0 | 2 | 2023-06-17 04:14:07.174000 | https://github.com/miguel-rg/dynamical-loss-functions | 3 | Tilting the playing field: Dynamical loss functions for machine learning | https://scholar.google.com/scholar?cluster=1722474778051641263&hl=en&as_sdt=0,33 | 1 | 2021 |
| UnICORNN: A recurrent model for learning very long time dependencies | 35 | icml | 3 | 0 | 2023-06-17 04:14:07.376000 | https://github.com/tk-rusch/unicornn | 23 | UnICORNN: A recurrent model for learning very long time dependencies | https://scholar.google.com/scholar?cluster=16728515819525304575&hl=en&as_sdt=0,5 | 2 | 2021 |
| Simple and Effective VAE Training with Calibrated Decoders | 52 | icml | 1 | 0 | 2023-06-17 04:14:07.579000 | https://github.com/orybkin/sigma-vae | 25 | Simple and effective VAE training with calibrated decoders | https://scholar.google.com/scholar?cluster=16943299314546110740&hl=en&as_sdt=0,26 | 3 | 2021 |
| Model-Based Reinforcement Learning via Latent-Space Collocation | 17 | icml | 0 | 0 | 2023-06-17 04:14:07.783000 | https://github.com/zchuning/latco | 26 | Model-based reinforcement learning via latent-space collocation | https://scholar.google.com/scholar?cluster=2726935776109554696&hl=en&as_sdt=0,5 | 4 | 2021 |
| Training Data Subset Selection for Regression with Controlled Generalization Error | 8 | icml | 1 | 0 | 2023-06-17 04:14:07.985000 | https://github.com/abir-de/SELCON | 7 | Training data subset selection for regression with controlled generalization error | https://scholar.google.com/scholar?cluster=8877772987506172355&hl=en&as_sdt=0,5 | 2 | 2021 |
| Momentum Residual Neural Networks | 39 | icml | 17 | 6 | 2023-06-17 04:14:08.188000 | https://github.com/michaelsdr/momentumnet | 204 | Momentum residual neural networks | https://scholar.google.com/scholar?cluster=195539269682246494&hl=en&as_sdt=0,10 | 8 | 2021 |
| Recomposing the Reinforcement Learning Building Blocks with Hypernetworks | 12 | icml | 2 | 2 | 2023-06-17 04:14:08.390000 | https://github.com/keynans/HypeRL | 16 | Recomposing the reinforcement learning building blocks with hypernetworks | https://scholar.google.com/scholar?cluster=11431615300192492432&hl=en&as_sdt=0,7 | 2 | 2021 |
| A Representation Learning Perspective on the Importance of Train-Validation Splitting in Meta-Learning | 12 | icml | 1 | 0 | 2023-06-17 04:14:08.593000 | https://github.com/nsaunshi/meta_tr_val_split | 2 | A representation learning perspective on the importance of train-validation splitting in meta-learning | https://scholar.google.com/scholar?cluster=15485330124938854681&hl=en&as_sdt=0,14 | 2 | 2021 |
| Linear Transformers Are Secretly Fast Weight Programmers | 77 | icml | 9 | 0 | 2023-06-17 04:14:08.796000 | https://github.com/ischlag/fast-weight-transformers | 80 | Linear transformers are secretly fast weight programmers | https://scholar.google.com/scholar?cluster=7929763198773172485&hl=en&as_sdt=0,39 | 5 | 2021 |
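Each record in this table follows the same twelve-column layout (title, citations_google_scholar, conference, forks, issues, lastModified, repo_url, stars, title_google_scholar, url_google_scholar, watchers, year). As a minimal sketch of how one might turn a single pipe-delimited row into a record, the `parse_row` helper below is illustrative only (not part of any dataset tooling); the sample row is copied from the first entry of the table.

```python
# Parse one markdown table row of the dataset above into a dict.
# COLUMNS follows the table header order; INT_COLUMNS are the numeric fields,
# some of which use thousands separators (e.g. "15,759").
COLUMNS = [
    "title", "citations_google_scholar", "conference", "forks", "issues",
    "lastModified", "repo_url", "stars", "title_google_scholar",
    "url_google_scholar", "watchers", "year",
]
INT_COLUMNS = {"citations_google_scholar", "forks", "issues", "stars", "watchers", "year"}

def parse_row(line: str) -> dict:
    """Split a `| a | b | ... |` markdown row and coerce integer columns."""
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    record = dict(zip(COLUMNS, cells))
    for col in INT_COLUMNS:
        record[col] = int(record[col].replace(",", ""))  # "15,759" -> 15759
    return record

sample = ("| Mixed Cross Entropy Loss for Neural Machine Translation | 5 | icml "
          "| 1 | 0 | 2023-06-17 04:13:48.677000 | https://github.com/haorannlp/mix "
          "| 17 | Mixed cross entropy loss for neural machine translation "
          "| https://scholar.google.com/scholar?cluster=16791533551271975512&hl=en&as_sdt=0,5 "
          "| 1 | 2021 |")
row = parse_row(sample)
print(row["repo_url"], row["stars"], row["year"])  # prints: https://github.com/haorannlp/mix 17 2021
```

Note that this simple `split("|")` works only because no cell in this table contains a literal pipe character.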