Schema: `title` (string), `citations_google_scholar` (int64), `conference` (string, 5 classes), `forks` (int64), `issues` (int64), `lastModified` (string), `repo_url` (string), `stars` (int64), `title_google_scholar` (string), `url_google_scholar` (string), `watchers` (int64), `year` (int64).

| title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
|---|---|---|---|---|---|---|---|---|---|---|---|
| RLlib: Abstractions for Distributed Reinforcement Learning | 642 | icml | 4,893 | 2,935 | 2023-06-17 02:59:36.226000 | https://github.com/ray-project/ray | 26,195 | RLlib: Abstractions for distributed reinforcement learning | https://scholar.google.com/scholar?cluster=9535249560181579239&hl=en&as_sdt=0,11 | 450 | 2018 |
| On the Spectrum of Random Features Maps of High Dimensional Data | 48 | icml | 5 | 0 | 2023-06-17 02:59:36.440000 | https://github.com/Zhenyu-LIAO/RMT4RFM | 7 | On the spectrum of random features maps of high dimensional data | https://scholar.google.com/scholar?cluster=4838372697610829936&hl=en&as_sdt=0,44 | 1 | 2018 |
| Reviving and Improving Recurrent Back-Propagation | 95 | icml | 4 | 2 | 2023-06-17 02:59:36.654000 | https://github.com/lrjconan/RBP | 36 | Reviving and improving recurrent back-propagation | https://scholar.google.com/scholar?cluster=8778638717316926195&hl=en&as_sdt=0,5 | 3 | 2018 |
| Generalized Robust Bayesian Committee Machine for Large-scale Gaussian Process Regression | 79 | icml | 4 | 1 | 2023-06-17 02:59:36.869000 | https://github.com/LiuHaiTao01/GRBCM | 9 | Generalized robust Bayesian committee machine for large-scale Gaussian process regression | https://scholar.google.com/scholar?cluster=8338496144713791124&hl=en&as_sdt=0,33 | 3 | 2018 |
| Delayed Impact of Fair Machine Learning | 410 | icml | 8 | 1 | 2023-06-17 02:59:37.082000 | https://github.com/lydiatliu/delayedimpact | 12 | Delayed impact of fair machine learning | https://scholar.google.com/scholar?cluster=5181623229195224544&hl=en&as_sdt=0,5 | 6 | 2018 |
| Open Category Detection with PAC Guarantees | 81 | icml | 1 | 0 | 2023-06-17 02:59:37.295000 | https://github.com/liusi2019/ocd | 6 | Open category detection with PAC guarantees | https://scholar.google.com/scholar?cluster=16442088261883676309&hl=en&as_sdt=0,3 | 1 | 2018 |
| Batch Bayesian Optimization via Multi-objective Acquisition Ensemble for Automated Analog Circuit Design | 101 | icml | 7 | 1 | 2023-06-17 02:59:37.510000 | https://github.com/Alaya-in-Matrix/MACE | 18 | Batch Bayesian optimization via multi-objective acquisition ensemble for automated analog circuit design | https://scholar.google.com/scholar?cluster=18060661280078108001&hl=en&as_sdt=0,41 | 2 | 2018 |
| Celer: a Fast Solver for the Lasso with Dual Extrapolation | 71 | icml | 29 | 23 | 2023-06-17 02:59:37.724000 | https://github.com/mathurinm/celer | 172 | Celer: a fast solver for the lasso with dual extrapolation | https://scholar.google.com/scholar?cluster=5377261088300700033&hl=en&as_sdt=0,5 | 11 | 2018 |
| Dimensionality-Driven Learning with Noisy Labels | 347 | icml | 13 | 7 | 2023-06-17 02:59:37.938000 | https://github.com/xingjunm/dimensionality-driven-learning | 54 | Dimensionality-driven learning with noisy labels | https://scholar.google.com/scholar?cluster=13671594748199391279&hl=en&as_sdt=0,5 | 6 | 2018 |
| Orthogonal Machine Learning: Power and Limitations | 32 | icml | 1 | 0 | 2023-06-17 02:59:38.152000 | https://github.com/IliasZadik/double_orthogonal_ml | 9 | Orthogonal machine learning: Power and limitations | https://scholar.google.com/scholar?cluster=5809392484253895551&hl=en&as_sdt=0,14 | 3 | 2018 |
| Learning Adversarially Fair and Transferable Representations | 552 | icml | 12 | 0 | 2023-06-17 02:59:38.366000 | https://github.com/VectorInstitute/laftr | 50 | Learning adversarially fair and transferable representations | https://scholar.google.com/scholar?cluster=6932272369084023440&hl=en&as_sdt=0,47 | 6 | 2018 |
| Iterative Amortized Inference | 139 | icml | 9 | 1 | 2023-06-17 02:59:38.579000 | https://github.com/joelouismarino/iterative_inference | 43 | Iterative amortized inference | https://scholar.google.com/scholar?cluster=11655024897433506011&hl=en&as_sdt=0,43 | 3 | 2018 |
| Optimization, fast and slow: optimally switching between local and Bayesian optimization | 33 | icml | 9 | 2 | 2023-06-17 02:59:38.792000 | https://github.com/markm541374/gpbo | 25 | Optimization, fast and slow: optimally switching between local and Bayesian optimization | https://scholar.google.com/scholar?cluster=6241493477815440111&hl=en&as_sdt=0,5 | 2 | 2018 |
| Which Training Methods for GANs do actually Converge? | 1,241 | icml | 115 | 12 | 2023-06-17 02:59:39.006000 | https://github.com/LMescheder/GAN_stability | 900 | Which training methods for GANs do actually converge? | https://scholar.google.com/scholar?cluster=11334901664651510839&hl=en&as_sdt=0,5 | 22 | 2018 |
| prDeep: Robust Phase Retrieval with a Flexible Deep Network | 151 | icml | 12 | 1 | 2023-06-17 02:59:39.219000 | https://github.com/ricedsp/prDeep | 37 | prDeep: Robust phase retrieval with a flexible deep network | https://scholar.google.com/scholar?cluster=13840213498750434607&hl=en&as_sdt=0,44 | 3 | 2018 |
| One-Shot Segmentation in Clutter | 39 | icml | 11 | 0 | 2023-06-17 02:59:39.433000 | https://github.com/michaelisc/cluttered-omniglot | 47 | One-shot segmentation in clutter | https://scholar.google.com/scholar?cluster=14253967975584352267&hl=en&as_sdt=0,5 | 4 | 2018 |
| Differentiable plasticity: training plastic neural networks with backpropagation | 151 | icml | 71 | 3 | 2023-06-17 02:59:39.646000 | https://github.com/uber-common/differentiable-plasticity | 389 | Differentiable plasticity: training plastic neural networks with backpropagation | https://scholar.google.com/scholar?cluster=16849084099727983459&hl=en&as_sdt=0,5 | 27 | 2018 |
| DICOD: Distributed Convolutional Coordinate Descent for Convolutional Sparse Coding | 21 | icml | 1 | 1 | 2023-06-17 02:59:39.860000 | https://github.com/tomMoral/dicod | 12 | Dicod: Distributed convolutional coordinate descent for convolutional sparse coding | https://scholar.google.com/scholar?cluster=6841370809688839469&hl=en&as_sdt=0,25 | 3 | 2018 |
| Nearly Optimal Robust Subspace Tracking | 30 | icml | 4 | 0 | 2023-06-17 02:59:40.073000 | https://github.com/praneethmurthy/NORST | 7 | Nearly optimal robust subspace tracking | https://scholar.google.com/scholar?cluster=11197141106222317789&hl=en&as_sdt=0,14 | 3 | 2018 |
| Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic Geometry | 341 | icml | 221 | 29 | 2023-06-17 02:59:40.286000 | https://github.com/facebookresearch/poincare-embeddings | 1,592 | Learning continuous hierarchies in the lorentz model of hyperbolic geometry | https://scholar.google.com/scholar?cluster=5235601311596588081&hl=en&as_sdt=0,10 | 52 | 2018 |
| SparseMAP: Differentiable Sparse Structured Inference | 119 | icml | 9 | 3 | 2023-06-17 02:59:40.501000 | https://github.com/vene/sparsemap | 109 | Sparsemap: Differentiable sparse structured inference | https://scholar.google.com/scholar?cluster=16676407380618945031&hl=en&as_sdt=0,24 | 9 | 2018 |
| A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations | 144 | icml | 0 | 0 | 2023-06-17 02:59:40.714000 | https://github.com/weilinie/BackpropVis | 5 | A theoretical explanation for perplexing behaviors of backpropagation-based visualizations | https://scholar.google.com/scholar?cluster=7254168770426119962&hl=en&as_sdt=0,47 | 2 | 2018 |
| Self-Imitation Learning | 274 | icml | 40 | 4 | 2023-06-17 02:59:40.928000 | https://github.com/junhyukoh/self-imitation-learning | 269 | Self-imitation learning | https://scholar.google.com/scholar?cluster=6282132634766578030&hl=en&as_sdt=0,31 | 16 | 2018 |
| Learning Localized Spatio-Temporal Models From Streaming Data | 1 | icml | 0 | 3 | 2023-06-17 02:59:41.142000 | https://github.com/Muhammad-Osama/Localized-Spatio-temporal-Models | 7 | Learning localized spatio-temporal models from streaming data | https://scholar.google.com/scholar?cluster=6273621603567429406&hl=en&as_sdt=0,33 | 1 | 2018 |
| Efficient First-Order Algorithms for Adaptive Signal Denoising | 5 | icml | 1 | 0 | 2023-06-17 02:59:41.355000 | https://github.com/ostrodmit/AlgoRec | 6 | Efficient first-order algorithms for adaptive signal denoising | https://scholar.google.com/scholar?cluster=16164313069281185033&hl=en&as_sdt=0,10 | 3 | 2018 |
| Analyzing Uncertainty in Neural Machine Translation | 748 | icml | 8 | 0 | 2023-06-17 02:59:41.569000 | https://github.com/facebookresearch/analyzing-uncertainty-nmt | 32 | Analyzing uncertainty in neural machine translation | https://scholar.google.com/scholar?cluster=1522001537063991105&hl=en&as_sdt=0,5 | 56 | 2018 |
| Max-Mahalanobis Linear Discriminant Analysis Networks | 46 | icml | 19 | 1 | 2023-06-17 02:59:41.783000 | https://github.com/P2333/Max-Mahalanobis-Training | 87 | Max-mahalanobis linear discriminant analysis networks | https://scholar.google.com/scholar?cluster=1310490945606447616&hl=en&as_sdt=0,33 | 4 | 2018 |
| Stochastic Variance-Reduced Policy Gradient | 145 | icml | 4 | 0 | 2023-06-17 02:59:41.997000 | https://github.com/Dam930/rllab | 3 | Stochastic variance-reduced policy gradient | https://scholar.google.com/scholar?cluster=10229080169981298445&hl=en&as_sdt=0,38 | 3 | 2018 |
| PIPPS: Flexible Model-Based Policy Search Robust to the Curse of Chaos | 56 | icml | 2 | 0 | 2023-06-17 02:59:42.211000 | https://github.com/proppo/pipps_demo | 0 | PIPPS: Flexible model-based policy search robust to the curse of chaos | https://scholar.google.com/scholar?cluster=8640168252000745898&hl=en&as_sdt=0,5 | 2 | 2018 |
| Local Convergence Properties of SAGA/Prox-SVRG and Acceleration | 38 | icml | 2 | 0 | 2023-06-17 02:59:42.426000 | https://github.com/jliang993/Local-VRSGD | 2 | Local convergence properties of SAGA/Prox-SVRG and acceleration | https://scholar.google.com/scholar?cluster=12517501002751750903&hl=en&as_sdt=0,33 | 3 | 2018 |
| Learning Dynamics of Linear Denoising Autoencoders | 24 | icml | 3 | 1 | 2023-06-17 02:59:42.640000 | https://github.com/arnupretorius/lindaedynamics_icml2018 | 12 | Learning dynamics of linear denoising autoencoders | https://scholar.google.com/scholar?cluster=11573052296697932394&hl=en&as_sdt=0,6 | 3 | 2018 |
| JointGAN: Multi-Domain Joint Distribution Learning with Generative Adversarial Nets | 40 | icml | 8 | 1 | 2023-06-17 02:59:42.856000 | https://github.com/sdai654416/Joint-GAN | 19 | Jointgan: Multi-domain joint distribution learning with generative adversarial nets | https://scholar.google.com/scholar?cluster=17442133177721066359&hl=en&as_sdt=0,25 | 1 | 2018 |
| Selecting Representative Examples for Program Synthesis | 30 | icml | 3 | 1 | 2023-06-17 02:59:43.071000 | https://github.com/evanthebouncy/icml2018_selecting_representative_examples | 11 | Selecting representative examples for program synthesis | https://scholar.google.com/scholar?cluster=18419281465561462811&hl=en&as_sdt=0,22 | 3 | 2018 |
| DCFNet: Deep Neural Network with Decomposed Convolutional Filters | 58 | icml | 1 | 0 | 2023-06-17 02:59:43.284000 | https://github.com/xycheng/DCFNet | 11 | DCFNet: Deep neural network with decomposed convolutional filters | https://scholar.google.com/scholar?cluster=6785841352849465563&hl=en&as_sdt=0,5 | 2 | 2018 |
| Can Deep Reinforcement Learning Solve Erdos-Selfridge-Spencer Games? | 31 | icml | 0 | 0 | 2023-06-17 02:59:43.498000 | https://github.com/rubai5/ESS_Game | 7 | Can deep reinforcement learning solve Erdos-Selfridge-Spencer games? | https://scholar.google.com/scholar?cluster=5045759722516886464&hl=en&as_sdt=0,5 | 1 | 2018 |
| SAFFRON: an Adaptive Algorithm for Online Control of the False Discovery Rate | 46 | icml | 3 | 0 | 2023-06-17 02:59:43.713000 | https://github.com/tijana-zrnic/SAFFRONcode | 7 | SAFFRON: an adaptive algorithm for online control of the false discovery rate | https://scholar.google.com/scholar?cluster=3162538214212248602&hl=en&as_sdt=0,5 | 0 | 2018 |
| QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning | 1,465 | icml | 360 | 54 | 2023-06-17 02:59:43.927000 | https://github.com/oxwhirl/pymarl | 1,474 | Monotonic value function factorisation for deep multi-agent reinforcement learning | https://scholar.google.com/scholar?cluster=3975132673723125155&hl=en&as_sdt=0,33 | 32 | 2018 |
| Learning to Reweight Examples for Robust Deep Learning | 1,199 | icml | 53 | 8 | 2023-06-17 02:59:44.141000 | https://github.com/uber-research/learning-to-reweight-examples | 266 | Learning to reweight examples for robust deep learning | https://scholar.google.com/scholar?cluster=17871432661582272860&hl=en&as_sdt=0,5 | 11 | 2018 |
| Fast Information-theoretic Bayesian Optimisation | 48 | icml | 2 | 0 | 2023-06-17 02:59:44.355000 | https://github.com/rubinxin/FITBO | 16 | Fast information-theoretic Bayesian optimisation | https://scholar.google.com/scholar?cluster=12232335065092117172&hl=en&as_sdt=0,14 | 2 | 2018 |
| Probabilistic Boolean Tensor Decomposition | 18 | icml | 5 | 0 | 2023-06-17 02:59:44.570000 | https://github.com/TammoR/LogicalFactorisationMachines | 20 | Probabilistic boolean tensor decomposition | https://scholar.google.com/scholar?cluster=11732429422199282970&hl=en&as_sdt=0,5 | 3 | 2018 |
| Black-Box Variational Inference for Stochastic Differential Equations | 62 | icml | 10 | 0 | 2023-06-17 02:59:44.787000 | https://github.com/Tom-Ryder/VIforSDEs | 41 | Black-box variational inference for stochastic differential equations | https://scholar.google.com/scholar?cluster=771102464723698631&hl=en&as_sdt=0,33 | 6 | 2018 |
| Spurious Local Minima are Common in Two-Layer ReLU Neural Networks | 254 | icml | 3 | 0 | 2023-06-17 02:59:45.001000 | https://github.com/ItaySafran/OneLayerGDconvergence | 1 | Spurious local minima are common in two-layer relu neural networks | https://scholar.google.com/scholar?cluster=2602196713819367782&hl=en&as_sdt=0,18 | 0 | 2018 |
| TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service | 113 | icml | 6 | 0 | 2023-06-17 02:59:45.217000 | https://github.com/amartya18x/tapas | 16 | TAPAS: Tricks to accelerate (encrypted) prediction as a service | https://scholar.google.com/scholar?cluster=13862835131458070168&hl=en&as_sdt=0,5 | 8 | 2018 |
| Learning with Abandonment | 9 | icml | 0 | 0 | 2023-06-17 02:59:45.432000 | https://github.com/schmit/learning-abandonment | 1 | Learning with abandonment | https://scholar.google.com/scholar?cluster=5599696763306308098&hl=en&as_sdt=0,48 | 2 | 2018 |
| Not to Cry Wolf: Distantly Supervised Multitask Learning in Critical Care | 24 | icml | 3 | 1 | 2023-06-17 02:59:45.647000 | https://github.com/d909b/DSMT-Nets | 10 | Not to cry wolf: Distantly supervised multitask learning in critical care | https://scholar.google.com/scholar?cluster=13897538253893011334&hl=en&as_sdt=0,5 | 4 | 2018 |
| Overcoming Catastrophic Forgetting with Hard Attention to the Task | 699 | icml | 49 | 1 | 2023-06-17 02:59:45.862000 | https://github.com/joansj/hat | 174 | Overcoming catastrophic forgetting with hard attention to the task | https://scholar.google.com/scholar?cluster=11086231050694477723&hl=en&as_sdt=0,36 | 10 | 2018 |
| First Order Generative Adversarial Networks | 6 | icml | 12 | 1 | 2023-06-17 02:59:46.076000 | https://github.com/zalandoresearch/first_order_gan | 35 | First order generative adversarial networks | https://scholar.google.com/scholar?cluster=4229294235141796493&hl=en&as_sdt=0,5 | 7 | 2018 |
| Finding Influential Training Samples for Gradient Boosted Decision Trees | 38 | icml | 18 | 0 | 2023-06-17 02:59:46.294000 | https://github.com/bsharchilev/influence_boosting | 63 | Finding influential training samples for gradient boosted decision trees | https://scholar.google.com/scholar?cluster=16436473119957517587&hl=en&as_sdt=0,33 | 7 | 2018 |
| Solving Partial Assignment Problems using Random Clique Complexes | 1 | icml | 3 | 0 | 2023-06-17 02:59:46.513000 | https://github.com/charusharma1991/RandomCliqueComplexes_ICML2018 | 2 | Solving partial assignment problems using random clique complexes | https://scholar.google.com/scholar?cluster=8378028426453482804&hl=en&as_sdt=0,36 | 2 | 2018 |
| A Spectral Approach to Gradient Estimation for Implicit Distributions | 78 | icml | 9 | 2 | 2023-06-17 02:59:46.734000 | https://github.com/thjashin/spectral-stein-grad | 33 | A spectral approach to gradient estimation for implicit distributions | https://scholar.google.com/scholar?cluster=34252178022681098&hl=en&as_sdt=0,9 | 4 | 2018 |
| Accelerating Natural Gradient with Higher-Order Invariance | 13 | icml | 8 | 0 | 2023-06-17 02:59:46.955000 | https://github.com/ermongroup/higher_order_invariance | 30 | Accelerating natural gradient with higher-order invariance | https://scholar.google.com/scholar?cluster=17686115985744822983&hl=en&as_sdt=0,33 | 6 | 2018 |
| Exploiting the Potential of Standard Convolutional Autoencoders for Image Restoration by Evolutionary Search | 92 | icml | 21 | 3 | 2023-06-17 02:59:47.175000 | https://github.com/sg-nm/Evolutionary-Autoencoders | 67 | Exploiting the potential of standard convolutional autoencoders for image restoration by evolutionary search | https://scholar.google.com/scholar?cluster=4118394325034454915&hl=en&as_sdt=0,41 | 3 | 2018 |
| Scalable approximate Bayesian inference for particle tracking data | 12 | icml | 0 | 2 | 2023-06-17 02:59:47.390000 | https://github.com/SunRuoxi/Single_Particle_Tracking | 1 | Scalable approximate Bayesian inference for particle tracking data | https://scholar.google.com/scholar?cluster=8017063234741228178&hl=en&as_sdt=0,8 | 3 | 2018 |
| Learning the Reward Function for a Misspecified Model | 12 | icml | 1 | 0 | 2023-06-17 02:59:47.606000 | https://github.com/etalvitie/hdaggermc | 8 | Learning the reward function for a misspecified model | https://scholar.google.com/scholar?cluster=16036091820545871049&hl=en&as_sdt=0,5 | 1 | 2018 |
| Chi-square Generative Adversarial Network | 40 | icml | 0 | 3 | 2023-06-17 02:59:47.820000 | https://github.com/chenyang-tao/chi2gan | 6 | Chi-square generative adversarial network | https://scholar.google.com/scholar?cluster=3560140041128352974&hl=en&as_sdt=0,14 | 4 | 2018 |
| Lyapunov Functions for First-Order Methods: Tight Automated Convergence Guarantees | 47 | icml | 1 | 0 | 2023-06-17 02:59:48.036000 | https://github.com/QCGroup/quad-lyap-first-order | 5 | Lyapunov functions for first-order methods: Tight automated convergence guarantees | https://scholar.google.com/scholar?cluster=1395570422835062279&hl=en&as_sdt=0,5 | 3 | 2018 |
| Adversarial Regression with Multiple Learners | 33 | icml | 1 | 0 | 2023-06-17 02:59:48.259000 | https://github.com/marsplus/Adversarial-Regression-with-Multiple-Learners | 2 | Adversarial regression with multiple learners | https://scholar.google.com/scholar?cluster=11851981725937878010&hl=en&as_sdt=0,33 | 2 | 2018 |
| StrassenNets: Deep Learning with a Multiplication Budget | 30 | icml | 10 | 0 | 2023-06-17 02:59:48.473000 | https://github.com/mitscha/strassennets | 45 | StrassenNets: Deep learning with a multiplication budget | https://scholar.google.com/scholar?cluster=9065345888211174353&hl=en&as_sdt=0,44 | 4 | 2018 |
| PredRNN++: Towards A Resolution of the Deep-in-Time Dilemma in Spatiotemporal Predictive Learning | 350 | icml | 85 | 6 | 2023-06-17 02:59:48.687000 | https://github.com/Yunbo426/predrnn-pp | 221 | Predrnn++: Towards a resolution of the deep-in-time dilemma in spatiotemporal predictive learning | https://scholar.google.com/scholar?cluster=16975551372418150051&hl=en&as_sdt=0,5 | 10 | 2018 |
| Analyzing the Robustness of Nearest Neighbors to Adversarial Examples | 145 | icml | 1 | 0 | 2023-06-17 02:59:48.902000 | https://github.com/EricYizhenWang/robust_nn_icml | 6 | Analyzing the robustness of nearest neighbors to adversarial examples | https://scholar.google.com/scholar?cluster=15228068536645268692&hl=en&as_sdt=0,5 | 3 | 2018 |
| A Fast and Scalable Joint Estimator for Integrating Additional Knowledge in Learning Multiple Related Sparse Gaussian Graphical Models | 4 | icml | 0 | 1 | 2023-06-17 02:59:49.117000 | https://github.com/QData/JEEK | 1 | A fast and scalable joint estimator for integrating additional knowledge in learning multiple related sparse Gaussian graphical models | https://scholar.google.com/scholar?cluster=12183443188962650844&hl=en&as_sdt=0,6 | 4 | 2018 |
| Adversarial Distillation of Bayesian Neural Network Posteriors | 59 | icml | 2 | 1 | 2023-06-17 02:59:49.330000 | https://github.com/wangkua1/apd_public | 14 | Adversarial distillation of bayesian neural network posteriors | https://scholar.google.com/scholar?cluster=8595967760145130464&hl=en&as_sdt=0,47 | 6 | 2018 |
| Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions | 20 | icml | 2 | 1 | 2023-06-17 02:59:49.546000 | https://github.com/wendazhou/alocv-package | 6 | Approximate leave-one-out for fast parameter tuning in high dimensions | https://scholar.google.com/scholar?cluster=7517160253492394187&hl=en&as_sdt=0,7 | 4 | 2018 |
| Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples | 179 | icml | 19 | 0 | 2023-06-17 02:59:49.797000 | https://github.com/tech-srl/lstar_extraction | 63 | Extracting automata from recurrent neural networks using queries and counterexamples | https://scholar.google.com/scholar?cluster=3812692831904479239&hl=en&as_sdt=0,31 | 7 | 2018 |
| Towards Fast Computation of Certified Robustness for ReLU Networks | 641 | icml | 5 | 1 | 2023-06-17 02:59:50.011000 | https://github.com/huanzhang12/CertifiedReLURobustness | 29 | Towards fast computation of certified robustness for relu networks | https://scholar.google.com/scholar?cluster=13154362274812885800&hl=en&as_sdt=0,39 | 5 | 2018 |
| Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope | 1,370 | icml | 83 | 8 | 2023-06-17 02:59:50.226000 | https://github.com/locuslab/convex_adversarial | 357 | Provable defenses against adversarial examples via the convex outer adversarial polytope | https://scholar.google.com/scholar?cluster=2593701021867797885&hl=en&as_sdt=0,47 | 16 | 2018 |
| SQL-Rank: A Listwise Approach to Collaborative Ranking | 42 | icml | 8 | 0 | 2023-06-17 02:59:50.441000 | https://github.com/wuliwei9278/SQL-Rank | 16 | Sql-rank: A listwise approach to collaborative ranking | https://scholar.google.com/scholar?cluster=3011153619955791541&hl=en&as_sdt=0,10 | 3 | 2018 |
| Variance Regularized Counterfactual Risk Minimization via Variational Divergence Minimization | 15 | icml | 0 | 1 | 2023-06-17 02:59:50.655000 | https://github.com/hang-wu/VRCRM | 2 | Variance regularized counterfactual risk minimization via variational divergence minimization | https://scholar.google.com/scholar?cluster=16906275230514657049&hl=en&as_sdt=0,10 | 2 | 2018 |
| Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions | 122 | icml | 34 | 0 | 2023-06-17 02:59:50.870000 | https://github.com/Sandbox3aster/Deep-K-Means | 146 | Deep k-means: Re-training and parameter sharing with harder cluster assignments for compressing deep convolutions | https://scholar.google.com/scholar?cluster=5421215697510972919&hl=en&as_sdt=0,44 | 13 | 2018 |
| Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks | 293 | icml | 8 | 0 | 2023-06-17 02:59:51.084000 | https://github.com/brain-research/mean-field-cnns | 35 | Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks | https://scholar.google.com/scholar?cluster=4327553153293253435&hl=en&as_sdt=0,47 | 7 | 2018 |
| Learning Semantic Representations for Unsupervised Domain Adaptation | 455 | icml | 38 | 3 | 2023-06-17 02:59:51.299000 | https://github.com/Mid-Push/Moving-Semantic-Transfer-Network | 105 | Learning semantic representations for unsupervised domain adaptation | https://scholar.google.com/scholar?cluster=3795243851386744123&hl=en&as_sdt=0,47 | 5 | 2018 |
| A Semantic Loss Function for Deep Learning with Symbolic Knowledge | 362 | icml | 11 | 1 | 2023-06-17 02:59:51.513000 | https://github.com/UCLA-StarAI/Semantic-Loss | 52 | A semantic loss function for deep learning with symbolic knowledge | https://scholar.google.com/scholar?cluster=2687938736648965063&hl=en&as_sdt=0,44 | 11 | 2018 |
| Mean Field Multi-Agent Reinforcement Learning | 564 | icml | 92 | 20 | 2023-06-17 02:59:51.727000 | https://github.com/mlii/mfrl | 323 | Mean field multi-agent reinforcement learning | https://scholar.google.com/scholar?cluster=18365585657208114611&hl=en&as_sdt=0,23 | 10 | 2018 |
| Yes, but Did It Work?: Evaluating Variational Inference | 138 | icml | 2 | 1 | 2023-06-17 02:59:51.941000 | https://github.com/yao-yl/Evaluating-Variational-Inference | 12 | Yes, but did it work?: Evaluating variational inference | https://scholar.google.com/scholar?cluster=16612262779014542273&hl=en&as_sdt=0,31 | 3 | 2018 |
| Semi-Implicit Variational Inference | 122 | icml | 13 | 1 | 2023-06-17 02:59:52.156000 | https://github.com/mingzhang-yin/SIVI | 49 | Semi-implicit variational inference | https://scholar.google.com/scholar?cluster=952314383686625023&hl=en&as_sdt=0,5 | 5 | 2018 |
| GAIN: Missing Data Imputation using Generative Adversarial Nets | 784 | icml | 141 | 0 | 2023-06-17 02:59:52.371000 | https://github.com/jsyoon0823/GAIN | 307 | Gain: Missing data imputation using generative adversarial nets | https://scholar.google.com/scholar?cluster=6024113526841994005&hl=en&as_sdt=0,6 | 11 | 2018 |
| GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models | 753 | icml | 109 | 6 | 2023-06-17 02:59:52.586000 | https://github.com/snap-stanford/GraphRNN | 387 | Graphrnn: Generating realistic graphs with deep auto-regressive models | https://scholar.google.com/scholar?cluster=18334516615969196433&hl=en&as_sdt=0,5 | 60 | 2018 |
| Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization | 106 | icml | 3 | 1 | 2023-06-17 02:59:52.801000 | https://github.com/zhangjiong724/spectral-RNN | 12 | Stabilizing gradients for deep neural networks via efficient svd parameterization | https://scholar.google.com/scholar?cluster=10623363336533108811&hl=en&as_sdt=0,9 | 2 | 2018 |
| Learning Long Term Dependencies via Fourier Recurrent Units | 31 | icml | 8 | 0 | 2023-06-17 02:59:53.014000 | https://github.com/limbo018/FRU | 36 | Learning long term dependencies via fourier recurrent units | https://scholar.google.com/scholar?cluster=16150244378271641439&hl=en&as_sdt=0,5 | 7 | 2018 |
| Inter and Intra Topic Structure Learning with Word Embeddings | 16 | icml | 3 | 4 | 2023-06-17 02:59:53.228000 | https://github.com/ethanhezhao/WEDTM | 6 | Inter and intra topic structure learning with word embeddings | https://scholar.google.com/scholar?cluster=11048244315815532986&hl=en&as_sdt=0,5 | 4 | 2018 |
| Adversarially Regularized Autoencoders | 291 | icml | 93 | 19 | 2023-06-17 02:59:53.442000 | https://github.com/jakezhaojb/ARAE | 400 | Adversarially regularized autoencoders | https://scholar.google.com/scholar?cluster=5024716526871945774&hl=en&as_sdt=0,11 | 20 | 2018 |
| Dynamic Weights in Multi-Objective Deep Reinforcement Learning | 97 | icml | 16 | 0 | 2023-06-17 03:09:58.308000 | https://github.com/axelabels/DynMORL | 64 | Dynamic weights in multi-objective deep reinforcement learning | https://scholar.google.com/scholar?cluster=12040121315464946458&hl=en&as_sdt=0,39 | 2 | 2019 |
| MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing | 600 | icml | 34 | 4 | 2023-06-17 03:09:58.523000 | https://github.com/samihaija/mixhop | 113 | Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing | https://scholar.google.com/scholar?cluster=8927230189965016671&hl=en&as_sdt=0,5 | 7 | 2019 |
| Understanding the Impact of Entropy on Policy Optimization | 182 | icml | 14 | 4 | 2023-06-17 03:09:58.738000 | https://github.com/zafarali/emdp | 47 | Understanding the impact of entropy on policy optimization | https://scholar.google.com/scholar?cluster=8905478721868235472&hl=en&as_sdt=0,36 | 5 | 2019 |
| Fairwashing: the risk of rationalization | 118 | icml | 2 | 1 | 2023-06-17 03:09:58.954000 | https://github.com/aivodji/LaundryML | 15 | Fairwashing: the risk of rationalization | https://scholar.google.com/scholar?cluster=2523692918696533409&hl=en&as_sdt=0,23 | 2 | 2019 |
| Adaptive Stochastic Natural Gradient Method for One-Shot Neural Architecture Search | 74 | icml | 12 | 2 | 2023-06-17 03:09:59.200000 | https://github.com/shirakawas/ASNG-NAS | 86 | Adaptive stochastic natural gradient method for one-shot neural architecture search | https://scholar.google.com/scholar?cluster=8278729461791344602&hl=en&as_sdt=0,44 | 12 | 2019 |
| Graph Element Networks: adaptive, structured computation and memory | 73 | icml | 18 | 0 | 2023-06-17 03:09:59.417000 | https://github.com/FerranAlet/graph_element_networks | 54 | Graph element networks: adaptive, structured computation and memory | https://scholar.google.com/scholar?cluster=15635052566391015915&hl=en&as_sdt=0,47 | 4 | 2019 |
| Asynchronous Batch Bayesian Optimisation with Improved Local Penalisation | 43 | icml | 5 | 1 | 2023-06-17 03:09:59.631000 | https://github.com/a5a/asynchronous-BO | 9 | Asynchronous batch Bayesian optimisation with improved local penalisation | https://scholar.google.com/scholar?cluster=17891210137592442168&hl=en&as_sdt=0,11 | 2 | 2019 |
| Feature Grouping as a Stochastic Regularizer for High-Dimensional Structured Data | 9 | icml | 6 | 0 | 2023-06-17 03:09:59.848000 | https://github.com/sergulaydore/Feature-Grouping-Regularizer | 20 | Feature grouping as a stochastic regularizer for high-dimensional structured data | https://scholar.google.com/scholar?cluster=11613171711375782355&hl=en&as_sdt=0,47 | 3 | 2019 |
| Beyond the Chinese Restaurant and Pitman-Yor processes: Statistical Models with double power-law behavior | 11 | icml | 0 | 0 | 2023-06-17 03:10:00.065000 | https://github.com/OxCSML-BayesNP/doublepowerlaw | 0 | Beyond the Chinese Restaurant and Pitman-Yor processes: Statistical Models with double power-law behavior | https://scholar.google.com/scholar?cluster=7805425707346893329&hl=en&as_sdt=0,5 | 4 | 2019 |
| Scalable Fair Clustering | 174 | icml | 6 | 0 | 2023-06-17 03:10:00.279000 | https://github.com/talwagner/fair_clustering | 16 | Scalable fair clustering | https://scholar.google.com/scholar?cluster=16665021693225941817&hl=en&as_sdt=0,14 | 2 | 2019 |
| Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs | 18 | icml | 4 | 1 | 2023-06-17 03:10:00.494000 | https://github.com/yogeshbalaji/EntropicGANs_meet_VAEs | 8 | Entropic gans meet vaes: A statistical approach to compute sample likelihoods in gans | https://scholar.google.com/scholar?cluster=4502964466526434508&hl=en&as_sdt=0,10 | 2 | 2019 |
| Provable Guarantees for Gradient-Based Meta-Learning | 136 | icml | 2 | 0 | 2023-06-17 03:10:00.712000 | https://github.com/mkhodak/FMRL | 3 | Provable guarantees for gradient-based meta-learning | https://scholar.google.com/scholar?cluster=18333296959440727243&hl=en&as_sdt=0,33 | 3 | 2019 |
| Learning to Route in Similarity Graphs | 23 | icml | 15 | 3 | 2023-06-17 03:10:00.937000 | https://github.com/dbaranchuk/learning-to-route | 50 | Learning to route in similarity graphs | https://scholar.google.com/scholar?cluster=381431972230740194&hl=en&as_sdt=0,14 | 10 | 2019 |
| Noise2Self: Blind Denoising by Self-Supervision | 439 | icml | 67 | 5 | 2023-06-17 03:10:01.181000 | https://github.com/czbiohub/noise2self | 292 | Noise2self: Blind denoising by self-supervision | https://scholar.google.com/scholar?cluster=16484478987296907806&hl=en&as_sdt=0,43 | 16 | 2019 |
| Efficient optimization of loops and limits with randomized telescoping sums | 22 | icml | 4 | 1 | 2023-06-17 03:10:01.396000 | https://github.com/PrincetonLIPS/randomized_telescopes | 27 | Efficient optimization of loops and limits with randomized telescoping sums | https://scholar.google.com/scholar?cluster=3412668840791342029&hl=en&as_sdt=0,33 | 5 | 2019 |
| Greedy Layerwise Learning Can Scale To ImageNet | 136 | icml | 11 | 0 | 2023-06-17 03:10:01.612000 | https://github.com/eugenium/layerCNN | 17 | Greedy layerwise learning can scale to imagenet | https://scholar.google.com/scholar?cluster=17442726017389288785&hl=en&as_sdt=0,5 | 4 | 2019 |
| Optimal Kronecker-Sum Approximation of Real Time Recurrent Learning | 21 | icml | 1 | 0 | 2023-06-17 03:10:01.826000 | https://github.com/marcelomatheusgauy/optimal_kronecker_approximation | 5 | Optimal kronecker-sum approximation of real time recurrent learning | https://scholar.google.com/scholar?cluster=6902147836625554260&hl=en&as_sdt=0,50 | 3 | 2019 |
| Analyzing Federated Learning through an Adversarial Lens | 750 | icml | 34 | 4 | 2023-06-17 03:10:02.041000 | https://github.com/inspire-group/ModelPoisoning | 133 | Analyzing federated learning through an adversarial lens | https://scholar.google.com/scholar?cluster=16839948122426603319&hl=en&as_sdt=0,5 | 6 | 2019 |
| A Kernel Perspective for Regularizing Deep Neural Networks | 67 | icml | 6 | 0 | 2023-06-17 03:10:02.256000 | https://github.com/albietz/kernel_reg | 22 | A kernel perspective for regularizing deep neural networks | https://scholar.google.com/scholar?cluster=17149885341490741277&hl=en&as_sdt=0,5 | 3 | 2019 |
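Each record pairs one ICML paper with its Google Scholar metadata and GitHub repository statistics, so the dump can be queried directly once parsed. A minimal sketch in pure Python, using three records transcribed from the table above (a reduced set of columns is kept here for brevity; nothing beyond the table's own values is assumed):

```python
# Three records transcribed from the table; the full dump has twelve columns per record.
records = [
    {"title": "RLlib: Abstractions for Distributed Reinforcement Learning",
     "citations_google_scholar": 642, "conference": "icml", "forks": 4893,
     "issues": 2935, "repo_url": "https://github.com/ray-project/ray",
     "stars": 26195, "watchers": 450, "year": 2018},
    {"title": "QMIX: Monotonic Value Function Factorisation for "
              "Deep Multi-Agent Reinforcement Learning",
     "citations_google_scholar": 1465, "conference": "icml", "forks": 360,
     "issues": 54, "repo_url": "https://github.com/oxwhirl/pymarl",
     "stars": 1474, "watchers": 32, "year": 2018},
    {"title": "Noise2Self: Blind Denoising by Self-Supervision",
     "citations_google_scholar": 439, "conference": "icml", "forks": 67,
     "issues": 5, "repo_url": "https://github.com/czbiohub/noise2self",
     "stars": 292, "watchers": 16, "year": 2019},
]

# Example query: order records by GitHub stars, descending.
by_stars = sorted(records, key=lambda r: r["stars"], reverse=True)
print(by_stars[0]["repo_url"])  # → https://github.com/ray-project/ray
```

The same pattern extends to any column in the schema, e.g. filtering on `year` or ranking by `citations_google_scholar`.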