Each row pairs a conference paper (all ICML 2020 in the rows below) with its public GitHub repository and its Google Scholar record. Columns and observed value ranges (a loading sketch follows this list):

- `title` (string, 8–155 characters)
- `citations_google_scholar` (int64, 0–28.9k)
- `conference` (string, 5 classes; `icml` here)
- `forks` (int64, 0–46.3k)
- `issues` (int64, 0–12.2k)
- `lastModified` (string, 19–26 characters)
- `repo_url` (string, 26–130 characters)
- `stars` (int64, 0–75.9k)
- `title_google_scholar` (string, 8–155 characters)
- `url_google_scholar` (string, 75–206 characters)
- `watchers` (int64, 0–2.77k)
- `year` (int64, 2.02k–2.02k, i.e. 2020)
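To work with these rows programmatically, a minimal loading sketch with pandas is shown below. It assumes the table has been exported to a CSV file with exactly the columns listed above; the filename `icml2020_repos.csv` is hypothetical, and `thousands=","` handles the comma-formatted counts (e.g. `1,440` stars).

```python
import pandas as pd

# A minimal loading sketch. Assumption: the table below has been exported to a
# CSV file named "icml2020_repos.csv" (hypothetical filename) with the columns
# listed in the schema above.
dtypes = {
    "title": "string",
    "citations_google_scholar": "int64",
    "conference": "category",      # 5 classes in the full dataset; "icml" in these rows
    "forks": "int64",
    "issues": "int64",
    "repo_url": "string",
    "stars": "int64",
    "title_google_scholar": "string",
    "url_google_scholar": "string",
    "watchers": "int64",
    "year": "int64",
}

df = pd.read_csv(
    "icml2020_repos.csv",
    dtype=dtypes,
    thousands=",",                 # star/fork counts are rendered with commas, e.g. "1,440"
    parse_dates=["lastModified"],  # e.g. "2023-06-17 03:56:59.173000"
)
print(df.dtypes)
print(len(df), "rows")
```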
| title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
|---|---|---|---|---|---|---|---|---|---|---|---|
| NGBoost: Natural Gradient Boosting for Probabilistic Prediction | 197 | icml | 207 | 47 | 2023-06-17 03:56:59.173000 | https://github.com/stanfordmlgroup/ngboost | 1,440 | Ngboost: Natural gradient boosting for probabilistic prediction | https://scholar.google.com/scholar?cluster=4894543059596757711&hl=en&as_sdt=0,33 | 45 | 2020 |
| Familywise Error Rate Control by Interactive Unmasking | 9 | icml | 0 | 0 | 2023-06-17 03:56:59.375000 | https://github.com/duanby/i-FWER | 0 | Familywise error rate control by interactive unmasking | https://scholar.google.com/scholar?cluster=4720846503563749113&hl=en&as_sdt=0,41 | 1 | 2020 |
| On Contrastive Learning for Likelihood-free Inference | 82 | icml | 10 | 0 | 2023-06-17 03:56:59.577000 | https://github.com/conormdurkan/lfi | 36 | On contrastive learning for likelihood-free inference | https://scholar.google.com/scholar?cluster=2331371123661181745&hl=en&as_sdt=0,10 | 4 | 2020 |
| Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors | 155 | icml | 79 | 73 | 2023-06-17 03:56:59.782000 | https://github.com/google/edward2 | 644 | Efficient and scalable bayesian neural nets with rank-1 factors | https://scholar.google.com/scholar?cluster=14999664725860521004&hl=en&as_sdt=0,5 | 20 | 2020 |
| Self-Concordant Analysis of Frank-Wolfe Algorithms | 19 | icml | 1 | 0 | 2023-06-17 03:56:59.984000 | https://github.com/kamil-safin/SCFW | 3 | Self-concordant analysis of Frank-Wolfe algorithms | https://scholar.google.com/scholar?cluster=10274753710668333699&hl=en&as_sdt=0,5 | 2 | 2020 |
| Decision Trees for Decision-Making under the Predict-then-Optimize Framework | 88 | icml | 16 | 0 | 2023-06-17 03:57:00.187000 | https://github.com/rtm2130/SPOTree | 21 | Decision trees for decision-making under the predict-then-optimize framework | https://scholar.google.com/scholar?cluster=2000494760504517215&hl=en&as_sdt=0,5 | 2 | 2020 |
| Identifying Statistical Bias in Dataset Replication | 47 | icml | 5 | 0 | 2023-06-17 03:57:00.390000 | https://github.com/MadryLab/dataset-replication-analysis | 25 | Identifying statistical bias in dataset replication | https://scholar.google.com/scholar?cluster=16322569355368565071&hl=en&as_sdt=0,5 | 9 | 2020 |
| Latent Bernoulli Autoencoder | 5 | icml | 2 | 2 | 2023-06-17 03:57:00.591000 | https://github.com/ok1zjf/lbae | 18 | Latent bernoulli autoencoder | https://scholar.google.com/scholar?cluster=8997104581865575542&hl=en&as_sdt=0,5 | 4 | 2020 |
| Growing Action Spaces | 26 | icml | 128 | 12 | 2023-06-17 03:57:00.795000 | https://github.com/TorchCraft/TorchCraftAI | 640 | Growing action spaces | https://scholar.google.com/scholar?cluster=2822509827640565136&hl=en&as_sdt=0,5 | 49 | 2020 |
| Why Are Learned Indexes So Effective? | 25 | icml | 4 | 0 | 2023-06-17 03:57:00.997000 | https://github.com/gvinciguerra/Learned-indexes-effectiveness | 15 | Why are learned indexes so effective? | https://scholar.google.com/scholar?cluster=10615073257658129787&hl=en&as_sdt=0,33 | 4 | 2020 |
| Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts? | 125 | icml | 34 | 10 | 2023-06-17 03:57:01.200000 | https://github.com/OATML/oatomobile | 176 | Can autonomous vehicles identify, recover from, and adapt to distribution shifts? | https://scholar.google.com/scholar?cluster=12116616826636126634&hl=en&as_sdt=0,5 | 12 | 2020 |
| Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data | 199 | icml | 28 | 2 | 2023-06-17 03:57:01.402000 | https://github.com/mfinzi/LieConv | 239 | Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data | https://scholar.google.com/scholar?cluster=5464981001229463744&hl=en&as_sdt=0,5 | 10 | 2020 |
| Information Particle Filter Tree: An Online Algorithm for POMDPs with Belief-Based Rewards on Continuous Domains | 24 | icml | 4 | 2 | 2023-06-17 03:57:01.604000 | https://github.com/johannes-fischer/icml2020_ipft | 9 | Information particle filter tree: An online algorithm for pomdps with belief-based rewards on continuous domains | https://scholar.google.com/scholar?cluster=12906174048753061788&hl=en&as_sdt=0,21 | 2 | 2020 |
| p-Norm Flow Diffusion for Local Graph Clustering | 12 | icml | 1 | 0 | 2023-06-17 03:57:01.806000 | https://github.com/s-h-yang/pNormFlowDiffusion | 1 | p-Norm flow diffusion for local graph clustering | https://scholar.google.com/scholar?cluster=13045214522176891757&hl=en&as_sdt=0,44 | 1 | 2020 |
| Stochastic Latent Residual Video Prediction | 110 | icml | 16 | 0 | 2023-06-17 03:57:02.010000 | https://github.com/edouardelasalles/srvp | 75 | Stochastic latent residual video prediction | https://scholar.google.com/scholar?cluster=13364014516718772272&hl=en&as_sdt=0,34 | 3 | 2020 |
| Leveraging Frequency Analysis for Deep Fake Image Recognition | 247 | icml | 21 | 9 | 2023-06-17 03:57:02.213000 | https://github.com/RUB-SysSec/GANDCTAnalysis | 141 | Leveraging frequency analysis for deep fake image recognition | https://scholar.google.com/scholar?cluster=15424504685179897985&hl=en&as_sdt=0,33 | 8 | 2020 |
| No-Regret and Incentive-Compatible Online Learning | 9 | icml | 0 | 0 | 2023-06-17 03:57:02.422000 | https://github.com/charapod/noregr-and-ic | 3 | No-regret and incentive-compatible online learning | https://scholar.google.com/scholar?cluster=10101414388050703329&hl=en&as_sdt=0,3 | 3 | 2020 |
| Fast and Three-rious: Speeding Up Weak Supervision with Triplet Methods | 79 | icml | 21 | 2 | 2023-06-17 03:57:02.630000 | https://github.com/HazyResearch/flyingsquid | 302 | Fast and three-rious: Speeding up weak supervision with triplet methods | https://scholar.google.com/scholar?cluster=13381739478195543351&hl=en&as_sdt=0,21 | 26 | 2020 |
| AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks | 78 | icml | 19 | 0 | 2023-06-17 03:57:02.832000 | https://github.com/TAMU-VITA/AGD | 101 | Autogan-distiller: Searching to compress generative adversarial networks | https://scholar.google.com/scholar?cluster=1452214065033971023&hl=en&as_sdt=0,5 | 16 | 2020 |
| DessiLBI: Exploring Structural Sparsity of Deep Networks via Differential Inclusion Paths | 4 | icml | 5 | 0 | 2023-06-17 03:57:03.034000 | https://github.com/DessiLBI2020/DessiLBI | 35 | Dessilbi: Exploring structural sparsity of deep networks via differential inclusion paths | https://scholar.google.com/scholar?cluster=10194996533073442340&hl=en&as_sdt=0,5 | 1 | 2020 |
| Characterizing Distribution Equivalence and Structure Learning for Cyclic and Acyclic Directed Graphs | 14 | icml | 0 | 0 | 2023-06-17 03:57:03.237000 | https://github.com/syanga/dglearn | 6 | Characterizing distribution equivalence and structure learning for cyclic and acyclic directed graphs | https://scholar.google.com/scholar?cluster=4341241833833873634&hl=en&as_sdt=0,10 | 2 | 2020 |
| Gradient Temporal-Difference Learning with Regularized Corrections | 35 | icml | 9 | 1 | 2023-06-17 03:57:03.439000 | https://github.com/rlai-lab/Regularized-GradientTD | 32 | Gradient temporal-difference learning with regularized corrections | https://scholar.google.com/scholar?cluster=8254675597355502028&hl=en&as_sdt=0,44 | 10 | 2020 |
| Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks | 62 | icml | 5 | 0 | 2023-06-17 03:57:03.641000 | https://github.com/goldblum/FeatureClustering | 12 | Unraveling meta-learning: Understanding feature representations for few-shot tasks | https://scholar.google.com/scholar?cluster=17583362370632834127&hl=en&as_sdt=0,5 | 2 | 2020 |
| Towards a General Theory of Infinite-Width Limits of Neural Classifiers | 9 | icml | 0 | 0 | 2023-06-17 03:57:03.843000 | https://github.com/deepmipt/infinite-width_nets | 3 | Towards a general theory of infinite-width limits of neural classifiers | https://scholar.google.com/scholar?cluster=6182164811378755380&hl=en&as_sdt=0,5 | 4 | 2020 |
| Differentially Private Set Union | 26 | icml | 1 | 0 | 2023-06-17 03:57:04.072000 | https://github.com/heyyjudes/differentially-private-set-union | 6 | Differentially private set union | https://scholar.google.com/scholar?cluster=2482149851439545745&hl=en&as_sdt=0,41 | 3 | 2020 |
| The continuous categorical: a novel simplex-valued exponential family | 15 | icml | 7 | 0 | 2023-06-17 03:57:04.273000 | https://github.com/cunningham-lab/cb_and_cc | 31 | The continuous categorical: a novel simplex-valued exponential family | https://scholar.google.com/scholar?cluster=17174456964236691188&hl=en&as_sdt=0,44 | 4 | 2020 |
| Automatic Reparameterisation of Probabilistic Programs | 21 | icml | 3 | 1 | 2023-06-17 03:57:04.475000 | https://github.com/mgorinova/autoreparam | 33 | Automatic reparameterisation of probabilistic programs | https://scholar.google.com/scholar?cluster=1767777764184099722&hl=en&as_sdt=0,5 | 6 | 2020 |
| Learning to Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning | 82 | icml | 10 | 1 | 2023-06-17 03:57:04.677000 | https://github.com/99andBeyond/Apollo1060 | 62 | Learning to navigate the synthetically accessible chemical space using reinforcement learning | https://scholar.google.com/scholar?cluster=12254018210357831699&hl=en&as_sdt=0,5 | 7 | 2020 |
| Ordinal Non-negative Matrix Factorization for Recommendation | 14 | icml | 5 | 0 | 2023-06-17 03:57:04.879000 | https://github.com/Oligou/OrdNMF | 9 | Ordinal non-negative matrix factorization for recommendation | https://scholar.google.com/scholar?cluster=3251477194660112209&hl=en&as_sdt=0,33 | 3 | 2020 |
| PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination | 33 | icml | 13 | 3 | 2023-06-17 03:57:05.080000 | https://github.com/IBM/PoWER-BERT | 53 | PoWER-BERT: Accelerating BERT inference via progressive word-vector elimination | https://scholar.google.com/scholar?cluster=306627104113108298&hl=en&as_sdt=0,33 | 7 | 2020 |
| PackIt: A Virtual Environment for Geometric Planning | 7 | icml | 4 | 4 | 2023-06-17 03:57:05.282000 | https://github.com/princeton-vl/PackIt | 45 | Packit: A virtual environment for geometric planning | https://scholar.google.com/scholar?cluster=5535871935242220151&hl=en&as_sdt=0,33 | 6 | 2020 |
| DROCC: Deep Robust One-Class Classification | 121 | icml | 369 | 28 | 2023-06-17 03:57:05.484000 | https://github.com/Microsoft/EdgeML | 1,453 | DROCC: Deep robust one-class classification | https://scholar.google.com/scholar?cluster=3986505951359290998&hl=en&as_sdt=0,29 | 87 | 2020 |
| Learning the Stein Discrepancy for Training and Evaluating Energy-Based Models without Sampling | 63 | icml | 5 | 1 | 2023-06-17 03:57:05.686000 | https://github.com/wgrathwohl/LSD | 43 | Learning the stein discrepancy for training and evaluating energy-based models without sampling | https://scholar.google.com/scholar?cluster=12824935271809632059&hl=en&as_sdt=0,5 | 2 | 2020 |
| On the Iteration Complexity of Hypergradient Computation | 116 | icml | 17 | 1 | 2023-06-17 03:57:05.887000 | https://github.com/prolearner/hypertorch | 112 | On the iteration complexity of hypergradient computation | https://scholar.google.com/scholar?cluster=3451320004072265708&hl=en&as_sdt=0,36 | 6 | 2020 |
| Robust Learning with the Hilbert-Schmidt Independence Criterion | 33 | icml | 6 | 0 | 2023-06-17 03:57:06.088000 | https://github.com/danielgreenfeld3/XIC | 34 | Robust learning with the hilbert-schmidt independence criterion | https://scholar.google.com/scholar?cluster=13054295788524587578&hl=en&as_sdt=0,29 | 2 | 2020 |
| Implicit Geometric Regularization for Learning Shapes | 408 | icml | 35 | 5 | 2023-06-17 03:57:06.291000 | https://github.com/amosgropp/IGR | 331 | Implicit geometric regularization for learning shapes | https://scholar.google.com/scholar?cluster=18082545558132742834&hl=en&as_sdt=0,33 | 7 | 2020 |
| Recurrent Hierarchical Topic-Guided RNN for Language Generation | 22 | icml | 2 | 5 | 2023-06-17 03:57:06.493000 | https://github.com/Dan123dan/rGBN-RNN | 6 | Recurrent hierarchical topic-guided RNN for language generation | https://scholar.google.com/scholar?cluster=11674844584780363467&hl=en&as_sdt=0,4 | 1 | 2020 |
| Breaking the Curse of Space Explosion: Towards Efficient NAS with Curriculum Search | 52 | icml | 7 | 1 | 2023-06-17 03:57:06.695000 | https://github.com/guoyongcs/CNAS | 17 | Breaking the curse of space explosion: Towards efficient nas with curriculum search | https://scholar.google.com/scholar?cluster=5489996847363496431&hl=en&as_sdt=0,10 | 4 | 2020 |
| Certified Data Removal from Machine Learning Models | 163 | icml | 8 | 0 | 2023-06-17 03:57:06.897000 | https://github.com/facebookresearch/certified-removal | 39 | Certified data removal from machine learning models | https://scholar.google.com/scholar?cluster=5421394926787368463&hl=en&as_sdt=0,33 | 8 | 2020 |
| Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks | 28 | icml | 0 | 0 | 2023-06-17 03:57:07.099000 | https://github.com/ZhishuaiGuo/DistributedAUC | 2 | Communication-efficient distributed stochastic auc maximization with deep neural networks | https://scholar.google.com/scholar?cluster=992924762353556583&hl=en&as_sdt=0,48 | 1 | 2020 |
| Neural Topic Modeling with Continual Lifelong Learning | 29 | icml | 3 | 1 | 2023-06-17 03:57:07.301000 | https://github.com/pgcool/Lifelong-Neural-Topic-Modeling | 23 | Neural topic modeling with continual lifelong learning | https://scholar.google.com/scholar?cluster=5694355012238035603&hl=en&as_sdt=0,5 | 2 | 2020 |
| Optimal approximation for unconstrained non-submodular minimization | 22 | icml | 0 | 0 | 2023-06-17 03:57:07.503000 | https://github.com/marwash25/non-sub-min | 0 | Optimal approximation for unconstrained non-submodular minimization | https://scholar.google.com/scholar?cluster=16541151635330995478&hl=en&as_sdt=0,31 | 2 | 2020 |
| Polynomial Tensor Sketch for Element-wise Function of Low-Rank Matrix | 7 | icml | 0 | 1 | 2023-06-17 03:57:07.705000 | https://github.com/insuhan/polytensorsketch | 2 | Polynomial tensor sketch for element-wise function of low-rank matrix | https://scholar.google.com/scholar?cluster=15937632034353153696&hl=en&as_sdt=0,48 | 1 | 2020 |
| Improving generalization by controlling label-noise information in neural network weights | 39 | icml | 8 | 0 | 2023-06-17 03:57:07.906000 | https://github.com/hrayrhar/limit-label-memorization | 37 | Improving generalization by controlling label-noise information in neural network weights | https://scholar.google.com/scholar?cluster=8186840532226802329&hl=en&as_sdt=0,11 | 5 | 2020 |
| Contrastive Multi-View Representation Learning on Graphs | 683 | icml | 46 | 11 | 2023-06-17 03:57:08.108000 | https://github.com/kavehhassani/mvgrl | 225 | Contrastive multi-view representation learning on graphs | https://scholar.google.com/scholar?cluster=11131425815493661687&hl=en&as_sdt=0,47 | 9 | 2020 |
| Nested Subspace Arrangement for Representation of Relational Data | 3 | icml | 0 | 0 | 2023-06-17 03:57:08.311000 | https://github.com/KyushuUniversityMathematics/DANCAR | 2 | Nested subspace arrangement for representation of relational data | https://scholar.google.com/scholar?cluster=5195931229921461485&hl=en&as_sdt=0,5 | 4 | 2020 |
| The Tree Ensemble Layer: Differentiability meets Conditional Computation | 48 | icml | 7,322 | 1,026 | 2023-06-17 03:57:08.513000 | https://github.com/google-research/google-research | 29,791 | The tree ensemble layer: Differentiability meets conditional computation | https://scholar.google.com/scholar?cluster=4646704514802719017&hl=en&as_sdt=0,22 | 727 | 2020 |
| Compressive sensing with un-trained neural networks: Gradient descent finds a smooth approximation | 59 | icml | 2 | 0 | 2023-06-17 03:57:08.715000 | https://github.com/MLI-lab/cs_deep_decoder | 15 | Compressive sensing with un-trained neural networks: Gradient descent finds a smooth approximation | https://scholar.google.com/scholar?cluster=17885213602830705754&hl=en&as_sdt=0,5 | 2 | 2020 |
| Hierarchically Decoupled Imitation For Morphological Transfer | 21 | icml | 5 | 3 | 2023-06-17 03:57:08.917000 | https://github.com/jhejna/hierarchical_morphology_transfer | 16 | Hierarchically decoupled imitation for morphological transfer | https://scholar.google.com/scholar?cluster=7821488667980467803&hl=en&as_sdt=0,5 | 3 | 2020 |
| Towards Non-Parametric Drift Detection via Dynamic Adapting Window Independence Drift Detection (DAWIDD) | 23 | icml | 2 | 1 | 2023-06-17 03:57:09.128000 | https://github.com/FabianHinder/DAWIDD | 7 | Towards non-parametric drift detection via dynamic adapting window independence drift detection (DAWIDD) | https://scholar.google.com/scholar?cluster=4763047039028062564&hl=en&as_sdt=0,7 | 2 | 2020 |
| Topologically Densified Distributions | 13 | icml | 1 | 0 | 2023-06-17 03:57:09.331000 | https://github.com/c-hofer/topologically_densified_distributions | 2 | Topologically densified distributions | https://scholar.google.com/scholar?cluster=18143439633922765637&hl=en&as_sdt=0,10 | 2 | 2020 |
| Graph Filtration Learning | 63 | icml | 8 | 0 | 2023-06-17 03:57:09.534000 | https://github.com/c-hofer/graph_filtration_learning | 14 | Graph filtration learning | https://scholar.google.com/scholar?cluster=16680082495324217816&hl=en&as_sdt=0,44 | 3 | 2020 |
| Set Functions for Time Series | 73 | icml | 26 | 2 | 2023-06-17 03:57:09.736000 | https://github.com/BorgwardtLab/Set_Functions_for_Time_Series | 104 | Set functions for time series | https://scholar.google.com/scholar?cluster=11653676919176974096&hl=en&as_sdt=0,45 | 7 | 2020 |
| Lifted Disjoint Paths with Application in Multiple Object Tracking | 112 | icml | 8 | 2 | 2023-06-17 03:57:09.938000 | https://github.com/AndreaHor/LifT_Solver | 51 | Lifted disjoint paths with application in multiple object tracking | https://scholar.google.com/scholar?cluster=7450982728927056524&hl=en&as_sdt=0,5 | 7 | 2020 |
| Infinite attention: NNGP and NTK for deep attention networks | 71 | icml | 227 | 58 | 2023-06-17 03:57:10.141000 | https://github.com/google/neural-tangents | 2,024 | Infinite attention: NNGP and NTK for deep attention networks | https://scholar.google.com/scholar?cluster=8612471018033907356&hl=en&as_sdt=0,5 | 64 | 2020 |
| The Non-IID Data Quagmire of Decentralized Machine Learning | 373 | icml | 8 | 0 | 2023-06-17 03:57:10.342000 | https://github.com/kevinhsieh/non_iid_dml | 26 | The non-iid data quagmire of decentralized machine learning | https://scholar.google.com/scholar?cluster=6995419568802932569&hl=en&as_sdt=0,5 | 1 | 2020 |
| XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalisation | 633 | icml | 108 | 27 | 2023-06-17 03:57:10.544000 | https://github.com/google-research/xtreme | 583 | Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation | https://scholar.google.com/scholar?cluster=3128313942238375094&hl=en&as_sdt=0,10 | 21 | 2020 |
| Momentum-Based Policy Gradient Methods | 29 | icml | 2 | 0 | 2023-06-17 03:57:10.746000 | https://github.com/gaosh/MBPG | 6 | Momentum-based policy gradient methods | https://scholar.google.com/scholar?cluster=12318216464045418856&hl=en&as_sdt=0,47 | 2 | 2020 |
| One Policy to Control Them All: Shared Modular Policies for Agent-Agnostic Control | 97 | icml | 28 | 12 | 2023-06-17 03:57:10.948000 | https://github.com/huangwl18/modular-rl | 194 | One policy to control them all: Shared modular policies for agent-agnostic control | https://scholar.google.com/scholar?cluster=14540777310694580207&hl=en&as_sdt=0,33 | 11 | 2020 |
| Generating Programmatic Referring Expressions via Program Synthesis | 8 | icml | 3 | 0 | 2023-06-17 03:57:11.150000 | https://github.com/moqingyan/object_reference_synthesis | 5 | Generating programmatic referring expressions via program synthesis | https://scholar.google.com/scholar?cluster=6959334433424014581&hl=en&as_sdt=0,33 | 2 | 2020 |
| Accelerated Stochastic Gradient-free and Projection-free Methods | 16 | icml | 0 | 0 | 2023-06-17 03:57:11.352000 | https://github.com/TLMichael/Acc-SZOFW | 3 | Accelerated stochastic gradient-free and projection-free methods | https://scholar.google.com/scholar?cluster=9296344013020465952&hl=en&as_sdt=0,6 | 2 | 2020 |
| Multigrid Neural Memory | 5 | icml | 1 | 0 | 2023-06-17 03:57:11.554000 | https://github.com/trihuynh88/multigrid_mem | 7 | Multigrid neural memory | https://scholar.google.com/scholar?cluster=15687545930604068210&hl=en&as_sdt=0,34 | 4 | 2020 |
| Meta-Learning with Shared Amortized Variational Inference | 15 | icml | 0 | 2 | 2023-06-17 03:57:11.756000 | https://github.com/katafeya/samovar | 3 | Meta-learning with shared amortized variational inference | https://scholar.google.com/scholar?cluster=5105160592289559562&hl=en&as_sdt=0,21 | 6 | 2020 |
| Do We Need Zero Training Loss After Achieving Zero Training Error? | 90 | icml | 6 | 0 | 2023-06-17 03:57:11.959000 | https://github.com/takashiishida/flooding | 82 | Do we need zero training loss after achieving zero training error? | https://scholar.google.com/scholar?cluster=6131533147705685027&hl=en&as_sdt=0,33 | 5 | 2020 |
| Semi-Supervised Learning with Normalizing Flows | 77 | icml | 12 | 1 | 2023-06-17 03:57:12.160000 | https://github.com/izmailovpavel/flowgmm | 129 | Semi-supervised learning with normalizing flows | https://scholar.google.com/scholar?cluster=9421035999149534110&hl=en&as_sdt=0,5 | 10 | 2020 |
| Source Separation with Deep Generative Priors | 28 | icml | 5 | 3 | 2023-06-17 03:57:12.362000 | https://github.com/jthickstun/basis-separation | 33 | Source separation with deep generative priors | https://scholar.google.com/scholar?cluster=17132907753659598254&hl=en&as_sdt=0,5 | 7 | 2020 |
| T-GD: Transferable GAN-generated Images Detection Framework | 31 | icml | 4 | 0 | 2023-06-17 03:57:12.564000 | https://github.com/cutz-j/T-GD | 16 | T-gd: Transferable gan-generated images detection framework | https://scholar.google.com/scholar?cluster=17021668985815827092&hl=en&as_sdt=0,5 | 2 | 2020 |
| Information-Theoretic Local Minima Characterization and Regularization | 9 | icml | 0 | 0 | 2023-06-17 03:57:12.766000 | https://github.com/SeanJia/InfoMCR | 5 | Information-theoretic local minima characterization and regularization | https://scholar.google.com/scholar?cluster=16854698489852164998&hl=en&as_sdt=0,11 | 2 | 2020 |
| Implicit Class-Conditioned Domain Alignment for Unsupervised Domain Adaptation | 89 | icml | 9 | 3 | 2023-06-17 03:57:12.968000 | https://github.com/xiangdal/implicit_alignment | 87 | Implicit class-conditioned domain alignment for unsupervised domain adaptation | https://scholar.google.com/scholar?cluster=17175487218857833755&hl=en&as_sdt=0,5 | 6 | 2020 |
| Multi-Objective Molecule Generation using Interpretable Substructures | 124 | icml | 42 | 8 | 2023-06-17 03:57:13.171000 | https://github.com/wengong-jin/multiobj-rationale | 119 | Multi-objective molecule generation using interpretable substructures | https://scholar.google.com/scholar?cluster=7786133206388752764&hl=en&as_sdt=0,44 | 3 | 2020 |
| On Relativistic f-Divergences | 18 | icml | 17 | 0 | 2023-06-17 03:57:13.373000 | https://github.com/AlexiaJM/relativistic-f-divergences | 85 | On relativistic f-divergences | https://scholar.google.com/scholar?cluster=17068494214467307697&hl=en&as_sdt=0,5 | 8 | 2020 |
| Being Bayesian about Categorical Probability | 44 | icml | 4 | 2 | 2023-06-17 03:57:13.575000 | https://github.com/tjoo512/belief-matching-framework | 33 | Being bayesian about categorical probability | https://scholar.google.com/scholar?cluster=6426225307727814668&hl=en&as_sdt=0,5 | 5 | 2020 |
| Evaluating the Performance of Reinforcement Learning Algorithms | 46 | icml | 0 | 2 | 2023-06-17 03:57:13.777000 | https://github.com/ScottJordan/EvaluationOfRLAlgs | 26 | Evaluating the performance of reinforcement learning algorithms | https://scholar.google.com/scholar?cluster=4785496300749883115&hl=en&as_sdt=0,5 | 2 | 2020 |
| Stochastic Differential Equations with Variational Wishart Diffusions | 7 | icml | 1 | 0 | 2023-06-17 03:57:13.979000 | https://github.com/JorgensenMart/Wishart-priored-SDE | 8 | Stochastic differential equations with variational wishart diffusions | https://scholar.google.com/scholar?cluster=8080141843979887658&hl=en&as_sdt=0,5 | 1 | 2020 |
| Partial Trace Regression and Low-Rank Kraus Decomposition | 6 | icml | 1 | 0 | 2023-06-17 03:57:14.182000 | https://github.com/Stef-hub/partial_trace_kraus | 0 | Partial trace regression and low-rank kraus decomposition | https://scholar.google.com/scholar?cluster=794742801432087247&hl=en&as_sdt=0,47 | 3 | 2020 |
| Operation-Aware Soft Channel Pruning using Differentiable Masks | 96 | icml | 0 | 0 | 2023-06-17 03:57:14.384000 | https://github.com/kminsoo/SCP | 6 | Operation-aware soft channel pruning using differentiable masks | https://scholar.google.com/scholar?cluster=6963448174836272281&hl=en&as_sdt=0,31 | 1 | 2020 |
| Non-autoregressive Machine Translation with Disentangled Context Transformer | 71 | icml | 9 | 3 | 2023-06-17 03:57:14.586000 | https://github.com/facebookresearch/DisCo | 78 | Non-autoregressive machine translation with disentangled context transformer | https://scholar.google.com/scholar?cluster=8958608366652830932&hl=en&as_sdt=0,25 | 11 | 2020 |
| Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention | 705 | icml | 162 | 28 | 2023-06-17 03:57:14.788000 | https://github.com/idiap/fast-transformers | 1,434 | Transformers are rnns: Fast autoregressive transformers with linear attention | https://scholar.google.com/scholar?cluster=15303739914785429862&hl=en&as_sdt=0,6 | 27 | 2020 |
| Entropy Minimization In Emergent Languages | 20 | icml | 98 | 7 | 2023-06-17 03:57:14.991000 | https://github.com/facebookresearch/EGG | 261 | Entropy minimization in emergent languages | https://scholar.google.com/scholar?cluster=9085278772671430646&hl=en&as_sdt=0,6 | 16 | 2020 |
| What can I do here? A Theory of Affordances in Reinforcement Learning | 49 | icml | 1 | 0 | 2023-06-17 03:57:15.193000 | https://github.com/deepmind/affordances_option_models | 21 | What can I do here? A Theory of Affordances in Reinforcement Learning | https://scholar.google.com/scholar?cluster=2336774470554893443&hl=en&as_sdt=0,9 | 4 | 2020 |
| FACT: A Diagnostic for Group Fairness Trade-offs | 28 | icml | 0 | 1 | 2023-06-17 03:57:15.395000 | https://github.com/wnstlr/FACT | 4 | FACT: A diagnostic for group fairness trade-offs | https://scholar.google.com/scholar?cluster=17087884984751620008&hl=en&as_sdt=0,34 | 4 | 2020 |
| Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup | 208 | icml | 17 | 0 | 2023-06-17 03:57:15.598000 | https://github.com/snu-mllab/PuzzleMix | 144 | Puzzle mix: Exploiting saliency and local statistics for optimal mixup | https://scholar.google.com/scholar?cluster=58056101510275173&hl=en&as_sdt=0,47 | 10 | 2020 |
| Bayesian Experimental Design for Implicit Models by Mutual Information Neural Estimation | 52 | icml | 2 | 0 | 2023-06-17 03:57:15.799000 | https://github.com/stevenkleinegesse/minebed | 7 | Bayesian experimental design for implicit models by mutual information neural estimation | https://scholar.google.com/scholar?cluster=18098257663902816323&hl=en&as_sdt=0,5 | 1 | 2020 |
| Learning Similarity Metrics for Numerical Simulations | 14 | icml | 4 | 0 | 2023-06-17 03:57:16.001000 | https://github.com/tum-pbs/LSIM | 28 | Learning similarity metrics for numerical simulations | https://scholar.google.com/scholar?cluster=16424748636461420663&hl=en&as_sdt=0,31 | 6 | 2020 |
| Online Learning for Active Cache Synchronization | 4 | icml | 15 | 0 | 2023-06-17 03:57:16.204000 | https://github.com/microsoft/Optimal-Freshness-Crawl-Scheduling | 34 | Online learning for active cache synchronization | https://scholar.google.com/scholar?cluster=17855139660047402604&hl=en&as_sdt=0,10 | 12 | 2020 |
| SDE-Net: Equipping Deep Neural Networks with Uncertainty Estimates | 72 | icml | 17 | 3 | 2023-06-17 03:57:16.406000 | https://github.com/Lingkai-Kong/SDE-Net | 86 | Sde-net: Equipping deep neural networks with uncertainty estimates | https://scholar.google.com/scholar?cluster=8672163591192600750&hl=en&as_sdt=0,5 | 5 | 2020 |
| Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks | 171 | icml | 13 | 1 | 2023-06-17 03:57:16.608000 | https://github.com/wiseodd/last_layer_laplace | 68 | Being bayesian, even just a bit, fixes overconfidence in relu networks | https://scholar.google.com/scholar?cluster=12071417821093265788&hl=en&as_sdt=0,10 | 3 | 2020 |
| Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness | 77 | icml | 2 | 0 | 2023-06-17 03:57:16.810000 | https://github.com/alevine0/smoothingGenGaussian | 3 | Curse of dimensionality on randomized smoothing for certifiable robustness | https://scholar.google.com/scholar?cluster=3011754801302314262&hl=en&as_sdt=0,22 | 3 | 2020 |
| Two Routes to Scalable Credit Assignment without Weight Symmetry | 27 | icml | 3 | 0 | 2023-06-17 03:57:17.012000 | https://github.com/neuroailab/Neural-Alignment | 22 | Two routes to scalable credit assignment without weight symmetry | https://scholar.google.com/scholar?cluster=5596776573115388882&hl=en&as_sdt=0,21 | 6 | 2020 |
| Soft Threshold Weight Reparameterization for Learnable Sparsity | 134 | icml | 9 | 5 | 2023-06-17 03:57:17.214000 | https://github.com/RAIVNLab/STR | 78 | Soft threshold weight reparameterization for learnable sparsity | https://scholar.google.com/scholar?cluster=6875882562671228073&hl=en&as_sdt=0,21 | 6 | 2020 |
| Controlling Overestimation Bias with Truncated Mixture of Continuous Distributional Quantile Critics | 89 | icml | 10 | 1 | 2023-06-17 03:57:17.416000 | https://github.com/bayesgroup/tqc_pytorch | 63 | Controlling overestimation bias with truncated mixture of continuous distributional quantile critics | https://scholar.google.com/scholar?cluster=17490032530609728383&hl=en&as_sdt=0,5 | 10 | 2020 |
| Principled learning method for Wasserstein distributionally robust optimization with local perturbations | 6 | icml | 2 | 2 | 2023-06-17 03:57:17.618000 | https://github.com/ykwon0407/wdro_local_perturbation | 19 | Principled learning method for Wasserstein distributionally robust optimization with local perturbations | https://scholar.google.com/scholar?cluster=2114088593646438168&hl=en&as_sdt=0,22 | 2 | 2020 |
| Bidirectional Model-based Policy Optimization | 35 | icml | 0 | 1 | 2023-06-17 03:57:17.820000 | https://github.com/hanglai/bmpo | 21 | Bidirectional model-based policy optimization | https://scholar.google.com/scholar?cluster=8899413271083643198&hl=en&as_sdt=0,5 | 3 | 2020 |
| CURL: Contrastive Unsupervised Representations for Reinforcement Learning | 724 | icml | 84 | 12 | 2023-06-17 03:57:18.023000 | https://github.com/MishaLaskin/curl | 519 | Curl: Contrastive unsupervised representations for reinforcement learning | https://scholar.google.com/scholar?cluster=10576608792458329488&hl=en&as_sdt=0,5 | 11 | 2020 |
| Self-Attentive Associative Memory | 43 | icml | 7 | 0 | 2023-06-17 03:57:18.226000 | https://github.com/thaihungle/SAM | 39 | Self-attentive associative memory | https://scholar.google.com/scholar?cluster=10962782688418035731&hl=en&as_sdt=0,5 | 4 | 2020 |
| Self-supervised Label Augmentation via Input Transformations | 110 | icml | 14 | 0 | 2023-06-17 03:57:18.428000 | https://github.com/hankook/SLA | 100 | Self-supervised label augmentation via input transformations | https://scholar.google.com/scholar?cluster=8322322680850611064&hl=en&as_sdt=0,33 | 5 | 2020 |
| Context-aware Dynamics Model for Generalization in Model-Based Reinforcement Learning | 73 | icml | 6 | 1 | 2023-06-17 03:57:18.630000 | https://github.com/younggyoseo/CaDM | 51 | Context-aware dynamics model for generalization in model-based reinforcement learning | https://scholar.google.com/scholar?cluster=4703047670466451533&hl=en&as_sdt=0,33 | 6 | 2020 |
| Temporal Phenotyping using Deep Predictive Clustering of Disease Progression | 37 | icml | 16 | 3 | 2023-06-17 03:57:18.832000 | https://github.com/chl8856/AC_TPC | 40 | Temporal phenotyping using deep predictive clustering of disease progression | https://scholar.google.com/scholar?cluster=529698419891828395&hl=en&as_sdt=0,43 | 2 | 2020 |
| Analytic Marching: An Analytic Meshing Solution from Deep Implicit Surface Networks | 18 | icml | 2 | 0 | 2023-06-17 03:57:19.033000 | https://github.com/Karbo123/AnalyticMesh | 51 | Analytic marching: An analytic meshing solution from deep implicit surface networks | https://scholar.google.com/scholar?cluster=13457623400866526866&hl=en&as_sdt=0,5 | 5 | 2020 |
| ACFlow: Flow Models for Arbitrary Conditional Likelihoods | 17 | icml | 0 | 4 | 2023-06-17 03:57:19.235000 | https://github.com/lupalab/ACFlow | 11 | ACFlow: Flow models for arbitrary conditional likelihoods | https://scholar.google.com/scholar?cluster=4436891943483900806&hl=en&as_sdt=0,9 | 3 | 2020 |
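As a small usage example against the same hypothetical CSV export, the sketch below ranks the rows by Google Scholar citation count and lists the five most-cited papers together with their repository star counts.

```python
import pandas as pd

# Usage sketch: rank the rows above by Google Scholar citations and print the
# five most-cited papers with their repository star counts.
# ("icml2020_repos.csv" is the same hypothetical export used in the loading sketch.)
df = pd.read_csv("icml2020_repos.csv", thousands=",", parse_dates=["lastModified"])

top_cited = (
    df[["title", "citations_google_scholar", "stars", "repo_url"]]
    .sort_values("citations_google_scholar", ascending=False)
    .head(5)
)
print(top_cited.to_string(index=False))
```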