Column dtypes and observed ranges: title (string, length 8-155); citations_google_scholar (int64, 0-28.9k); conference (string, 5 classes); forks (int64, 0-46.3k); issues (int64, 0-12.2k); lastModified (string, length 19-26); repo_url (string, length 26-130); stars (int64, 0-75.9k); title_google_scholar (string, length 8-155); url_google_scholar (string, length 75-206); watchers (int64, 0-2.77k); year (int64, all 2022).

| title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Evaluating Robustness to Dataset Shift via Parametric Robustness Sets | 3 | neurips | 2 | 0 | 2023-06-16 22:58:36.748000 | https://github.com/clinicalml/parametric-robustness-evaluation | 4 | Evaluating robustness to dataset shift via parametric robustness sets | https://scholar.google.com/scholar?cluster=13183637754887103370&hl=en&as_sdt=0,44 | 8 | 2022 |
| CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers | 78 | neurips | 68 | 21 | 2023-06-16 22:58:36.959000 | https://github.com/thudm/cogview2 | 862 | Cogview2: Faster and better text-to-image generation via hierarchical transformers | https://scholar.google.com/scholar?cluster=13690046467918196748&hl=en&as_sdt=0,24 | 36 | 2022 |
| Recursive Reasoning in Minimax Games: A Level $k$ Gradient Play Method | 1 | neurips | 0 | 0 | 2023-06-16 22:58:37.171000 | https://github.com/zichuliu/submission | 3 | Recursive Reasoning in Minimax Games: A Level k Gradient Play Method | https://scholar.google.com/scholar?cluster=9230671350422718821&hl=en&as_sdt=0,5 | 1 | 2022 |
| When to Ask for Help: Proactive Interventions in Autonomous Reinforcement Learning | 4 | neurips | 2 | 0 | 2023-06-16 22:58:37.383000 | https://github.com/tajwarfahim/proactive_interventions | 6 | When to Ask for Help: Proactive Interventions in Autonomous Reinforcement Learning | https://scholar.google.com/scholar?cluster=552685687177516453&hl=en&as_sdt=0,33 | 4 | 2022 |
| Self-supervised Heterogeneous Graph Pre-training Based on Structural Clustering | 6 | neurips | 2 | 0 | 2023-06-16 22:58:37.594000 | https://github.com/kepsail/SHGP | 17 | Self-supervised Heterogeneous Graph Pre-training Based on Structural Clustering | https://scholar.google.com/scholar?cluster=11543677254444809912&hl=en&as_sdt=0,47 | 1 | 2022 |
| coVariance Neural Networks | 3 | neurips | 0 | 0 | 2023-06-16 22:58:37.809000 | https://github.com/pennbindlab/vnn | 2 | coVariance Neural Networks | https://scholar.google.com/scholar?cluster=5746884455895587002&hl=en&as_sdt=0,48 | 0 | 2022 |
| Two-Stream Network for Sign Language Recognition and Translation | 8 | neurips | 9 | 10 | 2023-06-16 22:58:38.020000 | https://github.com/FangyunWei/SLRT | 89 | Two-Stream Network for Sign Language Recognition and Translation | https://scholar.google.com/scholar?cluster=18038872806670059767&hl=en&as_sdt=0,5 | 3 | 2022 |
| VisFIS: Visual Feature Importance Supervision with Right-for-the-Right-Reason Objectives | 3 | neurips | 1 | 1 | 2023-06-16 22:58:38.232000 | https://github.com/zfying/visfis | 4 | VisFIS: Visual Feature Importance Supervision with Right-for-the-Right-Reason Objectives | https://scholar.google.com/scholar?cluster=11221935189799088705&hl=en&as_sdt=0,34 | 1 | 2022 |
| Batch size-invariance for policy optimization | 6 | neurips | 14 | 0 | 2023-06-16 22:58:38.443000 | https://github.com/openai/ppo-ewma | 42 | Batch size-invariance for policy optimization | https://scholar.google.com/scholar?cluster=2296025407370141358&hl=en&as_sdt=0,5 | 2 | 2022 |
| Variational Model Perturbation for Source-Free Domain Adaptation | 4 | neurips | 1 | 0 | 2023-06-16 22:58:38.654000 | https://github.com/mmjing/variational_model_perturbation | 4 | Variational model perturbation for source-free domain adaptation | https://scholar.google.com/scholar?cluster=11797225835673378824&hl=en&as_sdt=0,18 | 1 | 2022 |
| A Unified Framework for Alternating Offline Model Training and Policy Learning | 3 | neurips | 1 | 0 | 2023-06-16 22:58:38.865000 | https://github.com/shentao-yang/ampl_neurips2022 | 7 | A Unified Framework for Alternating Offline Model Training and Policy Learning | https://scholar.google.com/scholar?cluster=1237354038205563544&hl=en&as_sdt=0,49 | 1 | 2022 |
| Peer Prediction for Learning Agents | 2 | neurips | 0 | 0 | 2023-06-16 22:58:39.076000 | https://github.com/fengtony686/peer-prediction-convergence | 2 | Peer Prediction for Learning Agents | https://scholar.google.com/scholar?cluster=6943061375108468617&hl=en&as_sdt=0,5 | 2 | 2022 |
| ShuffleMixer: An Efficient ConvNet for Image Super-Resolution | 8 | neurips | 7 | 5 | 2023-06-16 22:58:39.288000 | https://github.com/sunny2109/mobilesr-ntire2022 | 56 | ShuffleMixer: An Efficient ConvNet for Image Super-Resolution | https://scholar.google.com/scholar?cluster=15307398465334207013&hl=en&as_sdt=0,5 | 4 | 2022 |
| Locating and Editing Factual Associations in GPT | 77 | neurips | 52 | 10 | 2023-06-16 22:58:39.500000 | https://github.com/kmeng01/rome | 237 | Locating and editing factual associations in GPT | https://scholar.google.com/scholar?cluster=6676170860106418721&hl=en&as_sdt=0,45 | 6 | 2022 |
| Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models | 11 | neurips | 2 | 0 | 2023-06-16 22:58:39.712000 | https://github.com/wimh966/outlier_suppression | 28 | Outlier suppression: Pushing the limit of low-bit transformer language models | https://scholar.google.com/scholar?cluster=10349903029841353318&hl=en&as_sdt=0,10 | 1 | 2022 |
| DataMUX: Data Multiplexing for Neural Networks | 4 | neurips | 8 | 0 | 2023-06-16 22:58:39.923000 | https://github.com/princeton-nlp/datamux | 53 | Datamux: Data multiplexing for neural networks | https://scholar.google.com/scholar?cluster=3955638905484690082&hl=en&as_sdt=0,33 | 7 | 2022 |
| Biologically-plausible backpropagation through arbitrary timespans via local neuromodulators | 2 | neurips | 0 | 0 | 2023-06-16 22:58:40.134000 | https://github.com/helena-yuhan-liu/modprop | 3 | Biologically-plausible backpropagation through arbitrary timespans via local neuromodulators | https://scholar.google.com/scholar?cluster=2884524613792294582&hl=en&as_sdt=0,41 | 1 | 2022 |
| Revisiting Realistic Test-Time Training: Sequential Inference and Adaptation by Anchored Clustering | 8 | neurips | 2 | 0 | 2023-06-16 22:58:40.346000 | https://github.com/gorilla-lab-scut/ttac | 32 | Revisiting Realistic Test-Time Training: Sequential Inference and Adaptation by Anchored Clustering | https://scholar.google.com/scholar?cluster=15662895642331219475&hl=en&as_sdt=0,36 | 1 | 2022 |
| Active Labeling: Streaming Stochastic Gradients | 1 | neurips | 0 | 0 | 2023-06-16 22:58:40.557000 | https://github.com/viviencabannes/active-labeling | 1 | Active Labeling: Streaming Stochastic Gradients | https://scholar.google.com/scholar?cluster=15951285451586696904&hl=en&as_sdt=0,44 | 2 | 2022 |
| TOIST: Task Oriented Instance Segmentation Transformer with Noun-Pronoun Distillation | 3 | neurips | 2 | 0 | 2023-06-16 22:58:40.769000 | https://github.com/air-discover/toist | 117 | TOIST: Task Oriented Instance Segmentation Transformer with Noun-Pronoun Distillation | https://scholar.google.com/scholar?cluster=12198126632106334540&hl=en&as_sdt=0,44 | 5 | 2022 |
| Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning | 50 | neurips | 5 | 1 | 2023-06-16 22:58:40.980000 | https://github.com/weixin-liang/modality-gap | 46 | Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning | https://scholar.google.com/scholar?cluster=9899703375781547991&hl=en&as_sdt=0,5 | 3 | 2022 |
| Sequence Model Imitation Learning with Unobserved Contexts | 3 | neurips | 0 | 0 | 2023-06-16 22:58:41.198000 | https://github.com/gkswamy98/sequence_model_il | 3 | Sequence model imitation learning with unobserved contexts | https://scholar.google.com/scholar?cluster=2920440114291350523&hl=en&as_sdt=0,5 | 2 | 2022 |
| Merging Models with Fisher-Weighted Averaging | 35 | neurips | 2 | 0 | 2023-06-16 22:58:41.417000 | https://github.com/mmatena/model_merging | 28 | Merging models with fisher-weighted averaging | https://scholar.google.com/scholar?cluster=6334185910733231827&hl=en&as_sdt=0,38 | 1 | 2022 |
| FasterRisk: Fast and Accurate Interpretable Risk Scores | 2 | neurips | 1 | 0 | 2023-06-16 22:58:41.628000 | https://github.com/jiachangliu/fasterrisk | 17 | FasterRisk: Fast and Accurate Interpretable Risk Scores | https://scholar.google.com/scholar?cluster=16531707730202339054&hl=en&as_sdt=0,33 | 4 | 2022 |
| Revisiting Sliced Wasserstein on Images: From Vectorization to Convolution | 12 | neurips | 1 | 0 | 2023-06-16 22:58:41.840000 | https://github.com/ut-austin-data-science-group/csw | 4 | Revisiting sliced Wasserstein on images: From vectorization to convolution | https://scholar.google.com/scholar?cluster=16632120304055085115&hl=en&as_sdt=0,5 | 0 | 2022 |
| A Rotated Hyperbolic Wrapped Normal Distribution for Hierarchical Representation Learning | 3 | neurips | 1 | 0 | 2023-06-16 22:58:42.052000 | https://github.com/ml-postech/rown | 18 | A rotated hyperbolic wrapped normal distribution for hierarchical representation learning | https://scholar.google.com/scholar?cluster=12794077703223787887&hl=en&as_sdt=0,5 | 7 | 2022 |
| SemiFL: Semi-Supervised Federated Learning for Unlabeled Clients with Alternate Training | 4 | neurips | 4 | 1 | 2023-06-16 22:58:42.263000 | https://github.com/dem123456789/semifl-semi-supervised-federated-learning-for-unlabeled-clients-with-alternate-training | 13 | SemiFL: Semi-supervised federated learning for unlabeled clients with alternate training | https://scholar.google.com/scholar?cluster=15626144916318485438&hl=en&as_sdt=0,47 | 3 | 2022 |
| RankFeat: Rank-1 Feature Removal for Out-of-distribution Detection | 2 | neurips | 1 | 0 | 2023-06-16 22:58:42.474000 | https://github.com/kingjamessong/rankfeat | 14 | RankFeat: Rank-1 Feature Removal for Out-of-distribution Detection | https://scholar.google.com/scholar?cluster=15686388667832765832&hl=en&as_sdt=0,5 | 1 | 2022 |
| ProtoVAE: A Trustworthy Self-Explainable Prototypical Variational Model | 3 | neurips | 2 | 1 | 2023-06-16 22:58:42.686000 | https://github.com/srishtigautam/protovae | 12 | Protovae: A trustworthy self-explainable prototypical variational model | https://scholar.google.com/scholar?cluster=16989445926776575392&hl=en&as_sdt=0,47 | 1 | 2022 |
| If Influence Functions are the Answer, Then What is the Question? | 14 | neurips | 0 | 0 | 2023-06-16 22:58:42.897000 | https://github.com/pomonam/jax-influence | 7 | If Influence Functions are the Answer, Then What is the Question? | https://scholar.google.com/scholar?cluster=17591064813348027664&hl=en&as_sdt=0,23 | 1 | 2022 |
| Hierarchical classification at multiple operating points | 1 | neurips | 1 | 0 | 2023-06-16 22:58:43.107000 | https://github.com/jvlmdr/hiercls | 11 | Hierarchical classification at multiple operating points | https://scholar.google.com/scholar?cluster=6696040702671773446&hl=en&as_sdt=0,14 | 2 | 2022 |
| CARD: Classification and Regression Diffusion Models | 10 | neurips | 16 | 1 | 2023-06-16 22:58:43.319000 | https://github.com/xzwhan/card | 108 | CARD: Classification and regression diffusion models | https://scholar.google.com/scholar?cluster=13161498921981862309&hl=en&as_sdt=0,5 | 5 | 2022 |
| What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness? | 2 | neurips | 0 | 0 | 2023-06-16 22:58:43.531000 | https://github.com/Tsili42/adv-ntk | 0 | What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness? | https://scholar.google.com/scholar?cluster=765440786974281242&hl=en&as_sdt=0,33 | 1 | 2022 |
| MoCoDA: Model-based Counterfactual Data Augmentation | 3 | neurips | 1 | 0 | 2023-06-16 22:58:43.742000 | https://github.com/spitis/mocoda | 8 | MoCoDA: Model-based Counterfactual Data Augmentation | https://scholar.google.com/scholar?cluster=7948314758864851403&hl=en&as_sdt=0,34 | 1 | 2022 |
| On Uncertainty, Tempering, and Data Augmentation in Bayesian Classification | 5 | neurips | 1 | 0 | 2023-06-16 22:58:43.954000 | https://github.com/activatedgeek/bayesian-classification | 18 | On uncertainty, tempering, and data augmentation in bayesian classification | https://scholar.google.com/scholar?cluster=5049318542021404538&hl=en&as_sdt=0,33 | 2 | 2022 |
| Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters | 18 | neurips | 7321 | 1026 | 2023-06-16 22:58:44.165000 | https://github.com/google-research/google-research | 29788 | Why so pessimistic? estimating uncertainties for offline rl through ensembles, and why their independence matters | https://scholar.google.com/scholar?cluster=6972415736332431556&hl=en&as_sdt=0,44 | 727 | 2022 |
| Advancing Model Pruning via Bi-level Optimization | 7 | neurips | 34 | 1 | 2023-06-16 22:58:44.377000 | https://github.com/optml-group/bip | 130 | Advancing Model Pruning via Bi-level Optimization | https://scholar.google.com/scholar?cluster=13543295038180870418&hl=en&as_sdt=0,43 | 24 | 2022 |
| MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge | 61 | neurips | 93 | 24 | 2023-06-16 22:58:44.588000 | https://github.com/MineDojo/MineDojo | 1310 | Minedojo: Building open-ended embodied agents with internet-scale knowledge | https://scholar.google.com/scholar?cluster=231281729668967714&hl=en&as_sdt=0,11 | 27 | 2022 |
| Truncated Matrix Power Iteration for Differentiable DAG Learning | 1 | neurips | 1 | 0 | 2023-06-16 22:58:44.800000 | https://github.com/zzhang1987/truncated-matrix-power-iteration-for-differentiable-dag-learning | 1 | Truncated Matrix Power Iteration for Differentiable DAG Learning | https://scholar.google.com/scholar?cluster=9166467047019565651&hl=en&as_sdt=0,5 | 2 | 2022 |
| Learning Debiased Classifier with Biased Committee | 8 | neurips | 0 | 0 | 2023-06-16 22:58:45.011000 | https://github.com/nayeong-v-kim/lwbc | 12 | Learning debiased classifier with biased committee | https://scholar.google.com/scholar?cluster=2775898324803541021&hl=en&as_sdt=0,44 | 1 | 2022 |
| Unifying Voxel-based Representation with Transformer for 3D Object Detection | 53 | neurips | 12 | 8 | 2023-06-16 22:58:45.223000 | https://github.com/dvlab-research/uvtr | 187 | Unifying voxel-based representation with transformer for 3d object detection | https://scholar.google.com/scholar?cluster=2319515305755204659&hl=en&as_sdt=0,5 | 6 | 2022 |
| On Scrambling Phenomena for Randomly Initialized Recurrent Networks | 0 | neurips | 1 | 0 | 2023-06-16 22:58:45.435000 | https://github.com/steliostavroulakis/chaos_rnns | 2 | On Scrambling Phenomena for Randomly Initialized Recurrent Networks | https://scholar.google.com/scholar?cluster=7078078811342818102&hl=en&as_sdt=0,43 | 1 | 2022 |
| Learning to Branch with Tree MDPs | 14 | neurips | 9 | 2 | 2023-06-16 22:58:45.645000 | https://github.com/lascavana/rl2branch | 7 | Learning to branch with tree mdps | https://scholar.google.com/scholar?cluster=5953866441971807828&hl=en&as_sdt=0,47 | 1 | 2022 |
| Neural Sheaf Diffusion: A Topological Perspective on Heterophily and Oversmoothing in GNNs | 48 | neurips | 9 | 0 | 2023-06-16 22:58:45.856000 | https://github.com/twitter-research/neural-sheaf-diffusion | 43 | Neural sheaf diffusion: A topological perspective on heterophily and oversmoothing in gnns | https://scholar.google.com/scholar?cluster=14875672783767429079&hl=en&as_sdt=0,50 | 5 | 2022 |
| How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios | 1 | neurips | 0 | 1 | 2023-06-16 22:58:46.068000 | https://github.com/hendrycks/emodiversity | 7 | How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios | https://scholar.google.com/scholar?cluster=7719508504871552377&hl=en&as_sdt=0,33 | 4 | 2022 |
| On Elimination Strategies for Bandit Fixed-Confidence Identification | 0 | neurips | 0 | 0 | 2023-06-16 22:58:46.279000 | https://github.com/andreatirinzoni/bandit-elimination | 2 | On Elimination Strategies for Bandit Fixed-Confidence Identification | https://scholar.google.com/scholar?cluster=7723207511483790063&hl=en&as_sdt=0,5 | 1 | 2022 |
| When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture | 7 | neurips | 3 | 0 | 2023-06-16 22:58:46.492000 | https://github.com/mo666666/when-adversarial-training-meets-vision-transformers | 13 | When adversarial training meets vision transformers: Recipes from training to architecture | https://scholar.google.com/scholar?cluster=4979980809128856359&hl=en&as_sdt=0,31 | 2 | 2022 |
| Private Estimation with Public Data | 48 | neurips | 0 | 0 | 2023-06-16 22:58:46.704000 | https://github.com/alexbie98/1pub-priv-mean-est | 0 | Sharing social network data: differentially private estimation of exponential family random-graph models | https://scholar.google.com/scholar?cluster=15510004526104950140&hl=en&as_sdt=0,5 | 1 | 2022 |
| Most Activation Functions Can Win the Lottery Without Excessive Depth | 2 | neurips | 0 | 0 | 2023-06-16 22:58:46.915000 | https://github.com/relationalml/lt-existence | 2 | Most activation functions can win the lottery without excessive depth | https://scholar.google.com/scholar?cluster=2762350726974066343&hl=en&as_sdt=0,47 | 0 | 2022 |
| Optimal Transport-based Identity Matching for Identity-invariant Facial Expression Recognition | 1 | neurips | 4 | 0 | 2023-06-16 22:58:47.127000 | https://github.com/kdhht2334/elim_fer | 24 | Optimal Transport-based Identity Matching for Identity-invariant Facial Expression Recognition | https://scholar.google.com/scholar?cluster=9348912629792592227&hl=en&as_sdt=0,32 | 1 | 2022 |
| SHINE: SubHypergraph Inductive Neural nEtwork | 1 | neurips | 2 | 0 | 2023-06-16 22:58:47.339000 | https://github.com/luoyuanlab/shine | 9 | SHINE: SubHypergraph Inductive Neural nEtwork | https://scholar.google.com/scholar?cluster=5043594054485770914&hl=en&as_sdt=0,44 | 2 | 2022 |
| Efficient Aggregated Kernel Tests using Incomplete $U$-statistics | 7 | neurips | 0 | 0 | 2023-06-16 22:58:47.553000 | https://github.com/antoninschrab/agginc-paper | 3 | Efficient Aggregated Kernel Tests using Incomplete U-statistics | https://scholar.google.com/scholar?cluster=14498936236963978885&hl=en&as_sdt=0,5 | 1 | 2022 |
| Influencing Long-Term Behavior in Multiagent Reinforcement Learning | 7 | neurips | 5 | 0 | 2023-06-16 22:58:47.766000 | https://github.com/dkkim93/further | 16 | Influencing long-term behavior in multiagent reinforcement learning | https://scholar.google.com/scholar?cluster=12230303792245064491&hl=en&as_sdt=0,33 | 1 | 2022 |
| Quantized Training of Gradient Boosting Decision Trees | 0 | neurips | 1 | 0 | 2023-06-16 22:58:47.978000 | https://github.com/quantized-gbdt/quantized-gbdt | 9 | Quantized Training of Gradient Boosting Decision Trees | https://scholar.google.com/scholar?cluster=4058197876307352226&hl=en&as_sdt=0,10 | 2 | 2022 |
| Data Distributional Properties Drive Emergent In-Context Learning in Transformers | 37 | neurips | 10 | 3 | 2023-06-16 22:58:48.192000 | https://github.com/deepmind/emergent_in_context_learning | 53 | Data distributional properties drive emergent in-context learning in transformers | https://scholar.google.com/scholar?cluster=16209854431595052414&hl=en&as_sdt=0,33 | 3 | 2022 |
| Lottery Tickets on a Data Diet: Finding Initializations with Sparse Trainable Networks | 5 | neurips | 1 | 1 | 2023-06-16 22:58:48.417000 | https://github.com/mansheej/lth_diet | 8 | Lottery tickets on a data diet: Finding initializations with sparse trainable networks | https://scholar.google.com/scholar?cluster=17203687298264030475&hl=en&as_sdt=0,5 | 3 | 2022 |
| Memory safe computations with XLA compiler | 1 | neurips | 2 | 5 | 2023-06-16 22:58:48.630000 | https://github.com/awav/tensorflow | 1 | Memory safe computations with XLA compiler | https://scholar.google.com/scholar?cluster=18390099303465948139&hl=en&as_sdt=0,47 | 1 | 2022 |
| Towards Theoretically Inspired Neural Initialization Optimization | 0 | neurips | 1 | 0 | 2023-06-16 22:58:48.841000 | https://github.com/HarborYuan/GradCosine | 8 | Towards Theoretically Inspired Neural Initialization Optimization | https://scholar.google.com/scholar?cluster=6350876524339921816&hl=en&as_sdt=0,14 | 2 | 2022 |
| AnimeRun: 2D Animation Visual Correspondence from Open Source 3D Movies | 0 | neurips | 3 | 4 | 2023-06-16 22:58:49.052000 | https://github.com/lisiyao21/animerun | 69 | AnimeRun: 2D Animation Visual Correspondence from Open Source 3D Movies | https://scholar.google.com/scholar?cluster=2206932835628309531&hl=en&as_sdt=0,5 | 11 | 2022 |
| Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization | 6 | neurips | 0 | 0 | 2023-06-16 22:58:49.265000 | https://github.com/yuri-k111/neurips2022_code | 0 | Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization | https://scholar.google.com/scholar?cluster=14537579459449046280&hl=en&as_sdt=0,5 | 1 | 2022 |
| Efficient learning of nonlinear prediction models with time-series privileged information | 2 | neurips | 1 | 0 | 2023-06-16 22:58:49.477000 | https://github.com/healthy-ai/glupts | 0 | Efficient learning of nonlinear prediction models with time-series privileged information | https://scholar.google.com/scholar?cluster=18191800989177614120&hl=en&as_sdt=0,5 | 0 | 2022 |
| Layer Freezing & Data Sieving: Missing Pieces of a Generic Framework for Sparse Training | 0 | neurips | 1 | 0 | 2023-06-16 22:58:49.693000 | https://github.com/snap-research/spfde | 8 | Layer Freezing & Data Sieving: Missing Pieces of a Generic Framework for Sparse Training | https://scholar.google.com/scholar?cluster=8941325294447745327&hl=en&as_sdt=0,33 | 4 | 2022 |
| Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity | 1 | neurips | 0 | 0 | 2023-06-16 22:58:49.904000 | https://github.com/andrew-cullen/doublebubble | 1 | Double bubble, toil and trouble: enhancing certified robustness through transitivity | https://scholar.google.com/scholar?cluster=15829183381578160837&hl=en&as_sdt=0,44 | 2 | 2022 |
| Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch | 35 | neurips | 4 | 2 | 2023-06-16 22:58:50.117000 | https://github.com/hsouri/Sleeper-Agent | 45 | Sleeper agent: Scalable hidden trigger backdoors for neural networks trained from scratch | https://scholar.google.com/scholar?cluster=9248176712796866973&hl=en&as_sdt=0,1 | 3 | 2022 |
| A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models | 2 | neurips | 2 | 0 | 2023-06-16 22:58:50.328000 | https://github.com/llyx97/sparse-and-robust-plm | 20 | A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models | https://scholar.google.com/scholar?cluster=12965321937141963299&hl=en&as_sdt=0,39 | 1 | 2022 |
| Pareto Set Learning for Expensive Multi-Objective Optimization | 6 | neurips | 5 | 1 | 2023-06-16 22:58:50.539000 | https://github.com/xi-l/psl-mobo | 6 | Pareto Set Learning for Expensive Multi-Objective Optimization | https://scholar.google.com/scholar?cluster=16507134535796504804&hl=en&as_sdt=0,32 | 3 | 2022 |
| Non-monotonic Resource Utilization in the Bandits with Knapsacks Problem | 0 | neurips | 0 | 0 | 2023-06-16 22:58:50.750000 | https://github.com/raunakkmr/non-monotonic-resource-utilization-in-the-bandits-with-knapsacks-problem-code | 3 | Non-monotonic Resource Utilization in the Bandits with Knapsacks Problem | https://scholar.google.com/scholar?cluster=12804888557627073813&hl=en&as_sdt=0,5 | 1 | 2022 |
| Efficient identification of informative features in simulation-based inference | 2 | neurips | 0 | 0 | 2023-06-16 22:58:50.962000 | https://github.com/berenslab/fslm_repo | 0 | Efficient identification of informative features in simulation-based inference | https://scholar.google.com/scholar?cluster=9408830879778530143&hl=en&as_sdt=0,5 | 2 | 2022 |
| Agreement-on-the-line: Predicting the Performance of Neural Networks under Distribution Shift | 18 | neurips | 0 | 0 | 2023-06-16 22:58:51.173000 | https://github.com/kebaek/agreement-on-the-line | 1 | Agreement-on-the-line: Predicting the performance of neural networks under distribution shift | https://scholar.google.com/scholar?cluster=16040179081922789785&hl=en&as_sdt=0,15 | 1 | 2022 |
| Large-Scale Differentiable Causal Discovery of Factor Graphs | 7 | neurips | 2 | 1 | 2023-06-16 22:58:51.385000 | https://github.com/genentech/dcdfg | 15 | Large-scale differentiable causal discovery of factor graphs | https://scholar.google.com/scholar?cluster=336010023327316095&hl=en&as_sdt=0,29 | 2 | 2022 |
| Approximate Euclidean lengths and distances beyond Johnson-Lindenstrauss | 1 | neurips | 0 | 0 | 2023-06-16 22:58:51.597000 | https://github.com/IBM/JLPlusPlus | 5 | Approximate Euclidean lengths and distances beyond Johnson-Lindenstrauss | https://scholar.google.com/scholar?cluster=5393693491306876887&hl=en&as_sdt=0,5 | 3 | 2022 |
| Few-shot Image Generation via Adaptation-Aware Kernel Modulation | 5 | neurips | 1 | 0 | 2023-06-16 22:58:51.808000 | https://github.com/yunqing-me/AdAM | 10 | Few-shot image generation via adaptation-aware kernel modulation | https://scholar.google.com/scholar?cluster=4742360547792769040&hl=en&as_sdt=0,5 | 2 | 2022 |
| Learning to Follow Instructions in Text-Based Games | 5 | neurips | 0 | 0 | 2023-06-16 22:58:52.019000 | https://github.com/mathieutuli/ltl-gata | 4 | Learning to follow instructions in text-based games | https://scholar.google.com/scholar?cluster=2065963607262919529&hl=en&as_sdt=0,44 | 2 | 2022 |
| Improving Variational Autoencoders with Density Gap-based Regularization | 1 | neurips | 0 | 0 | 2023-06-16 22:58:52.230000 | https://github.com/zhangjf-nlp/dg-vaes | 3 | Improving Variational Autoencoders with Density Gap-based Regularization | https://scholar.google.com/scholar?cluster=5008460593978673315&hl=en&as_sdt=0,26 | 1 | 2022 |
| RISE: Robust Individualized Decision Learning with Sensitive Variables | 5 | neurips | 1 | 0 | 2023-06-16 22:58:52.442000 | https://github.com/ellenxtan/rise | 6 | Rise: Robust individualized decision learning with sensitive variables | https://scholar.google.com/scholar?cluster=14552433169165007620&hl=en&as_sdt=0,39 | 3 | 2022 |
| Beyond neural scaling laws: beating power law scaling via data pruning | 67 | neurips | 2 | 0 | 2023-06-16 22:58:52.653000 | https://github.com/rgeirhos/dataset-pruning-metrics | 19 | Beyond neural scaling laws: beating power law scaling via data pruning | https://scholar.google.com/scholar?cluster=14309238955014761855&hl=en&as_sdt=0,33 | 1 | 2022 |
| Maximum Class Separation as Inductive Bias in One Matrix | 6 | neurips | 2 | 1 | 2023-06-16 22:58:52.866000 | https://github.com/tkasarla/max-separation-as-inductive-bias | 23 | Maximum class separation as inductive bias in one matrix | https://scholar.google.com/scholar?cluster=15315241654161942906&hl=en&as_sdt=0,48 | 5 | 2022 |
| Redundant representations help generalization in wide neural networks | 1 | neurips | 0 | 0 | 2023-06-16 22:58:53.077000 | https://github.com/diegodoimo/redundant_representation | 1 | Redundant representations help generalization in wide neural networks | https://scholar.google.com/scholar?cluster=11398110079007886002&hl=en&as_sdt=0,10 | 1 | 2022 |
| Rate-Distortion Theoretic Bounds on Generalization Error for Distributed Learning | 3 | neurips | 0 | 0 | 2023-06-16 22:58:53.289000 | https://github.com/romainchor/datascience | 0 | Rate-Distortion Theoretic Bounds on Generalization Error for Distributed Learning | https://scholar.google.com/scholar?cluster=16572219026525753208&hl=en&as_sdt=0,5 | 2 | 2022 |
| Tight Lower Bounds on Worst-Case Guarantees for Zero-Shot Learning with Attributes | 0 | neurips | 0 | 0 | 2023-06-16 22:58:53.501000 | https://github.com/BatsResearch/mazzetto-neurips22-code | 2 | Tight Lower Bounds on Worst-Case Guarantees for Zero-Shot Learning with Attributes | https://scholar.google.com/scholar?cluster=8233985458905085430&hl=en&as_sdt=0,34 | 3 | 2022 |
| Graph Neural Networks with Adaptive Readouts | 5 | neurips | 1 | 0 | 2023-06-16 22:58:53.713000 | https://github.com/davidbuterez/gnn-neural-readouts | 16 | Graph Neural Networks with Adaptive Readouts | https://scholar.google.com/scholar?cluster=16233387568833455709&hl=en&as_sdt=0,22 | 2 | 2022 |
| GStarX: Explaining Graph Neural Networks with Structure-Aware Cooperative Games | 5 | neurips | 3 | 1 | 2023-06-16 22:58:53.924000 | https://github.com/shichangzh/gstarx | 8 | GStarX: Explaining Graph Neural Networks with Structure-Aware Cooperative Games | https://scholar.google.com/scholar?cluster=7993639036305387244&hl=en&as_sdt=0,5 | 2 | 2022 |
| Low-Rank Modular Reinforcement Learning via Muscle Synergy | 2 | neurips | 1 | 1 | 2023-06-16 22:58:54.136000 | https://github.com/drdh/synergy-rl | 4 | Low-Rank Modular Reinforcement Learning via Muscle Synergy | https://scholar.google.com/scholar?cluster=15949324168109968004&hl=en&as_sdt=0,5 | 1 | 2022 |
| Faster Deep Reinforcement Learning with Slower Online Network | 0 | neurips | 1 | 0 | 2023-06-16 22:58:54.348000 | https://github.com/amazon-research/fast-rl-with-slow-updates | 15 | Faster deep reinforcement learning with slower online network | https://scholar.google.com/scholar?cluster=8991673976969240285&hl=en&as_sdt=0,5 | 1 | 2022 |
| Green Hierarchical Vision Transformer for Masked Image Modeling | 24 | neurips | 5 | 1 | 2023-06-16 22:58:54.559000 | https://github.com/layneh/greenmim | 146 | Green hierarchical vision transformer for masked image modeling | https://scholar.google.com/scholar?cluster=5575721172969217810&hl=en&as_sdt=0,5 | 3 | 2022 |
| Towards Reliable Simulation-Based Inference with Balanced Neural Ratio Estimation | 7 | neurips | 2 | 0 | 2023-06-16 22:58:54.771000 | https://github.com/montefiore-ai/balanced-nre | 11 | Towards Reliable Simulation-Based Inference with Balanced Neural Ratio Estimation | https://scholar.google.com/scholar?cluster=2070151199404142004&hl=en&as_sdt=0,5 | 4 | 2022 |
| Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs | 19 | neurips | 0 | 0 | 2023-06-16 22:58:54.982000 | https://github.com/eboursier/gfdynamics | 4 | Gradient flow dynamics of shallow relu networks for square loss and orthogonal inputs | https://scholar.google.com/scholar?cluster=7952131240669274846&hl=en&as_sdt=0,5 | 2 | 2022 |
| Weisfeiler and Leman Go Walking: Random Walk Kernels Revisited | 0 | neurips | 0 | 0 | 2023-06-16 22:58:55.193000 | https://github.com/nlskrg/node_centric_walk_kernels | 0 | Weisfeiler and Leman Go Walking: Random Walk Kernels Revisited | https://scholar.google.com/scholar?cluster=3035963861391187619&hl=en&as_sdt=0,33 | 1 | 2022 |
| Multi-agent Dynamic Algorithm Configuration | 9 | neurips | 7 | 0 | 2023-06-16 22:58:55.423000 | https://github.com/lamda-bbo/madac | 18 | Multi-agent Dynamic Algorithm Configuration | https://scholar.google.com/scholar?cluster=18124893361074952166&hl=en&as_sdt=0,5 | 1 | 2022 |
| TaSIL: Taylor Series Imitation Learning | 9 | neurips | 1 | 0 | 2023-06-16 22:58:55.634000 | https://github.com/unstable-zeros/tasil | 3 | Tasil: Taylor series imitation learning | https://scholar.google.com/scholar?cluster=5196638265754138969&hl=en&as_sdt=0,33 | 1 | 2022 |
| Continuous MDP Homomorphisms and Homomorphic Policy Gradient | 2 | neurips | 0 | 0 | 2023-06-16 22:58:55.846000 | https://github.com/sahandrez/homomorphic_policy_gradient | 12 | Continuous MDP Homomorphisms and Homomorphic Policy Gradient | https://scholar.google.com/scholar?cluster=765221308115729349&hl=en&as_sdt=0,33 | 3 | 2022 |
| Score-based Generative Modeling Secretly Minimizes the Wasserstein Distance | 5 | neurips | 2 | 0 | 2023-06-16 22:58:56.058000 | https://github.com/uw-madison-lee-lab/score-wasserstein | 12 | Score-based Generative Modeling Secretly Minimizes the Wasserstein Distance | https://scholar.google.com/scholar?cluster=2627264767154274760&hl=en&as_sdt=0,14 | 2 | 2022 |
| OOD Link Prediction Generalization Capabilities of Message-Passing GNNs in Larger Test Graphs | 11 | neurips | 1 | 1 | 2023-06-16 22:58:56.287000 | https://github.com/yangzez/ood-link-prediction-generalization-mpnn | 1 | Ood link prediction generalization capabilities of message-passing gnns in larger test graphs | https://scholar.google.com/scholar?cluster=14377211411789123424&hl=en&as_sdt=0,36 | 1 | 2022 |
| Algorithms with Prediction Portfolios | 1 | neurips | 0 | 0 | 2023-06-16 22:58:56.499000 | https://github.com/tlavastida/predictionportfolios | 1 | Algorithms with Prediction Portfolios | https://scholar.google.com/scholar?cluster=15626362245695114867&hl=en&as_sdt=0,5 | 2 | 2022 |
| A Unified Hard-Constraint Framework for Solving Geometrically Complex PDEs | 3 | neurips | 3 | 0 | 2023-06-16 22:58:56.711000 | https://github.com/csuastt/HardConstraint | 4 | A unified Hard-constraint framework for solving geometrically complex PDEs | https://scholar.google.com/scholar?cluster=12354346662201419102&hl=en&as_sdt=0,5 | 2 | 2022 |
| Optimal and Adaptive Monteiro-Svaiter Acceleration | 11 | neurips | 0 | 0 | 2023-06-16 22:58:56.922000 | https://github.com/danielle-hausler/ms-optimal | 1 | Optimal and adaptive monteiro-svaiter acceleration | https://scholar.google.com/scholar?cluster=6181840744509618668&hl=en&as_sdt=0,5 | 1 | 2022 |
| SparCL: Sparse Continual Learning on the Edge | 8 | neurips | 2 | 0 | 2023-06-16 22:58:57.134000 | https://github.com/neu-spiral/SparCL | 14 | Sparcl: Sparse continual learning on the edge | https://scholar.google.com/scholar?cluster=7160494277089589433&hl=en&as_sdt=0,5 | 5 | 2022 |
| Adaptively Exploiting d-Separators with Causal Bandits | 3 | neurips | 0 | 0 | 2023-06-16 22:58:57.345000 | https://github.com/blairbilodeau/adaptive-causal-bandits | 5 | Adaptively exploiting d-separators with causal bandits | https://scholar.google.com/scholar?cluster=10113006239041370847&hl=en&as_sdt=0,23 | 1 | 2022 |
| CLOOB: Modern Hopfield Networks with InfoLOOB Outperform CLIP | 37 | neurips | 12 | 1 | 2023-06-16 22:58:57.558000 | https://github.com/ml-jku/cloob | 143 | Cloob: Modern hopfield networks with infoloob outperform clip | https://scholar.google.com/scholar?cluster=3714890763443837424&hl=en&as_sdt=0,33 | 9 | 2022 |
| Language Conditioned Spatial Relation Reasoning for 3D Object Grounding | 3 | neurips | 3 | 1 | 2023-06-16 22:58:57.770000 | https://github.com/cshizhe/vil3dref | 31 | Language Conditioned Spatial Relation Reasoning for 3D Object Grounding | https://scholar.google.com/scholar?cluster=14666951856631208351&hl=en&as_sdt=0,14 | 2 | 2022 |
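Records with this schema can be queried directly once parsed. A minimal sketch in plain Python (no external libraries): the two sample records below are transcribed from the table above, keyed by the column names in the dataset header, and ranked by GitHub stars.

```python
# Two sample records from the table, using the dataset's column names.
records = [
    {"title": "Locating and Editing Factual Associations in GPT",
     "citations_google_scholar": 77, "conference": "neurips",
     "forks": 52, "issues": 10,
     "repo_url": "https://github.com/kmeng01/rome",
     "stars": 237, "watchers": 6, "year": 2022},
    {"title": "MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge",
     "citations_google_scholar": 61, "conference": "neurips",
     "forks": 93, "issues": 24,
     "repo_url": "https://github.com/MineDojo/MineDojo",
     "stars": 1310, "watchers": 27, "year": 2022},
]

# Rank repositories by star count, most-starred first.
by_stars = sorted(records, key=lambda r: r["stars"], reverse=True)
for r in by_stars:
    print(f'{r["stars"]:>5}  {r["title"]}')
```

The same `sorted` call works on the full dataset once every row is parsed into such a dict, e.g. with `key=lambda r: r["citations_google_scholar"]` to rank by Scholar citations instead.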