Each row below is one record. Column summary over the full dataset: `title` string (8-155 chars), `citations_google_scholar` int64 (0-28.9k), `conference` string (5 classes), `forks` int64 (0-46.3k), `issues` int64 (0-12.2k), `lastModified` string (19-26 chars), `repo_url` string (26-130 chars), `stars` int64 (0-75.9k), `title_google_scholar` string (8-155 chars), `url_google_scholar` string (75-206 chars), `watchers` int64 (0-2.77k), `year` int64 (all approximately 2.02k).

| title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Deep inference of latent dynamics with spatio-temporal super-resolution using selective backpropagation through time | 9 | neurips | 2 | 1 | 2023-06-16 16:05:32.717000 | https://github.com/snel-repo/sbtt-demo | 6 | Deep inference of latent dynamics with spatio-temporal super-resolution using selective backpropagation through time | https://scholar.google.com/scholar?cluster=116915416169272448&hl=en&as_sdt=0,33 | 3 | 2021 |
| Preserved central model for faster bidirectional compression in distributed settings | 22 | neurips | 0 | 0 | 2023-06-16 16:05:32.918000 | https://github.com/philipco/mcm-bidirectional-compression | 1 | Preserved central model for faster bidirectional compression in distributed settings | https://scholar.google.com/scholar?cluster=11324851301084839987&hl=en&as_sdt=0,5 | 1 | 2021 |
| Luna: Linear Unified Nested Attention | 64 | neurips | 15 | 1 | 2023-06-16 16:05:33.123000 | https://github.com/XuezheMax/fairseq-apollo | 94 | Luna: Linear unified nested attention | https://scholar.google.com/scholar?cluster=15945065740745831634&hl=en&as_sdt=0,33 | 6 | 2021 |
| Iterative Causal Discovery in the Possible Presence of Latent Confounders and Selection Bias | 6 | neurips | 9 | 0 | 2023-06-16 16:05:33.323000 | https://github.com/IntelLabs/causality-lab | 53 | Iterative causal discovery in the possible presence of latent confounders and selection bias | https://scholar.google.com/scholar?cluster=15917731379630778872&hl=en&as_sdt=0,38 | 10 | 2021 |
| Associating Objects with Transformers for Video Object Segmentation | 96 | neurips | 6 | 0 | 2023-06-16 16:05:33.522000 | https://github.com/z-x-yang/AOT | 91 | Associating objects with transformers for video object segmentation | https://scholar.google.com/scholar?cluster=3585510538357549856&hl=en&as_sdt=0,21 | 13 | 2021 |
| Automatic Symmetry Discovery with Lie Algebra Convolutional Network | 42 | neurips | 3 | 0 | 2023-06-16 16:05:33.720000 | https://github.com/nimadehmamy/l-conv-code | 38 | Automatic symmetry discovery with lie algebra convolutional network | https://scholar.google.com/scholar?cluster=14029131064477993418&hl=en&as_sdt=0,5 | 4 | 2021 |
| Zero Time Waste: Recycling Predictions in Early Exit Neural Networks | 14 | neurips | 4 | 0 | 2023-06-16 16:05:33.920000 | https://github.com/gmum/Zero-Time-Waste | 12 | Zero time waste: recycling predictions in early exit neural networks | https://scholar.google.com/scholar?cluster=16788123232185667658&hl=en&as_sdt=0,44 | 5 | 2021 |
| On Model Calibration for Long-Tailed Object Detection and Instance Segmentation | 26 | neurips | 2 | 1 | 2023-06-16 16:05:34.120000 | https://github.com/tydpan/NorCal | 27 | On model calibration for long-tailed object detection and instance segmentation | https://scholar.google.com/scholar?cluster=2480452967273135514&hl=en&as_sdt=0,11 | 2 | 2021 |
| ReSSL: Relational Self-Supervised Learning with Weak Augmentation | 50 | neurips | 8 | 1 | 2023-06-16 16:05:34.327000 | https://github.com/KyleZheng1997/ReSSL | 50 | Ressl: Relational self-supervised learning with weak augmentation | https://scholar.google.com/scholar?cluster=9030640366859568915&hl=en&as_sdt=0,10 | 3 | 2021 |
| Learning to See by Looking at Noise | 14 | neurips | 5 | 2 | 2023-06-16 16:05:34.527000 | https://github.com/mbaradad/learning_with_noise | 91 | Learning to see by looking at noise | https://scholar.google.com/scholar?cluster=17950334231348284249&hl=en&as_sdt=0,33 | 5 | 2021 |
| Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN | 14 | neurips | 21 | 13 | 2023-06-16 16:05:34.726000 | https://github.com/xiezhy6/pasta-gan | 76 | Towards scalable unpaired virtual try-on via patch-routed spatially-adaptive GAN | https://scholar.google.com/scholar?cluster=9712953587399366251&hl=en&as_sdt=0,34 | 3 | 2021 |
| Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models | 52 | neurips | 2 | 0 | 2023-06-16 16:05:34.926000 | https://github.com/oxai/intersectional_gpt2 | 9 | Bias out-of-the-box: An empirical analysis of intersectional occupational biases in popular generative language models | https://scholar.google.com/scholar?cluster=10610853007934037556&hl=en&as_sdt=0,10 | 8 | 2021 |
| Weisfeiler and Lehman Go Cellular: CW Networks | 121 | neurips | 20 | 0 | 2023-06-16 16:05:35.127000 | https://github.com/twitter-research/cwn | 124 | Weisfeiler and lehman go cellular: Cw networks | https://scholar.google.com/scholar?cluster=10604779220263542295&hl=en&as_sdt=0,33 | 7 | 2021 |
| Learning Conjoint Attentions for Graph Neural Nets | 15 | neurips | 0 | 0 | 2023-06-16 16:05:35.328000 | https://github.com/he-tiantian/cats | 5 | Learning conjoint attentions for graph neural nets | https://scholar.google.com/scholar?cluster=4054823873527255592&hl=en&as_sdt=0,18 | 2 | 2021 |
| Aligned Structured Sparsity Learning for Efficient Image Super-Resolution | 29 | neurips | 8 | 1 | 2023-06-16 16:05:35.527000 | https://github.com/mingsun-tse/assl | 53 | Aligned structured sparsity learning for efficient image super-resolution | https://scholar.google.com/scholar?cluster=11894104122584992183&hl=en&as_sdt=0,5 | 8 | 2021 |
| Lip to Speech Synthesis with Visual Context Attentional GAN | 22 | neurips | 4 | 0 | 2023-06-16 16:05:35.726000 | https://github.com/ms-dot-k/Visual-Context-Attentional-GAN | 12 | Lip to speech synthesis with visual context attentional GAN | https://scholar.google.com/scholar?cluster=3002779332675732669&hl=en&as_sdt=0,5 | 1 | 2021 |
| Goal-Aware Cross-Entropy for Multi-Target Reinforcement Learning | 6 | neurips | 1 | 0 | 2023-06-16 16:05:35.927000 | https://github.com/kibeomkim/gace-gdan | 24 | Goal-aware cross-entropy for multi-target reinforcement learning | https://scholar.google.com/scholar?cluster=16382968152534550618&hl=en&as_sdt=0,16 | 3 | 2021 |
| MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images | 52 | neurips | 11 | 1 | 2023-06-16 16:05:36.129000 | https://github.com/taconite/MetaAvatar-release | 109 | Metaavatar: Learning animatable clothed human models from few depth images | https://scholar.google.com/scholar?cluster=16058062617951022189&hl=en&as_sdt=0,33 | 6 | 2021 |
| Distributed Principal Component Analysis with Limited Communication | 5 | neurips | 0 | 0 | 2023-06-16 16:05:36.329000 | https://github.com/ist-daslab/qrgd | 2 | Distributed principal component analysis with limited communication | https://scholar.google.com/scholar?cluster=16491167680974044307&hl=en&as_sdt=0,5 | 4 | 2021 |
| Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update | 13 | neurips | 4 | 2 | 2023-06-16 16:05:36.529000 | https://github.com/lessketching/newtonsketch | 1 | Newton-LESS: Sparsification without trade-offs for the sketched newton update | https://scholar.google.com/scholar?cluster=8971361646067584316&hl=en&as_sdt=0,33 | 1 | 2021 |
| Confident Anchor-Induced Multi-Source Free Domain Adaptation | 34 | neurips | 1 | 1 | 2023-06-16 16:05:36.733000 | https://github.com/learning-group123/caida | 18 | Confident anchor-induced multi-source free domain adaptation | https://scholar.google.com/scholar?cluster=4891466716654628888&hl=en&as_sdt=0,26 | 2 | 2021 |
| Word2Fun: Modelling Words as Functions for Diachronic Word Representation | 1 | neurips | 0 | 1 | 2023-06-16 16:05:36.936000 | https://github.com/wabyking/word2fun | 10 | Word2Fun: Modelling Words as Functions for Diachronic Word Representation | https://scholar.google.com/scholar?cluster=14848701185772884980&hl=en&as_sdt=0,33 | 1 | 2021 |
| Low-Rank Constraints for Fast Inference in Structured Models | 10 | neurips | 0 | 1 | 2023-06-16 16:05:37.139000 | https://github.com/justinchiu/low-rank-models | 5 | Low-rank constraints for fast inference in structured models | https://scholar.google.com/scholar?cluster=15216352374611711176&hl=en&as_sdt=0,14 | 3 | 2021 |
| Accumulative Poisoning Attacks on Real-time Data | 11 | neurips | 1 | 0 | 2023-06-16 16:05:37.341000 | https://github.com/ShawnXYang/AccumulativeAttack | 17 | Accumulative poisoning attacks on real-time data | https://scholar.google.com/scholar?cluster=17018461129104727462&hl=en&as_sdt=0,44 | 2 | 2021 |
| G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators | 28 | neurips | 5 | 1 | 2023-06-16 16:05:37.541000 | https://github.com/ai-secure/g-pate | 23 | G-PATE: scalable differentially private data generator via private aggregation of teacher discriminators | https://scholar.google.com/scholar?cluster=18094495377911036601&hl=en&as_sdt=0,5 | 2 | 2021 |
| Object-Aware Regularization for Addressing Causal Confusion in Imitation Learning | 8 | neurips | 3 | 1 | 2023-06-16 16:05:37.747000 | https://github.com/alinlab/oreo | 21 | Object-aware regularization for addressing causal confusion in imitation learning | https://scholar.google.com/scholar?cluster=11591778827238296891&hl=en&as_sdt=0,5 | 3 | 2021 |
| Partition-Based Formulations for Mixed-Integer Optimization of Trained ReLU Neural Networks | 37 | neurips | 0 | 0 | 2023-06-16 16:05:37.948000 | https://github.com/cog-imperial/partitionedformulations_nn | 2 | Partition-based formulations for mixed-integer optimization of trained relu neural networks | https://scholar.google.com/scholar?cluster=322600744726062077&hl=en&as_sdt=0,44 | 3 | 2021 |
| Hyperparameter Optimization Is Deceiving Us, and How to Stop It | 12 | neurips | 1 | 0 | 2023-06-16 16:05:38.147000 | https://github.com/pasta41/deception | 0 | Hyperparameter optimization is deceiving us, and how to stop it | https://scholar.google.com/scholar?cluster=13676283395211391710&hl=en&as_sdt=0,14 | 3 | 2021 |
| On the Convergence Theory of Debiased Model-Agnostic Meta-Reinforcement Learning | 8 | neurips | 1 | 0 | 2023-06-16 16:05:38.347000 | https://github.com/kristian-georgiev/sgmrl | 4 | On the convergence theory of debiased model-agnostic meta-reinforcement learning | https://scholar.google.com/scholar?cluster=4479200688561137043&hl=en&as_sdt=0,6 | 2 | 2021 |
| 3D Pose Transfer with Correspondence Learning and Mesh Refinement | 12 | neurips | 4 | 0 | 2023-06-16 16:05:38.547000 | https://github.com/chaoyuesong/3d-corenet | 28 | 3D pose transfer with correspondence learning and mesh refinement | https://scholar.google.com/scholar?cluster=16594397098263890471&hl=en&as_sdt=0,5 | 7 | 2021 |
| Framing RNN as a kernel method: A neural ODE approach | 13 | neurips | 3 | 0 | 2023-06-16 16:05:38.746000 | https://github.com/afermanian/rnn-kernel | 6 | Framing RNN as a kernel method: A neural ODE approach | https://scholar.google.com/scholar?cluster=12320309652310006031&hl=en&as_sdt=0,33 | 3 | 2021 |
| Contextual Similarity Aggregation with Self-attention for Visual Re-ranking | 8 | neurips | 2 | 2 | 2023-06-16 16:05:38.947000 | https://github.com/mcc-wh/csa | 21 | Contextual similarity aggregation with self-attention for visual re-ranking | https://scholar.google.com/scholar?cluster=1731686966736408676&hl=en&as_sdt=0,1 | 2 | 2021 |
| Can Information Flows Suggest Targets for Interventions in Neural Circuits? | 2 | neurips | 1 | 0 | 2023-06-16 16:05:39.149000 | https://github.com/praveenv253/ann-info-flow | 0 | Can information flows suggest targets for interventions in neural circuits? | https://scholar.google.com/scholar?cluster=59078764435359692&hl=en&as_sdt=0,5 | 2 | 2021 |
| SyncTwin: Treatment Effect Estimation with Longitudinal Outcomes | 17 | neurips | 4 | 0 | 2023-06-16 16:05:39.348000 | https://github.com/zhaozhiqian/synctwin-neurips-2021 | 5 | Synctwin: Treatment effect estimation with longitudinal outcomes | https://scholar.google.com/scholar?cluster=12038492275286203225&hl=en&as_sdt=0,44 | 4 | 2021 |
| Unsupervised Motion Representation Learning with Capsule Autoencoders | 12 | neurips | 0 | 0 | 2023-06-16 16:05:39.548000 | https://github.com/ZiweiXU/CapsuleMotion | 9 | Unsupervised motion representation learning with capsule autoencoders | https://scholar.google.com/scholar?cluster=14303399087955819091&hl=en&as_sdt=0,5 | 1 | 2021 |
| Exploring Forensic Dental Identification with Deep Learning | 4 | neurips | 1 | 1 | 2023-06-16 16:05:39.749000 | https://github.com/liangyuandg/foid | 4 | Exploring forensic dental identification with deep learning | https://scholar.google.com/scholar?cluster=10294284615303387635&hl=en&as_sdt=0,5 | 1 | 2021 |
| Multi-Agent Reinforcement Learning for Active Voltage Control on Power Distribution Networks | 49 | neurips | 36 | 0 | 2023-06-16 16:05:39.948000 | https://github.com/Future-Power-Networks/MAPDN | 98 | Multi-agent reinforcement learning for active voltage control on power distribution networks | https://scholar.google.com/scholar?cluster=339266555786095875&hl=en&as_sdt=0,10 | 2 | 2021 |
| Dangers of Bayesian Model Averaging under Covariate Shift | 26 | neurips | 2 | 0 | 2023-06-16 16:05:40.148000 | https://github.com/izmailovpavel/bnn_covariate_shift | 28 | Dangers of bayesian model averaging under covariate shift | https://scholar.google.com/scholar?cluster=9253304407956386101&hl=en&as_sdt=0,5 | 3 | 2021 |
| Towards Lower Bounds on the Depth of ReLU Neural Networks | 13 | neurips | 0 | 0 | 2023-06-16 16:05:40.347000 | https://github.com/ChristophHertrich/relu-mip-depth-bound | 0 | Towards lower bounds on the depth of ReLU neural networks | https://scholar.google.com/scholar?cluster=4120327399657306898&hl=en&as_sdt=0,5 | 1 | 2021 |
| The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective | 13 | neurips | 0 | 0 | 2023-06-16 16:05:40.548000 | https://github.com/gpleiss/limits_of_large_width | 5 | The limitations of large width in neural networks: A deep Gaussian process perspective | https://scholar.google.com/scholar?cluster=18411382208005468775&hl=en&as_sdt=0,47 | 1 | 2021 |
| Exact marginal prior distributions of finite Bayesian neural networks | 24 | neurips | 0 | 0 | 2023-06-16 16:05:40.751000 | https://github.com/pehlevan-group/exactbayesiannetworkpriors | 0 | Exact marginal prior distributions of finite Bayesian neural networks | https://scholar.google.com/scholar?cluster=8265985358387037900&hl=en&as_sdt=0,5 | 2 | 2021 |
| ResNEsts and DenseNEsts: Block-based DNN Models with Improved Representation Guarantees | 4 | neurips | 1 | 0 | 2023-06-16 16:05:40.978000 | https://github.com/kjason/resnest | 2 | ResNEsts and DenseNEsts: Block-based DNN Models with Improved Representation Guarantees | https://scholar.google.com/scholar?cluster=9550021969422604022&hl=en&as_sdt=0,5 | 1 | 2021 |
| Repulsive Deep Ensembles are Bayesian | 44 | neurips | 5 | 0 | 2023-06-16 16:05:41.182000 | https://github.com/ratschlab/repulsive_ensembles | 13 | Repulsive deep ensembles are bayesian | https://scholar.google.com/scholar?cluster=4880325796914110864&hl=en&as_sdt=0,5 | 5 | 2021 |
| Learning Compact Representations of Neural Networks using DiscriminAtive Masking (DAM) | 2 | neurips | 4 | 0 | 2023-06-16 16:05:41.383000 | https://github.com/jayroxis/dam-pytorch | 15 | Learning compact representations of neural networks using discriminative masking (dam) | https://scholar.google.com/scholar?cluster=14512990192508822553&hl=en&as_sdt=0,44 | 2 | 2021 |
| Neural Auto-Curricula in Two-Player Zero-Sum Games | 20 | neurips | 3 | 0 | 2023-06-16 16:05:41.584000 | https://github.com/waterhorse1/nac | 21 | Neural auto-curricula in two-player zero-sum games | https://scholar.google.com/scholar?cluster=9201661815839550883&hl=en&as_sdt=0,5 | 2 | 2021 |
| From global to local MDI variable importances for random forests and when they are Shapley values | 7 | neurips | 0 | 0 | 2023-06-16 16:05:41.792000 | https://github.com/asutera/local-mdi-importance | 3 | From global to local MDI variable importances for random forests and when they are Shapley values | https://scholar.google.com/scholar?cluster=18342054531367511892&hl=en&as_sdt=0,5 | 1 | 2021 |
| On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness | 41 | neurips | 11 | 0 | 2023-06-16 16:05:42.038000 | https://github.com/facebookresearch/augmentation-corruption | 38 | On interaction between augmentations and corruptions in natural corruption robustness | https://scholar.google.com/scholar?cluster=440630592288573899&hl=en&as_sdt=0,10 | 7 | 2021 |
| Dynamic Distillation Network for Cross-Domain Few-Shot Recognition with Unlabeled Data | 37 | neurips | 5 | 2 | 2023-06-16 16:05:42.239000 | https://github.com/asrafulashiq/dynamic-cdfsl | 25 | Dynamic distillation network for cross-domain few-shot recognition with unlabeled data | https://scholar.google.com/scholar?cluster=9716577277370774605&hl=en&as_sdt=0,5 | 4 | 2021 |
| The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations | 29 | neurips | 1 | 1 | 2023-06-16 16:05:42.450000 | https://github.com/peterbhase/ExplanationSearch | 15 | The out-of-distribution problem in explainability and search methods for feature importance explanations | https://scholar.google.com/scholar?cluster=11979193341973776256&hl=en&as_sdt=0,5 | 1 | 2021 |
| Control Variates for Slate Off-Policy Evaluation | 3 | neurips | 0 | 0 | 2023-06-16 16:05:42.653000 | https://github.com/fernandoamat/slateope | 3 | Control variates for slate off-policy evaluation | https://scholar.google.com/scholar?cluster=7057011324301771972&hl=en&as_sdt=0,5 | 1 | 2021 |
| Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation | 50 | neurips | 32 | 3 | 2023-06-16 16:05:42.856000 | https://github.com/nicklashansen/dmcontrol-generalization-benchmark | 121 | Stabilizing deep q-learning with convnets and vision transformers under data augmentation | https://scholar.google.com/scholar?cluster=6794503273897899990&hl=en&as_sdt=0,26 | 4 | 2021 |
| On Effective Scheduling of Model-based Reinforcement Learning | 6 | neurips | 0 | 0 | 2023-06-16 16:05:43.056000 | https://github.com/hanglai/autombpo | 11 | On effective scheduling of model-based reinforcement learning | https://scholar.google.com/scholar?cluster=11128521607771619105&hl=en&as_sdt=0,5 | 1 | 2021 |
| Removing Inter-Experimental Variability from Functional Data in Systems Neuroscience | 5 | neurips | 1 | 0 | 2023-06-16 16:05:43.259000 | https://github.com/eulerlab/rave | 8 | Removing inter-experimental variability from functional data in systems neuroscience | https://scholar.google.com/scholar?cluster=6596108345516212065&hl=en&as_sdt=0,5 | 3 | 2021 |
| Approximate Decomposable Submodular Function Minimization for Cardinality-Based Components | 3 | neurips | 1 | 0 | 2023-06-16 16:05:43.459000 | https://github.com/nveldt/SparseCardDSFM | 1 | Approximate decomposable submodular function minimization for cardinality-based components | https://scholar.google.com/scholar?cluster=7765734626612115875&hl=en&as_sdt=0,5 | 2 | 2021 |
| Two Sides of Meta-Learning Evaluation: In vs. Out of Distribution | 5 | neurips | 0 | 1 | 2023-06-16 16:05:43.659000 | https://github.com/ars22/meta-learning-eval-id-vs-ood | 1 | Two sides of meta-learning evaluation: In vs. out of distribution | https://scholar.google.com/scholar?cluster=3248310209715009715&hl=en&as_sdt=0,5 | 3 | 2021 |
| Debiased Visual Question Answering from Feature and Sample Perspectives | 19 | neurips | 7 | 7 | 2023-06-16 16:05:43.860000 | https://github.com/zhiquan-wen/d-vqa | 20 | Debiased visual question answering from feature and sample perspectives | https://scholar.google.com/scholar?cluster=9092713122749845551&hl=en&as_sdt=0,5 | 0 | 2021 |
| Towards a Unified Game-Theoretic View of Adversarial Perturbations and Robustness | 7 | neurips | 3 | 2 | 2023-06-16 16:05:44.060000 | https://github.com/jie-ren/a-unified-game-theoretic-interpretation-of-adversarial-robustness | 18 | Towards a unified game-theoretic view of adversarial perturbations and robustness | https://scholar.google.com/scholar?cluster=10405183538906234310&hl=en&as_sdt=0,5 | 1 | 2021 |
| On the Out-of-distribution Generalization of Probabilistic Image Modelling | 21 | neurips | 0 | 0 | 2023-06-16 16:05:44.260000 | https://github.com/zmtomorrow/nelloc | 9 | On the out-of-distribution generalization of probabilistic image modelling | https://scholar.google.com/scholar?cluster=16600938628354788442&hl=en&as_sdt=0,5 | 1 | 2021 |
| Information Directed Reward Learning for Reinforcement Learning | 6 | neurips | 1 | 0 | 2023-06-16 16:05:44.460000 | https://github.com/david-lindner/idrl | 9 | Information directed reward learning for reinforcement learning | https://scholar.google.com/scholar?cluster=8772252576862267451&hl=en&as_sdt=0,47 | 4 | 2021 |
| SSMF: Shifting Seasonal Matrix Factorization | 2 | neurips | 2 | 0 | 2023-06-16 16:05:44.668000 | https://github.com/kokikwbt/ssmf | 10 | Ssmf: Shifting seasonal matrix factorization | https://scholar.google.com/scholar?cluster=11697569962161025412&hl=en&as_sdt=0,6 | 1 | 2021 |
| Robust and differentially private mean estimation | 40 | neurips | 0 | 0 | 2023-06-16 16:05:44.871000 | https://github.com/xiyangl3/robust_dp | 3 | Robust and differentially private mean estimation | https://scholar.google.com/scholar?cluster=4295339113216361062&hl=en&as_sdt=0,5 | 2 | 2021 |
| Adaptable Agent Populations via a Generative Model of Policies | 8 | neurips | 0 | 2 | 2023-06-16 16:05:45.071000 | https://github.com/kennyderek/adap | 11 | Adaptable agent populations via a generative model of policies | https://scholar.google.com/scholar?cluster=11064961923408119459&hl=en&as_sdt=0,5 | 2 | 2021 |
| Mixed Supervised Object Detection by Transferring Mask Prior and Semantic Similarity | 11 | neurips | 3 | 3 | 2023-06-16 16:05:45.272000 | https://github.com/bcmi/tramas-weak-shot-object-detection | 50 | Mixed supervised object detection by transferring mask prior and semantic similarity | https://scholar.google.com/scholar?cluster=809819108668093612&hl=en&as_sdt=0,5 | 7 | 2021 |
| IQ-Learn: Inverse soft-Q Learning for Imitation | 43 | neurips | 26 | 5 | 2023-06-16 16:05:45.471000 | https://github.com/Div99/IQ-Learn | 135 | Iq-learn: Inverse soft-q learning for imitation | https://scholar.google.com/scholar?cluster=267480393884738505&hl=en&as_sdt=0,10 | 2 | 2021 |
| Task-Agnostic Undesirable Feature Deactivation Using Out-of-Distribution Data | 6 | neurips | 2 | 0 | 2023-06-16 16:05:45.672000 | https://github.com/kaist-dmlab/taufe | 7 | Task-agnostic undesirable feature deactivation using out-of-distribution data | https://scholar.google.com/scholar?cluster=15884866726240245144&hl=en&as_sdt=0,21 | 2 | 2021 |
| Speedy Performance Estimation for Neural Architecture Search | 19 | neurips | 0 | 1 | 2023-06-16 16:05:45.872000 | https://github.com/rubinxin/TSE | 8 | Speedy performance estimation for neural architecture search | https://scholar.google.com/scholar?cluster=12649354892939725087&hl=en&as_sdt=0,5 | 1 | 2021 |
| Environment Generation for Zero-Shot Compositional Reinforcement Learning | 19 | neurips | 7321 | 1026 | 2023-06-16 16:05:46.078000 | https://github.com/google-research/google-research | 29786 | Environment generation for zero-shot compositional reinforcement learning | https://scholar.google.com/scholar?cluster=4049956378759656568&hl=en&as_sdt=0,5 | 727 | 2021 |
| Optimizing Conditional Value-At-Risk of Black-Box Functions | 10 | neurips | 2 | 0 | 2023-06-16 16:05:46.278000 | https://github.com/qphong/bayesopt-lv | 1 | Optimizing conditional value-at-risk of black-box functions | https://scholar.google.com/scholar?cluster=1243167075412658030&hl=en&as_sdt=0,5 | 1 | 2021 |
| Revitalizing CNN Attention via Transformers in Self-Supervised Visual Representation Learning | 22 | neurips | 7 | 1 | 2023-06-16 16:05:46.479000 | https://github.com/chongjiange/care | 116 | Revitalizing cnn attention via transformers in self-supervised visual representation learning | https://scholar.google.com/scholar?cluster=11137326961804977691&hl=en&as_sdt=0,10 | 6 | 2021 |
| Learning to Learn Graph Topologies | 15 | neurips | 3 | 0 | 2023-06-16 16:05:46.680000 | https://github.com/xpuoxford/l2g-neurips2021 | 18 | Learning to learn graph topologies | https://scholar.google.com/scholar?cluster=6887973786384581527&hl=en&as_sdt=0,5 | 2 | 2021 |
| Reducing Collision Checking for Sampling-Based Motion Planning Using Graph Neural Networks | 18 | neurips | 9 | 0 | 2023-06-16 16:05:46.879000 | https://github.com/rainorangelemon/gnn-motion-planning | 61 | Reducing collision checking for sampling-based motion planning using graph neural networks | https://scholar.google.com/scholar?cluster=15148652525294899591&hl=en&as_sdt=0,5 | 5 | 2021 |
| Sample Complexity Bounds for Active Ranking from Multi-wise Comparisons | 1 | neurips | 0 | 0 | 2023-06-16 16:05:47.079000 | https://github.com/wenboren/multi-wise-ranking | 0 | Sample Complexity Bounds for Active Ranking from Multi-wise Comparisons | https://scholar.google.com/scholar?cluster=2656146730518033072&hl=en&as_sdt=0,33 | 1 | 2021 |
| Efficient Bayesian network structure learning via local Markov boundary search | 7 | neurips | 0 | 0 | 2023-06-16 16:05:47.279000 | https://github.com/minggao97/tam | 0 | Efficient Bayesian network structure learning via local Markov boundary search | https://scholar.google.com/scholar?cluster=9088410418328444112&hl=en&as_sdt=0,5 | 2 | 2021 |
| Learning Dynamic Graph Representation of Brain Connectome with Spatio-Temporal Attention | 36 | neurips | 13 | 3 | 2023-06-16 16:05:47.479000 | https://github.com/egyptdj/stagin | 53 | Learning dynamic graph representation of brain connectome with spatio-temporal attention | https://scholar.google.com/scholar?cluster=4519412816058293652&hl=en&as_sdt=0,21 | 2 | 2021 |
| Understanding the Generalization Benefit of Model Invariance from a Data Perspective | 15 | neurips | 0 | 0 | 2023-06-16 16:05:47.679000 | https://github.com/bangann/understanding-invariance | 1 | Understanding the generalization benefit of model invariance from a data perspective | https://scholar.google.com/scholar?cluster=6413093922837759333&hl=en&as_sdt=0,29 | 2 | 2021 |
| How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? | 20 | neurips | 2 | 1 | 2023-06-16 16:05:47.880000 | https://github.com/dongxinshuai/rift-neurips2021 | 10 | How should pre-trained language models be fine-tuned towards adversarial robustness? | https://scholar.google.com/scholar?cluster=6181501372653861648&hl=en&as_sdt=0,36 | 5 | 2021 |
| Recursive Bayesian Networks: Generalising and Unifying Probabilistic Context-Free Grammars and Dynamic Bayesian Networks | 3 | neurips | 2 | 0 | 2023-06-16 16:05:48.080000 | https://github.com/robert-lieck/rbn | 12 | Recursive Bayesian Networks: Generalising and Unifying Probabilistic Context-Free Grammars and Dynamic Bayesian Networks | https://scholar.google.com/scholar?cluster=7508062902599380178&hl=en&as_sdt=0,5 | 1 | 2021 |
| Combining Human Predictions with Model Probabilities via Confusion Matrices and Calibration | 17 | neurips | 5 | 0 | 2023-06-16 16:05:48.280000 | https://github.com/gavinkerrigan/conf_matrix_and_calibration | 7 | Combining human predictions with model probabilities via confusion matrices and calibration | https://scholar.google.com/scholar?cluster=12718168671257506616&hl=en&as_sdt=0,5 | 2 | 2021 |
| Probabilistic Attention for Interactive Segmentation | 2 | neurips | 9 | 0 | 2023-06-16 16:05:48.481000 | https://github.com/apple/ml-probabilistic-attention | 21 | Probabilistic attention for interactive segmentation | https://scholar.google.com/scholar?cluster=6574265597018003751&hl=en&as_sdt=0,47 | 6 | 2021 |
| Pruning Randomly Initialized Neural Networks with Iterative Randomization | 14 | neurips | 2 | 0 | 2023-06-16 16:05:48.680000 | https://github.com/dchiji-ntt/iterand | 9 | Pruning randomly initialized neural networks with iterative randomization | https://scholar.google.com/scholar?cluster=11749710093845056800&hl=en&as_sdt=0,7 | 2 | 2021 |
| Stability and Generalization of Bilevel Programming in Hyperparameter Optimization | 13 | neurips | 1 | 0 | 2023-06-16 16:05:48.880000 | https://github.com/baofff/stability_ho | 2 | Stability and generalization of bilevel programming in hyperparameter optimization | https://scholar.google.com/scholar?cluster=3805382994554865062&hl=en&as_sdt=0,5 | 1 | 2021 |
| Offline Meta Reinforcement Learning -- Identifiability Challenges and Effective Data Collection Strategies | 21 | neurips | 8 | 0 | 2023-06-16 16:05:49.080000 | https://github.com/Rondorf/BOReL | 20 | Offline Meta Reinforcement Learning--Identifiability Challenges and Effective Data Collection Strategies | https://scholar.google.com/scholar?cluster=3592419884384460621&hl=en&as_sdt=0,5 | 3 | 2021 |
| Flexible Option Learning | 11 | neurips | 0 | 0 | 2023-06-16 16:05:49.280000 | https://github.com/mklissa/moc | 7 | Flexible option learning | https://scholar.google.com/scholar?cluster=1622137245379658654&hl=en&as_sdt=0,15 | 2 | 2021 |
| Credit Assignment in Neural Networks through Deep Feedback Control | 16 | neurips | 1 | 0 | 2023-06-16 16:05:49.480000 | https://github.com/meulemansalex/deep_feedback_control | 6 | Credit assignment in neural networks through deep feedback control | https://scholar.google.com/scholar?cluster=10215619151456904513&hl=en&as_sdt=0,33 | 2 | 2021 |
| Neural Additive Models: Interpretable Machine Learning with Neural Nets | 245 | neurips | 6 | 0 | 2023-06-16 16:05:49.680000 | https://github.com/lemeln/nam | 14 | Neural additive models: Interpretable machine learning with neural nets | https://scholar.google.com/scholar?cluster=14127065231811177587&hl=en&as_sdt=0,18 | 0 | 2021 |
| Kernel Functional Optimisation | 4 | neurips | 0 | 1 | 2023-06-16 16:05:49.880000 | https://github.com/mailtoarunkumarav/kernelfunctionaloptimisation | 2 | Kernel functional optimisation | https://scholar.google.com/scholar?cluster=9446899252048844733&hl=en&as_sdt=0,13 | 1 | 2021 |
| Generalized Shape Metrics on Neural Representations | 22 | neurips | 8 | 2 | 2023-06-16 16:05:50.082000 | https://github.com/ahwillia/netrep | 83 | Generalized shape metrics on neural representations | https://scholar.google.com/scholar?cluster=3294259291908791528&hl=en&as_sdt=0,39 | 3 | 2021 |
| Towards Robust Bisimulation Metric Learning | 20 | neurips | 2 | 0 | 2023-06-16 16:05:50.282000 | https://github.com/metekemertas/RobustBisimulation | 6 | Towards robust bisimulation metric learning | https://scholar.google.com/scholar?cluster=167387616529603590&hl=en&as_sdt=0,5 | 2 | 2021 |
| Beyond BatchNorm: Towards a Unified Understanding of Normalization in Deep Learning | 22 | neurips | 1 | 0 | 2023-06-16 16:05:50.482000 | https://github.com/EkdeepSLubana/BeyondBatchNorm | 16 | Beyond batchnorm: Towards a unified understanding of normalization in deep learning | https://scholar.google.com/scholar?cluster=2227521021573022102&hl=en&as_sdt=0,31 | 3 | 2021 |
| Limiting fluctuation and trajectorial stability of multilayer neural networks with mean field training | 5 | neurips | 1 | 0 | 2023-06-16 16:05:50.683000 | https://github.com/npminh12/nn-clt | 0 | Limiting fluctuation and trajectorial stability of multilayer neural networks with mean field training | https://scholar.google.com/scholar?cluster=17789162731650605846&hl=en&as_sdt=0,36 | 2 | 2021 |
| Medical Dead-ends and Learning to Identify High-Risk States and Treatments | 21 | neurips | 15 | 0 | 2023-06-16 16:05:50.882000 | https://github.com/microsoft/med-deadend | 43 | Medical dead-ends and learning to identify high-risk states and treatments | https://scholar.google.com/scholar?cluster=7718917214677411862&hl=en&as_sdt=0,34 | 6 | 2021 |
| Batch Normalization Orthogonalizes Representations in Deep Random Networks | 20 | neurips | 1 | 0 | 2023-06-16 16:05:51.080000 | https://github.com/hadidaneshmand/batchnorm21 | 3 | Batch normalization orthogonalizes representations in deep random networks | https://scholar.google.com/scholar?cluster=8201984774954479451&hl=en&as_sdt=0,5 | 1 | 2021 |
| Support vector machines and linear regression coincide with very high-dimensional features | 15 | neurips | 0 | 0 | 2023-06-16 16:05:51.281000 | https://github.com/scO0rpion/SVM-Proliferation-NIPS2021 | 1 | Support vector machines and linear regression coincide with very high-dimensional features | https://scholar.google.com/scholar?cluster=9835458066237043881&hl=en&as_sdt=0,5 | 2 | 2021 |
| Offline RL Without Off-Policy Evaluation | 67 | neurips | 1 | 4 | 2023-06-16 16:05:51.481000 | https://github.com/davidbrandfonbrener/onestep-rl | 27 | Offline rl without off-policy evaluation | https://scholar.google.com/scholar?cluster=16078097822784982755&hl=en&as_sdt=0,47 | 2 | 2021 |
| Continuous vs. Discrete Optimization of Deep Neural Networks | 17 | neurips | 0 | 0 | 2023-06-16 16:05:51.680000 | https://github.com/elkabzo/cont_disc_opt_dnn | 0 | Continuous vs. discrete optimization of deep neural networks | https://scholar.google.com/scholar?cluster=6909198963909680227&hl=en&as_sdt=0,5 | 3 | 2021 |
| Can contrastive learning avoid shortcut solutions? | 60 | neurips | 2 | 1 | 2023-06-16 16:05:51.884000 | https://github.com/joshr17/IFM | 45 | Can contrastive learning avoid shortcut solutions? | https://scholar.google.com/scholar?cluster=705841367969128558&hl=en&as_sdt=0,5 | 1 | 2021 |
| Convex Polytope Trees | 1 | neurips | 1 | 0 | 2023-06-16 16:05:52.115000 | https://github.com/rezaarmand/Convex_Polytope_Trees | 2 | Convex Polytope Trees | https://scholar.google.com/scholar?cluster=2959989804162360758&hl=en&as_sdt=0,44 | 1 | 2021 |
| Noisy Recurrent Neural Networks | 27 | neurips | 1 | 1 | 2023-06-16 16:05:52.315000 | https://github.com/erichson/NoisyRNN | 1 | Noisy recurrent neural networks | https://scholar.google.com/scholar?cluster=6463637827089951262&hl=en&as_sdt=0,33 | 2 | 2021 |
| Matrix encoding networks for neural combinatorial optimization | 20 | neurips | 10 | 1 | 2023-06-16 16:05:52.515000 | https://github.com/yd-kwon/MatNet | 45 | Matrix encoding networks for neural combinatorial optimization | https://scholar.google.com/scholar?cluster=13176466295428561186&hl=en&as_sdt=0,33 | 4 | 2021 |
| Continuous Latent Process Flows | 8 | neurips | 5 | 0 | 2023-06-16 16:05:52.716000 | https://github.com/borealisai/continuous-latent-process-flows | 8 | Continuous latent process flows | https://scholar.google.com/scholar?cluster=10696451274107290963&hl=en&as_sdt=0,45 | 2 | 2021 |
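A minimal sketch of how a row of this table can be split back into a typed record. The column names come from the table header; the `parse_row` helper and `INT_COLUMNS` set are hypothetical, not part of any published loader for this dataset:

```python
# Hypothetical helper: parse one markdown table row of this dataset into a dict.
COLUMNS = [
    "title", "citations_google_scholar", "conference", "forks", "issues",
    "lastModified", "repo_url", "stars", "title_google_scholar",
    "url_google_scholar", "watchers", "year",
]
# Columns declared int64 in the schema note above (assumed set).
INT_COLUMNS = {"citations_google_scholar", "forks", "issues", "stars", "watchers", "year"}

def parse_row(line: str) -> dict:
    """Split a `| a | b | ... |` table row into a dict keyed by COLUMNS."""
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    if len(cells) != len(COLUMNS):
        raise ValueError(f"expected {len(COLUMNS)} cells, got {len(cells)}")
    record = dict(zip(COLUMNS, cells))
    for col in INT_COLUMNS:
        # Tolerate thousands separators such as "7,321" seen in viewer dumps.
        record[col] = int(record[col].replace(",", ""))
    return record

# One real row from the table above, used as a usage example.
row = ("| Luna: Linear Unified Nested Attention | 64 | neurips | 15 | 1 "
       "| 2023-06-16 16:05:33.123000 | https://github.com/XuezheMax/fairseq-apollo "
       "| 94 | Luna: Linear unified nested attention "
       "| https://scholar.google.com/scholar?cluster=15945065740745831634&hl=en&as_sdt=0,33 "
       "| 6 | 2021 |")
record = parse_row(row)
print(record["stars"], record["year"])  # 94 2021
```

Note that splitting on `|` is only safe here because no cell in this table contains a literal pipe; a general markdown-table parser would also need to handle escaped pipes.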