Column summary: `title` (string, length 8–155), `citations_google_scholar` (int64, 0–28.9k), `conference` (string, 5 classes), `forks` (int64, 0–46.3k), `issues` (int64, 0–12.2k), `lastModified` (string, length 19–26), `repo_url` (string, length 26–130), `stars` (int64, 0–75.9k), `title_google_scholar` (string, length 8–155), `url_google_scholar` (string, length 75–206), `watchers` (int64, 0–2.77k), `year` (int64, all 2022).

| title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Domain Generalization by Learning and Removing Domain-specific Features | 1 | neurips | 0 | 0 | 2023-06-16 22:59:19.310000 | https://github.com/yulearningg/LRDG | 8 | Domain Generalization by Learning and Removing Domain-specific Features | https://scholar.google.com/scholar?cluster=5494103796376605602&hl=en&as_sdt=0,10 | 1 | 2022 |
| Torsional Diffusion for Molecular Conformer Generation | 64 | neurips | 31 | 6 | 2023-06-16 22:59:19.524000 | https://github.com/gcorso/torsional-diffusion | 164 | Torsional diffusion for molecular conformer generation | https://scholar.google.com/scholar?cluster=1524640103154353919&hl=en&as_sdt=0,5 | 3 | 2022 |
| AgraSSt: Approximate Graph Stein Statistics for Interpretable Assessment of Implicit Graph Generators | 3 | neurips | 0 | 0 | 2023-06-16 22:59:19.735000 | https://github.com/wenkaixl/agrasst | 0 | AgraSSt: Approximate graph Stein statistics for interpretable assessment of implicit graph generators | https://scholar.google.com/scholar?cluster=8628149654729531365&hl=en&as_sdt=0,39 | 1 | 2022 |
| On the Limitations of Stochastic Pre-processing Defenses | 5 | neurips | 1 | 0 | 2023-06-16 22:59:19.947000 | https://github.com/wi-pi/stochastic-preprocessing-defenses | 0 | On the Limitations of Stochastic Pre-processing Defenses | https://scholar.google.com/scholar?cluster=7806519586437026308&hl=en&as_sdt=0,33 | 1 | 2022 |
| Proximal Point Imitation Learning | 3 | neurips | 1 | 0 | 2023-06-16 22:59:20.159000 | https://github.com/lviano/p2il | 3 | Proximal Point Imitation Learning | https://scholar.google.com/scholar?cluster=17949003719943015717&hl=en&as_sdt=0,5 | 1 | 2022 |
| Mining Unseen Classes via Regional Objectness: A Simple Baseline for Incremental Segmentation | 1 | neurips | 1 | 0 | 2023-06-16 22:59:20.371000 | https://github.com/zkzhang98/microseg | 8 | Mining Unseen Classes via Regional Objectness: A Simple Baseline for Incremental Segmentation | https://scholar.google.com/scholar?cluster=10459431178117202282&hl=en&as_sdt=0,3 | 1 | 2022 |
| Smoothed Embeddings for Certified Few-Shot Learning | 0 | neurips | 0 | 0 | 2023-06-16 22:59:20.582000 | https://github.com/koava36/certrob-fewshot | 2 | Smoothed Embeddings for Certified Few-Shot Learning | https://scholar.google.com/scholar?cluster=5547919878197628339&hl=en&as_sdt=0,31 | 0 | 2022 |
| Group Meritocratic Fairness in Linear Contextual Bandits | 0 | neurips | 0 | 0 | 2023-06-16 22:59:20.793000 | https://github.com/csml-iit-ucl/gmfbandits | 1 | Group Meritocratic Fairness in Linear Contextual Bandits | https://scholar.google.com/scholar?cluster=9571107907427385262&hl=en&as_sdt=0,5 | 4 | 2022 |
| Model-based Safe Deep Reinforcement Learning via a Constrained Proximal Policy Optimization Algorithm | 4 | neurips | 1 | 0 | 2023-06-16 22:59:21.005000 | https://github.com/akjayant/mbppol | 12 | Model-based safe deep reinforcement learning via a constrained proximal policy optimization algorithm | https://scholar.google.com/scholar?cluster=7177631673389924386&hl=en&as_sdt=0,5 | 2 | 2022 |
| An Adaptive Kernel Approach to Federated Learning of Heterogeneous Causal Effects | 1 | neurips | 0 | 0 | 2023-06-16 22:59:21.221000 | https://github.com/vothanhvinh/causalrff | 0 | An Adaptive Kernel Approach to Federated Learning of Heterogeneous Causal Effects | https://scholar.google.com/scholar?cluster=17335848302268873481&hl=en&as_sdt=0,25 | 2 | 2022 |
| Towards Improving Faithfulness in Abstractive Summarization | 4 | neurips | 0 | 0 | 2023-06-16 22:59:21.452000 | https://github.com/iriscxy/fes | 9 | Towards Improving Faithfulness in Abstractive Summarization | https://scholar.google.com/scholar?cluster=9202173853245340528&hl=en&as_sdt=0,5 | 2 | 2022 |
| ZIN: When and How to Learn Invariance Without Environment Partition? | 7 | neurips | 2 | 1 | 2023-06-16 22:59:21.663000 | https://github.com/linyongver/zin_official | 12 | ZIN: When and How to Learn Invariance Without Environment Partition? | https://scholar.google.com/scholar?cluster=16781280623432832625&hl=en&as_sdt=0,5 | 5 | 2022 |
| Active Surrogate Estimators: An Active Learning Approach to Label-Efficient Model Evaluation | 4 | neurips | 2 | 0 | 2023-06-16 22:59:21.875000 | https://github.com/jlko/active-surrogate-estimators | 5 | Active surrogate estimators: An active learning approach to label-efficient model evaluation | https://scholar.google.com/scholar?cluster=12181705407954202218&hl=en&as_sdt=0,36 | 1 | 2022 |
| HAPI: A Large-scale Longitudinal Dataset of Commercial ML API Predictions | 1 | neurips | 2 | 2 | 2023-06-16 22:59:22.087000 | https://github.com/lchen001/hapi | 16 | HAPI: A large-scale longitudinal dataset of commercial ML API predictions | https://scholar.google.com/scholar?cluster=5762229029469931969&hl=en&as_sdt=0,5 | 2 | 2022 |
| Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos | 69 | neurips | 105 | 6 | 2023-06-16 22:59:22.299000 | https://github.com/openai/Video-Pre-Training | 944 | Video pretraining (vpt): Learning to act by watching unlabeled online videos | https://scholar.google.com/scholar?cluster=17704984102832894583&hl=en&as_sdt=0,43 | 27 | 2022 |
| GLOBEM Dataset: Multi-Year Datasets for Longitudinal Human Behavior Modeling Generalization | 1 | neurips | 21 | 2 | 2023-06-16 22:59:22.512000 | https://github.com/uw-exp/globem | 110 | GLOBEM Dataset: Multi-Year Datasets for Longitudinal Human Behavior Modeling Generalization | https://scholar.google.com/scholar?cluster=8900774154166669565&hl=en&as_sdt=0,15 | 11 | 2022 |
| Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost | 0 | neurips | 0 | 0 | 2023-06-16 22:59:22.724000 | https://github.com/sc782/sbm-transformer | 10 | Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost | https://scholar.google.com/scholar?cluster=8950920198279158483&hl=en&as_sdt=0,41 | 1 | 2022 |
| NeoRL: A Near Real-World Benchmark for Offline Reinforcement Learning | 17 | neurips | 10 | 3 | 2023-06-16 22:59:22.935000 | https://github.com/polixir/NeoRL | 76 | NeoRL: A near real-world benchmark for offline reinforcement learning | https://scholar.google.com/scholar?cluster=4124559435421105174&hl=en&as_sdt=0,34 | 4 | 2022 |
| Counterfactual Temporal Point Processes | 7 | neurips | 3 | 0 | 2023-06-16 22:59:23.146000 | https://github.com/networks-learning/counterfactual-ttp | 11 | Counterfactual temporal point processes | https://scholar.google.com/scholar?cluster=5471926667923328181&hl=en&as_sdt=0,14 | 3 | 2022 |
| Dungeons and Data: A Large-Scale NetHack Dataset | 1 | neurips | 102 | 16 | 2023-06-16 22:59:23.358000 | https://github.com/facebookresearch/nle | 871 | Dungeons and Data: A Large-Scale NetHack Dataset | https://scholar.google.com/scholar?cluster=10376659435054658161&hl=en&as_sdt=0,5 | 29 | 2022 |
| GenSDF: Two-Stage Learning of Generalizable Signed Distance Functions | 7 | neurips | 10 | 0 | 2023-06-16 22:59:23.569000 | https://github.com/princeton-computational-imaging/gensdf | 87 | GenSDF: Two-Stage Learning of Generalizable Signed Distance Functions | https://scholar.google.com/scholar?cluster=11531522694580627214&hl=en&as_sdt=0,41 | 7 | 2022 |
| Forecasting Human Trajectory from Scene History | 3 | neurips | 3 | 3 | 2023-06-16 22:59:23.780000 | https://github.com/makaruinah/shenet | 11 | Forecasting human trajectory from scene history | https://scholar.google.com/scholar?cluster=5059609174660170314&hl=en&as_sdt=0,47 | 1 | 2022 |
| Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure | 7 | neurips | 11 | 0 | 2023-06-16 22:59:23.992000 | https://github.com/googlebaba/disc | 26 | Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure | https://scholar.google.com/scholar?cluster=8726960753760538986&hl=en&as_sdt=0,5 | 3 | 2022 |
| Asymptotics of $\ell_2$ Regularized Network Embeddings | 0 | neurips | 0 | 0 | 2023-06-16 22:59:24.204000 | https://github.com/aday651/embed-reg | 0 | Asymptotics of Regularized Network Embeddings | https://scholar.google.com/scholar?cluster=10375708066059724613&hl=en&as_sdt=0,5 | 1 | 2022 |
| On Embeddings for Numerical Features in Tabular Deep Learning | 19 | neurips | 18 | 1 | 2023-06-16 22:59:24.416000 | https://github.com/Yura52/tabular-dl-num-embeddings | 170 | On embeddings for numerical features in tabular deep learning | https://scholar.google.com/scholar?cluster=2553810460800723920&hl=en&as_sdt=0,1 | 4 | 2022 |
| Visual Prompting via Image Inpainting | 29 | neurips | 12 | 4 | 2023-06-16 22:59:24.628000 | https://github.com/amirbar/visual_prompting | 197 | Visual prompting via image inpainting | https://scholar.google.com/scholar?cluster=15899337886963537746&hl=en&as_sdt=0,5 | 12 | 2022 |
| OpenAUC: Towards AUC-Oriented Open-Set Recognition | 1 | neurips | 0 | 0 | 2023-06-16 22:59:24.839000 | https://github.com/wang22ti/openauc | 5 | OpenAUC: Towards AUC-Oriented Open-Set Recognition | https://scholar.google.com/scholar?cluster=17140867226806315612&hl=en&as_sdt=0,5 | 2 | 2022 |
| Reduction Algorithms for Persistence Diagrams of Networks: CoralTDA and PrunIT | 0 | neurips | 0 | 0 | 2023-06-16 22:59:25.051000 | https://github.com/cakcora/PersistentHomologyWithCoralPrunit | 3 | Reduction Algorithms for Persistence Diagrams of Networks: CoralTDA and PrunIT | https://scholar.google.com/scholar?cluster=7224655115635333850&hl=en&as_sdt=0,10 | 2 | 2022 |
| GAUDI: A Neural Architect for Immersive 3D Scene Generation | 28 | neurips | 24 | 0 | 2023-06-16 22:59:25.264000 | https://github.com/apple/ml-gaudi | 577 | Gaudi: A neural architect for immersive 3d scene generation | https://scholar.google.com/scholar?cluster=14944404431434808615&hl=en&as_sdt=0,13 | 36 | 2022 |
| Mask-based Latent Reconstruction for Reinforcement Learning | 5 | neurips | 2 | 0 | 2023-06-16 22:59:25.476000 | https://github.com/microsoft/Mask-based-Latent-Reconstruction | 21 | Mask-based latent reconstruction for reinforcement learning | https://scholar.google.com/scholar?cluster=11030675521552103190&hl=en&as_sdt=0,5 | 5 | 2022 |
| Product Ranking for Revenue Maximization with Multiple Purchases | 1 | neurips | 0 | 0 | 2023-06-16 22:59:25.688000 | https://github.com/windxrz/mpb-ucb | 3 | Product Ranking for Revenue Maximization with Multiple Purchases | https://scholar.google.com/scholar?cluster=5497221065518652797&hl=en&as_sdt=0,33 | 1 | 2022 |
| One Model to Edit Them All: Free-Form Text-Driven Image Manipulation with Semantic Modulations | 7 | neurips | 0 | 1 | 2023-06-16 22:59:25.899000 | https://github.com/kumapowerliu/ffclip | 36 | One model to edit them all: Free-form text-driven image manipulation with semantic modulations | https://scholar.google.com/scholar?cluster=9106501574546184017&hl=en&as_sdt=0,47 | 6 | 2022 |
| LieGG: Studying Learned Lie Group Generators | 5 | neurips | 0 | 0 | 2023-06-16 22:59:26.110000 | https://github.com/amoskalev/liegg | 3 | LieGG: Studying Learned Lie Group Generators | https://scholar.google.com/scholar?cluster=6458900076329173639&hl=en&as_sdt=0,5 | 1 | 2022 |
| FourierNets enable the design of highly non-local optical encoders for computational imaging | 2 | neurips | 2 | 0 | 2023-06-16 22:59:26.323000 | https://github.com/turagalab/snapshotscope | 3 | FourierNets enable the design of highly non-local optical encoders for computational imaging | https://scholar.google.com/scholar?cluster=17235458650551264923&hl=en&as_sdt=0,47 | 3 | 2022 |
| Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks | 1 | neurips | 2 | 0 | 2023-06-16 22:59:26.534000 | https://github.com/dchiji-ntt/meta-ticket | 4 | Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks | https://scholar.google.com/scholar?cluster=355485473057301987&hl=en&as_sdt=0,5 | 1 | 2022 |
| LAION-5B: An open large-scale dataset for training next generation image-text models | 310 | neurips | 548 | 89 | 2023-06-16 22:59:26.746000 | https://github.com/mlfoundations/open_clip | 5243 | Laion-5b: An open large-scale dataset for training next generation image-text models | https://scholar.google.com/scholar?cluster=8018158103125985189&hl=en&as_sdt=0,36 | 59 | 2022 |
| Constants of motion network | 2 | neurips | 0 | 0 | 2023-06-16 22:59:26.957000 | https://github.com/machine-discovery/comet | 2 | Constants of motion network | https://scholar.google.com/scholar?cluster=10578402621842665146&hl=en&as_sdt=0,33 | 3 | 2022 |
| Online Deep Equilibrium Learning for Regularization by Denoising | 6 | neurips | 8 | 2 | 2023-06-16 22:59:27.168000 | https://github.com/phernst/pytorch_radon | 22 | Online deep equilibrium learning for regularization by denoising | https://scholar.google.com/scholar?cluster=12374699513175757258&hl=en&as_sdt=0,6 | 2 | 2022 |
| Earthformer: Exploring Space-Time Transformers for Earth System Forecasting | 12 | neurips | 41 | 2 | 2023-06-16 22:59:27.380000 | https://github.com/amazon-science/earth-forecasting-transformer | 212 | Earthformer: Exploring space-time transformers for earth system forecasting | https://scholar.google.com/scholar?cluster=6165560125598001271&hl=en&as_sdt=0,5 | 11 | 2022 |
| Benchopt: Reproducible, efficient and collaborative optimization benchmarks | 6 | neurips | 35 | 85 | 2023-06-16 22:59:27.591000 | https://github.com/benchopt/benchopt | 158 | Benchopt: Reproducible, efficient and collaborative optimization benchmarks | https://scholar.google.com/scholar?cluster=3504541958783431314&hl=en&as_sdt=0,33 | 6 | 2022 |
| SketchBoost: Fast Gradient Boosted Decision Tree for Multioutput Problems | 1 | neurips | 1 | 0 | 2023-06-16 22:59:27.803000 | https://github.com/sb-ai-lab/sketchboost-paper | 9 | SketchBoost: Fast Gradient Boosted Decision Tree for Multioutput Problems | https://scholar.google.com/scholar?cluster=12204750564848511287&hl=en&as_sdt=0,5 | 2 | 2022 |
| Decentralized Training of Foundation Models in Heterogeneous Environments | 10 | neurips | 10 | 3 | 2023-06-16 22:59:28.016000 | https://github.com/DS3Lab/DT-FM | 59 | Decentralized training of foundation models in heterogeneous environments | https://scholar.google.com/scholar?cluster=13763983237898796416&hl=en&as_sdt=0,3 | 1 | 2022 |
| Cross Aggregation Transformer for Image Restoration | 11 | neurips | 5 | 2 | 2023-06-16 22:59:28.230000 | https://github.com/zhengchen1999/cat | 77 | Cross Aggregation Transformer for Image Restoration | https://scholar.google.com/scholar?cluster=17495936545828523011&hl=en&as_sdt=0,43 | 3 | 2022 |
| DIMES: A Differentiable Meta Solver for Combinatorial Optimization Problems | 5 | neurips | 3 | 0 | 2023-06-16 22:59:28.452000 | https://github.com/dimesteam/dimes | 24 | DIMES: A Differentiable Meta Solver for Combinatorial Optimization Problems | https://scholar.google.com/scholar?cluster=7607751078671404883&hl=en&as_sdt=0,47 | 3 | 2022 |
| NSNet: A General Neural Probabilistic Framework for Satisfiability Problems | 0 | neurips | 3 | 0 | 2023-06-16 22:59:28.664000 | https://github.com/zhaoyu-li/nsnet | 13 | NSNet: A General Neural Probabilistic Framework for Satisfiability Problems | https://scholar.google.com/scholar?cluster=1383198639431989116&hl=en&as_sdt=0,5 | 1 | 2022 |
| Brain Network Transformer | 16 | neurips | 8 | 1 | 2023-06-16 22:59:28.876000 | https://github.com/wayfear/brainnetworktransformer | 34 | Brain network transformer | https://scholar.google.com/scholar?cluster=10818376030441199053&hl=en&as_sdt=0,31 | 2 | 2022 |
| Improved Utility Analysis of Private CountSketch | 4 | neurips | 0 | 0 | 2023-06-16 22:59:29.087000 | https://github.com/rasmus-pagh/private-countsketch | 3 | Improved Utility Analysis of Private CountSketch | https://scholar.google.com/scholar?cluster=9045975206203918002&hl=en&as_sdt=0,14 | 1 | 2022 |
| Improving Diffusion Models for Inverse Problems using Manifold Constraints | 53 | neurips | 15 | 1 | 2023-06-16 22:59:29.299000 | https://github.com/hj-harry/mcg_diffusion | 121 | Improving diffusion models for inverse problems using manifold constraints | https://scholar.google.com/scholar?cluster=18097862330271049483&hl=en&as_sdt=0,11 | 5 | 2022 |
| Deep Model Reassembly | 31 | neurips | 7 | 2 | 2023-06-16 22:59:29.511000 | https://github.com/adamdad/dery | 176 | Deep model reassembly | https://scholar.google.com/scholar?cluster=17041268371866200453&hl=en&as_sdt=0,43 | 2 | 2022 |
| BigBio: A Framework for Data-Centric Biomedical Natural Language Processing | 11 | neurips | 100 | 187 | 2023-06-16 22:59:29.723000 | https://github.com/bigscience-workshop/biomedical | 335 | Bigbio: a framework for data-centric biomedical natural language processing | https://scholar.google.com/scholar?cluster=16248185859280855738&hl=en&as_sdt=0,33 | 27 | 2022 |
| Gradient Estimation with Discrete Stein Operators | 6 | neurips | 0 | 0 | 2023-06-16 22:59:29.934000 | https://github.com/thjashin/rodeo | 15 | Gradient estimation with discrete Stein operators | https://scholar.google.com/scholar?cluster=17367160563592360698&hl=en&as_sdt=0,5 | 3 | 2022 |
| Rapidly Mixing Multiple-try Metropolis Algorithms for Model Selection Problems | 2 | neurips | 0 | 0 | 2023-06-16 22:59:30.145000 | https://github.com/changwoo-lee/rapidmtm | 3 | Rapidly mixing multiple-try Metropolis algorithms for model selection problems | https://scholar.google.com/scholar?cluster=8288484673444745873&hl=en&as_sdt=0,38 | 1 | 2022 |
| Online Agnostic Multiclass Boosting | 0 | neurips | 0 | 0 | 2023-06-16 22:59:30.356000 | https://github.com/vinodkraman/onlineagnosticmulticlassboosting | 0 | Online Agnostic Multiclass Boosting | https://scholar.google.com/scholar?cluster=17530449480068506498&hl=en&as_sdt=0,31 | 2 | 2022 |
| A contrastive rule for meta-learning | 14 | neurips | 0 | 1 | 2023-06-16 22:59:30.569000 | https://github.com/smonsays/contrastive-meta-learning | 9 | A contrastive rule for meta-learning | https://scholar.google.com/scholar?cluster=1536313672687965148&hl=en&as_sdt=0,33 | 1 | 2022 |
| Distinguishing Learning Rules with Brain Machine Interfaces | 2 | neurips | 0 | 0 | 2023-06-16 22:59:30.780000 | https://github.com/jacobfulano/learning-rules-with-bmi | 0 | Distinguishing learning rules with brain machine interfaces | https://scholar.google.com/scholar?cluster=11051656974979640667&hl=en&as_sdt=0,5 | 2 | 2022 |
| Evaluation beyond Task Performance: Analyzing Concepts in AlphaZero in Hex | 1 | neurips | 0 | 0 | 2023-06-16 22:59:30.991000 | https://github.com/jzf2101/alphatology | 3 | Evaluation Beyond Task Performance: Analyzing Concepts in AlphaZero in Hex | https://scholar.google.com/scholar?cluster=8603391882567020641&hl=en&as_sdt=0,5 | 1 | 2022 |
| Semi-supervised Semantic Segmentation with Prototype-based Consistency Regularization | 7 | neurips | 0 | 3 | 2023-06-16 22:59:31.203000 | https://github.com/heimingx/semi_seg_proto | 22 | Semi-supervised semantic segmentation with prototype-based consistency regularization | https://scholar.google.com/scholar?cluster=2500907054917724227&hl=en&as_sdt=0,5 | 2 | 2022 |
| Benchmarking and Analyzing 3D Human Pose and Shape Estimation Beyond Algorithms | 8 | neurips | 4 | 1 | 2023-06-16 22:59:31.414000 | https://github.com/smplbody/hmr-benchmarks | 106 | Benchmarking and Analyzing 3D Human Pose and Shape Estimation Beyond Algorithms | https://scholar.google.com/scholar?cluster=4376621748772936242&hl=en&as_sdt=0,24 | 8 | 2022 |
| TTOpt: A Maximum Volume Quantized Tensor Train-based Optimization and its Application to Reinforcement Learning | 10 | neurips | 0 | 0 | 2023-06-16 22:59:31.626000 | https://github.com/andreichertkov/ttopt | 13 | TTOpt: A maximum volume quantized tensor train-based optimization and its application to reinforcement learning | https://scholar.google.com/scholar?cluster=6175341780524530089&hl=en&as_sdt=0,3 | 2 | 2022 |
| A Mixture Of Surprises for Unsupervised Reinforcement Learning | 1 | neurips | 0 | 1 | 2023-06-16 22:59:31.838000 | https://github.com/leaplabthu/moss | 13 | A Mixture of Surprises for Unsupervised Reinforcement Learning | https://scholar.google.com/scholar?cluster=9731296982002152035&hl=en&as_sdt=0,39 | 2 | 2022 |
| PeRFception: Perception using Radiance Fields | 2 | neurips | 15 | 9 | 2023-06-16 22:59:32.049000 | https://github.com/POSTECH-CVLab/PeRFception | 301 | PeRFception: Perception using Radiance Fields | https://scholar.google.com/scholar?cluster=13895322647029601648&hl=en&as_sdt=0,5 | 14 | 2022 |
| Generalized Delayed Feedback Model with Post-Click Information in Recommender Systems | 0 | neurips | 1 | 0 | 2023-06-16 22:59:32.261000 | https://github.com/ThyrixYang/gdfm_nips22 | 8 | Generalized Delayed Feedback Model with Post-Click Information in Recommender Systems | https://scholar.google.com/scholar?cluster=10242536886571160185&hl=en&as_sdt=0,10 | 2 | 2022 |
| A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks | 2 | neurips | 0 | 0 | 2023-06-16 22:59:32.472000 | https://github.com/mingruiliu-ml-lab/communication-efficient-local-gradient-clipping | 0 | A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks | https://scholar.google.com/scholar?cluster=5333604100052232790&hl=en&as_sdt=0,3 | 0 | 2022 |
| On Analyzing Generative and Denoising Capabilities of Diffusion-based Deep Generative Models | 5 | neurips | 1 | 0 | 2023-06-16 22:59:32.684000 | https://github.com/kamildeja/analysing_ddgm | 5 | On analyzing generative and denoising capabilities of diffusion-based deep generative models | https://scholar.google.com/scholar?cluster=995225694240773141&hl=en&as_sdt=0,10 | 1 | 2022 |
| DiSC: Differential Spectral Clustering of Features | 0 | neurips | 1 | 0 | 2023-06-16 22:59:32.896000 | https://github.com/Mishne-Lab/DiSC | 2 | DiSC: Differential Spectral Clustering of Features | https://scholar.google.com/scholar?cluster=7617996408610291337&hl=en&as_sdt=0,33 | 2 | 2022 |
| UViM: A Unified Modeling Approach for Vision with Learned Guiding Codes | 24 | neurips | 60 | 6 | 2023-06-16 22:59:33.108000 | https://github.com/google-research/big_vision | 890 | Uvim: A unified modeling approach for vision with learned guiding codes | https://scholar.google.com/scholar?cluster=13016594180316687621&hl=en&as_sdt=0,5 | 23 | 2022 |
| Proximal Learning With Opponent-Learning Awareness | 1 | neurips | 3 | 4 | 2023-06-16 22:59:33.320000 | https://github.com/silent-zebra/pola | 4 | Proximal Learning With Opponent-Learning Awareness | https://scholar.google.com/scholar?cluster=6796004730417376000&hl=en&as_sdt=0,5 | 5 | 2022 |
| Coresets for Wasserstein Distributionally Robust Optimization Problems | 0 | neurips | 0 | 0 | 2023-06-16 22:59:33.532000 | https://github.com/h305142/wdro_coreset | 2 | Coresets for Wasserstein Distributionally Robust Optimization Problems | https://scholar.google.com/scholar?cluster=15328832581114921682&hl=en&as_sdt=0,5 | 2 | 2022 |
| ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning | 5 | neurips | 2 | 8 | 2023-06-16 22:59:33.744000 | https://github.com/linziyi96/st-adapter | 20 | St-adapter: Parameter-efficient image-to-video transfer learning | https://scholar.google.com/scholar?cluster=16710270545076573950&hl=en&as_sdt=0,23 | 6 | 2022 |
| Can Adversarial Training Be Manipulated By Non-Robust Features? | 1 | neurips | 0 | 0 | 2023-06-16 22:59:33.955000 | https://github.com/tlmichael/hypocritical-perturbation | 2 | Can Adversarial Training Be Manipulated By Non-Robust Features? | https://scholar.google.com/scholar?cluster=7120256042443644794&hl=en&as_sdt=0,3 | 1 | 2022 |
| Generalizing Goal-Conditioned Reinforcement Learning with Variational Causal Reasoning | 7 | neurips | 1 | 0 | 2023-06-16 22:59:34.166000 | https://github.com/gilgameshd/grader | 13 | Generalizing Goal-Conditioned Reinforcement Learning with Variational Causal Reasoning | https://scholar.google.com/scholar?cluster=3294634165353463281&hl=en&as_sdt=0,10 | 2 | 2022 |
| WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models | 6 | neurips | 0 | 0 | 2023-06-16 22:59:34.378000 | https://github.com/winogavil/winogavil-experiments | 1 | WinoGAViL: Gamified association benchmark to challenge vision-and-language models | https://scholar.google.com/scholar?cluster=2502557314883549286&hl=en&as_sdt=0,26 | 1 | 2022 |
| Elucidating the Design Space of Diffusion-Based Generative Models | 182 | neurips | 52 | 4 | 2023-06-16 22:59:34.589000 | https://github.com/nvlabs/edm | 611 | Elucidating the design space of diffusion-based generative models | https://scholar.google.com/scholar?cluster=5258718823597512255&hl=en&as_sdt=0,5 | 28 | 2022 |
| Chaotic Regularization and Heavy-Tailed Limits for Deterministic Gradient Descent | 2 | neurips | 0 | 0 | 2023-06-16 22:59:34.802000 | https://github.com/shoelim/mpgd | 1 | Chaotic regularization and heavy-tailed limits for deterministic gradient descent | https://scholar.google.com/scholar?cluster=15394418026673969383&hl=en&as_sdt=0,48 | 2 | 2022 |
| SMPL: Simulated Industrial Manufacturing and Process Control Learning Environments | 0 | neurips | 2 | 1 | 2023-06-16 22:59:35.014000 | https://github.com/smpl-env/smpl | 12 | SMPL: Simulated Industrial Manufacturing and Process Control Learning Environments | https://scholar.google.com/scholar?cluster=11656776523650390398&hl=en&as_sdt=0,33 | 3 | 2022 |
| The Stability-Efficiency Dilemma: Investigating Sequence Length Warmup for Training GPT Models | 8 | neurips | 3103 | 884 | 2023-06-16 22:59:35.235000 | https://github.com/microsoft/DeepSpeed | 25950 | The stability-efficiency dilemma: Investigating sequence length warmup for training GPT models | https://scholar.google.com/scholar?cluster=2863317000596137587&hl=en&as_sdt=0,44 | 290 | 2022 |
| Generalization Gap in Amortized Inference | 5 | neurips | 0 | 0 | 2023-06-16 22:59:35.455000 | https://github.com/zmtomorrow/generalizationgapinamortizedinference | 1 | Generalization gap in amortized inference | https://scholar.google.com/scholar?cluster=8684926098848417995&hl=en&as_sdt=0,1 | 1 | 2022 |
| PulseImpute: A Novel Benchmark Task for Pulsative Physiological Signal Imputation | 0 | neurips | 0 | 0 | 2023-06-16 22:59:35.667000 | https://github.com/rehg-lab/pulseimpute | 14 | PulseImpute: A Novel Benchmark Task for Pulsative Physiological Signal Imputation | https://scholar.google.com/scholar?cluster=16974721368633186321&hl=en&as_sdt=0,5 | 4 | 2022 |
| What are the best Systems? New Perspectives on NLP Benchmarking | 7 | neurips | 3 | 3 | 2023-06-16 22:59:35.879000 | https://github.com/pierrecolombo/rankingnlpsystems | 12 | What are the best systems? new perspectives on nlp benchmarking | https://scholar.google.com/scholar?cluster=6399800265216949784&hl=en&as_sdt=0,33 | 1 | 2022 |
| Learning from Label Proportions by Learning with Label Noise | 4 | neurips | 0 | 0 | 2023-06-16 22:59:36.091000 | https://github.com/z-jianxin/llpfc | 3 | Learning from label proportions by learning with label noise | https://scholar.google.com/scholar?cluster=5147088171143783724&hl=en&as_sdt=0,5 | 1 | 2022 |
| Dynamics of SGD with Stochastic Polyak Stepsizes: Truly Adaptive Variants and Convergence to Exact Solution | 5 | neurips | 0 | 0 | 2023-06-16 22:59:36.302000 | https://github.com/aorvieto/decsps | 4 | Dynamics of sgd with stochastic polyak stepsizes: Truly adaptive variants and convergence to exact solution | https://scholar.google.com/scholar?cluster=1202208377216276410&hl=en&as_sdt=0,25 | 1 | 2022 |
| BOND: Benchmarking Unsupervised Outlier Node Detection on Static Attributed Graphs | 15 | neurips | 92 | 3 | 2023-06-16 22:59:36.515000 | https://github.com/pygod-team/pygod | 906 | Bond: Benchmarking unsupervised outlier node detection on static attributed graphs | https://scholar.google.com/scholar?cluster=4649486946947801284&hl=en&as_sdt=0,10 | 11 | 2022 |
| Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training | 45 | neurips | 17 | 5 | 2023-06-16 22:59:36.727000 | https://github.com/zrrskywalker/point-m2ae | 130 | Point-M2AE: multi-scale masked autoencoders for hierarchical point cloud pre-training | https://scholar.google.com/scholar?cluster=8230127879015912569&hl=en&as_sdt=0,21 | 11 | 2022 |
| Exploring Example Influence in Continual Learning | 8 | neurips | 0 | 0 | 2023-06-16 22:59:36.938000 | https://github.com/sssunqing/example_influence_cl | 14 | Exploring Example Influence in Continual Learning | https://scholar.google.com/scholar?cluster=4168097285182390934&hl=en&as_sdt=0,5 | 1 | 2022 |
| Subspace clustering in high-dimensions: Phase transitions & Statistical-to-Computational gap | 0 | neurips | 1 | 0 | 2023-06-16 22:59:37.150000 | https://github.com/lucpoisson/subspaceclustering | 1 | Subspace clustering in high-dimensions: Phase transitions & Statistical-to-Computational gap | https://scholar.googleusercontent.com/scholar?q=cache:4HwC6_qwHhoJ:scholar.google.com/+Subspace+clustering+in+high-dimensions:+Phase+transitions+%26+Statistical-to-Computational+gap&hl=en&as_sdt=0,3 | 1 | 2022 |
| How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders | 4 | neurips | 4 | 0 | 2023-06-16 22:59:37.362000 | https://github.com/zhangq327/u-mae | 32 | How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders | https://scholar.google.com/scholar?cluster=12421230382199683849&hl=en&as_sdt=0,34 | 3 | 2022 |
| Improving Out-of-Distribution Generalization by Adversarial Training with Structured Priors | 1 | neurips | 0 | 0 | 2023-06-16 22:59:37.575000 | https://github.com/novaglow646/nips22-mat-and-ldat-for-ood | 6 | Improving Out-of-Distribution Generalization by Adversarial Training with Structured Priors | https://scholar.google.com/scholar?cluster=847890003773472313&hl=en&as_sdt=0,44 | 1 | 2022 |
| ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers | 34 | neurips | 3103 | 884 | 2023-06-16 22:59:37.787000 | https://github.com/microsoft/DeepSpeed | 25950 | Zeroquant: Efficient and affordable post-training quantization for large-scale transformers | https://scholar.google.com/scholar?cluster=14601198018737164595&hl=en&as_sdt=0,33 | 290 | 2022 |
| ProtoX: Explaining a Reinforcement Learning Agent via Prototyping | 0 | neurips | 0 | 0 | 2023-06-16 22:59:37.998000 | https://github.com/rrags/ProtoX_NeurIPS | 2 | ProtoX: Explaining a Reinforcement Learning Agent via Prototyping | https://scholar.google.com/scholar?cluster=15061235494194718383&hl=en&as_sdt=0,46 | 1 | 2022 |
| NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation | 12 | neurips | 3 | 0 | 2023-06-16 22:59:38.210000 | https://github.com/taesikgong/note | 25 | Note: Robust continual test-time adaptation against temporal correlation | https://scholar.google.com/scholar?cluster=342119686943612993&hl=en&as_sdt=0,5 | 1 | 2022 |
| Margin-Based Few-Shot Class-Incremental Learning with Class-Level Overfitting Mitigation | 6 | neurips | 1 | 2 | 2023-06-16 22:59:38.422000 | https://github.com/zoilsen/clom | 6 | Margin-Based Few-Shot Class-Incremental Learning with Class-Level Overfitting Mitigation | https://scholar.google.com/scholar?cluster=13623703872858769843&hl=en&as_sdt=0,3 | 1 | 2022 |
| Forecasting Future World Events With Neural Networks | 7 | neurips | 50 | 1 | 2023-06-16 22:59:38.635000 | https://github.com/andyzoujm/autocast | 157 | Forecasting Future World Events with Neural Networks | https://scholar.google.com/scholar?cluster=17792483394679760594&hl=en&as_sdt=0,5 | 7 | 2022 |
| Autoregressive Perturbations for Data Poisoning | 9 | neurips | 3 | 0 | 2023-06-16 22:59:38.847000 | https://github.com/psandovalsegura/autoregressive-poisoning | 10 | Autoregressive perturbations for data poisoning | https://scholar.google.com/scholar?cluster=17109390722215919135&hl=en&as_sdt=0,47 | 3 | 2022 |
| ESCADA: Efficient Safety and Context Aware Dose Allocation for Precision Medicine | 1 | neurips | 1 | 0 | 2023-06-16 22:59:39.059000 | https://github.com/bilkent-cyborg/escada | 1 | Escada: Efficient safety and context aware dose allocation for precision medicine | https://scholar.google.com/scholar?cluster=12799291179330941239&hl=en&as_sdt=0,3 | 1 | 2022 |
| Improved Algorithms for Neural Active Learning | 2 | neurips | 0 | 0 | 2023-06-16 22:59:39.270000 | https://github.com/matouk98/i-neural | 0 | Improved Algorithms for Neural Active Learning | https://scholar.google.com/scholar?cluster=14846687732402339872&hl=en&as_sdt=0,33 | 1 | 2022 |
| CUP: Critic-Guided Policy Reuse | 1 | neurips | 1 | 0 | 2023-06-16 22:59:39.482000 | https://github.com/nagisazj/cup | 4 | CUP: Critic-Guided Policy Reuse | https://scholar.google.com/scholar?cluster=11253594017005639050&hl=en&as_sdt=0,39 | 1 | 2022 |
| QUARK: Controllable Text Generation with Reinforced Unlearning | 25 | neurips | 9 | 1 | 2023-06-16 22:59:39.693000 | https://github.com/gximinglu/quark | 50 | Quark: Controllable text generation with reinforced unlearning | https://scholar.google.com/scholar?cluster=15982538186848433892&hl=en&as_sdt=0,31 | 3 | 2022 |
| Parameter-free Dynamic Graph Embedding for Link Prediction | 2 | neurips | 1 | 1 | 2023-06-16 22:59:39.904000 | https://github.com/fudancisl/freegem | 9 | Parameter-free Dynamic Graph Embedding for Link Prediction | https://scholar.google.com/scholar?cluster=737985874382634688&hl=en&as_sdt=0,44 | 1 | 2022 |
| Non-Markovian Reward Modelling from Trajectory Labels via Interpretable Multiple Instance Learning | 3 | neurips | 0 | 0 | 2023-06-16 22:59:40.115000 | https://github.com/jaearly/mil-for-non-markovian-reward-modelling | 3 | Non-markovian reward modelling from trajectory labels via interpretable multiple instance learning | https://scholar.google.com/scholar?cluster=9372994966597211189&hl=en&as_sdt=0,47 | 1 | 2022 |
| Explaining Preferences with Shapley Values | 1 | neurips | 0 | 0 | 2023-06-16 22:59:40.327000 | https://github.com/mrhuff/pref-shap | 3 | Explaining Preferences with Shapley Values | https://scholar.google.com/scholar?cluster=13809288685377851579&hl=en&as_sdt=0,5 | 2 | 2022 |
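
Since the rows above follow the standard Hugging Face `datasets` schema, a minimal sketch of loading and querying a dataset with these columns is shown below. The dataset id `"<user>/<dataset-name>"` and the `"train"` split are placeholders, not the actual repository identifiers; only the column names are taken from the table.

```python
# Minimal sketch, assuming a Hugging Face dataset with the columns listed above.
# The dataset id and split name are hypothetical placeholders.
from datasets import load_dataset

ds = load_dataset("<user>/<dataset-name>", split="train")  # hypothetical id

# Keep NeurIPS 2022 entries whose repositories have at least 100 stars.
popular = ds.filter(
    lambda row: row["conference"] == "neurips"
    and row["year"] == 2022
    and row["stars"] >= 100
)

# Rank the remaining papers by Google Scholar citations, most-cited first.
ranked = popular.sort("citations_google_scholar", reverse=True)

# Print a short leaderboard of title, citation count, and repository URL.
for row in ranked.select(range(min(5, len(ranked)))):
    print(row["title"], row["citations_google_scholar"], row["repo_url"])
```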