text (string, lengths 0–1.96k) · labels (token-level: 'pro' / 'con' / 'non') · task
---|
"It is unclear to me what is the benefit of the proposed method" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"The improvement on test errors does not look significant" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"For example, a simple thing to do is t0 separately train networks with standard setting and then ensemble trained networks." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"Or apply distributed knowledge distillation like in (Anil 2018 Large scale distributed neural network training through online distillation) 3." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"The experiments are not strong" "['con', 'con', 'con', 'con', 'con']" "paper quality"
|
"In figure 1 (b), the results of M=4,8,16,32 are very similar, and it looks unstable" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"However, the proposed method use a N times larger batch and same number of iterations, and hence N times more computation resources." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"The proposed method looks unstable" "['con', 'con', 'con', 'con', 'con']" "paper quality"
|
"Regarding the theoretical part, I still do not follow the authors' explanation" "['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"This paper presents CoDraw, a grounded and goal-driven dialogue environment for collaborative drawing." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"Im not sure how impressed I should be by these results" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"The humanhuman similarity score is pretty far above those of the best models , even though MTurkers are not optimized (and likely not as motivated as an NN) to solve this task." "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"Are the machinemachine pairs consistently performing well together" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"The red one!" "['non', 'non', 'non', 'non']" "paper quality"
|
"Overview: The authors aim at finding and investigating criteria that allow to determine whether a deep (convolutional) model overfits the training data without using a hold-out data set." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"Instead of using a hold-out set they propose to randomly flip the labels of certain amounts of training data and inspect the corresponding 'accuracy vs. randomization curves." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"Because of that, the experimental evaluation remains vague as well, as the criteria are tested on one data set by visual inspection" "['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"As in that case correlation in the data can be destroyed by the introduction of randomness making the data easier to learn." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"Is that an assumption?" "['non', 'non', 'non', 'non', 'non']" "paper quality"
|
"Instead, you present vague of sharp drops and two modes but do not present rigorous definitions" "['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"In my view, this evaluation of the (vague) criteria is not fit for showing their possible merit ." "['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non']" "paper quality"
|
"This paper generalizes basic policy gradient methods by replacing the original Gaussian or Gaussian mixture policy with a normalizing flow policy, which is defined by a sequence of invertible transformations from a base policy." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"and why one needs to compute gradients of the entropy (Section 4.1)?" "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"BTW, in the Section 4.3, what does [-1, 1]^2 mean" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"Maybe they can uniformly outperform Gaussian policy?" "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"My main concern about the paper is the time cost." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"The second module is responsible for mapping goals from this embedding space to control policies." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"A contrastive loss would seemingly be more appropriate for learning the instruction-goal distance function" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"Are they free-form instructions" "['con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"The proposed method is evaluated on object classification and object alignment tasks." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"It would be better to compare the proposed method to the existing multi-objective methods in terms of classification accuracy and other objectives" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"This paper argues that the choice of the number of parameters is sub-optimal and ineffective in terms of computational complexity." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"Text contradicting the equation : ""In order to balance the individual loss terms, we normalize according to dimensions and weight the KL divergence with a constant of 0.1""." "['con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"Tables and figures are inconveniently far from where they are referenced in the text" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"This is not true in a beta-VAE" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"The weighting of the KL that the authors introduce is going to bias the learned generator towards the high probability regions." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"3) Experiments Finally, the experimental results do not look very compelling , it seems to be overall worse than the baselines in the two image datasets and slightly better in the audio dataset, so it's unclear that this approach is superior" "['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"They construct a pair of synthetic but somewhat realistic datasetsin one case, the Bayes-optimal classifier is *not* robust, demonstrating that the Bayes-optimal classifier may not be robust for real-world datasets." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"The paper also definitively proves that there are realistic datasets where the Bayes-optimal classifier is non-robust, which goes against quite a bit of conventional wisdom in the field and opens up many new paths for research" "['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro']" "paper quality"
|
"A Discussion of Adversarial Examples are not Bugs they are Features (pseudo-url): Nakkiran (2019) actually constructs a dataset (called adversarial squares) where the Bayes-optimal classifier is robust but neural networks learn a non-robust classifier due to label noise and overfitting." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"Adversarially robust generalization requires more data (pseudo-url): Schmidt et al show a setup where many more samples are required for adversarial robustness than for standard classification error." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"First, if my understanding of the paper is correct, the experiments show that (a) the Bayes-optimal classifier can be non-robust in real-world settings, and (b) even when the Bayes-optimal classifier is robust, NNs can learn a non-robust decision boundary." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"It is unclear on what basis one can say that real-world datasets are more like the symmetric case or the asymmetric case" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"CNN vs Linear SVM: I am confused about why we would expect a CNN to be able to learn the Bayes-optimal decision boundary but not the Linear SVM" "['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"The paper justifies the adversarial vulnerability of the Linear SVM by arguing that the Bayes-optimal classifier is not in the Linear SVM hypothesis class, which makes sense" "['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro']" "paper quality"
|
"For CNNs, however, it is unclear if the Bayes-optimal classifier lies in the hypothesis class (there are ""universal approximation"" arguments but these usually require arbitrarily wide networks and are non-constructive)couldn't it be that the CNNs used here is in the same boat as the Linear SVM (i.e. the Bayes-optimal decision boundary is not expressible by the CNN?)" "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"Experimental setup: - One somewhat concerning (but perhaps unavoidable) thing about the experimental setup is that all the considered datasets are not perfectly linearly separable , i.e. the Bayes-optimal classifier has non-zero test error in expectation, and moreover the data variance is full-rank in the embedded space." "['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"This is in stark contrast to real datasets, where there seem to be many different ways to perfectly separate say, dogs from cats, and the variance of the data seems to be very heavily concentrated in a small subset of directions" "['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
|
"A suggestion rather than a concern and not impacting my current score: but it would be very interesting to see what happens for robustly trained classifiers on the symmetric and asymmetric datasets." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
|
"The authors recognize that since the dataset is synthetically generated it is not necessarily predictive of how methods would perform with real-world data, but still it can serve a useful and complementary role similar to the one CLEVR has served in image understanding" "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro']" "paper quality"
|
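Each row above pairs a review sentence with a token-level tag sequence (`pro` / `con` / `non`) and a task name, all as double-quoted fields on one line. A minimal parsing sketch using only the Python standard library; the field order and quoting are assumptions inferred from the dump above, and rows containing escaped inner quotes (e.g. `""..."" `) would need extra handling:

```python
import ast
import shlex


def parse_row(row: str):
    """Split one dump row into (text, tags, task).

    Assumed row shape (inferred from the dump, not an official schema):
    "sentence ..." "['con', 'non', ...]" "task name"
    """
    # shlex honours the double quotes, so each quoted field
    # becomes one element even if it contains spaces.
    text, tags_str, task = shlex.split(row)
    # The tag field is a Python-style list literal; literal_eval
    # turns it into an actual list of strings safely.
    tags = ast.literal_eval(tags_str)
    return text, tags, task


# Example using one of the rows above:
row = ('"The experiments are not strong" '
       '"[\'con\', \'con\', \'con\', \'con\', \'con\']" '
       '"paper quality"')
text, tags, task = parse_row(row)
```

Note that `len(tags)` gives the number of annotated tokens, which is how the tag arrays above line up with whitespace-split sentences.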