"text (string, 0–1.96k chars)" "per-token labels ('arg'/'non')" "domain"
"The method does show some modest improvements in the experiments provided by the authors" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"These were good data sets a few years ago and still are good data sets to test the code and sanity of the idea, but concluding anything strong based on the results obtained with them is not a good idea" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"The authors claim the formalization of the problem to be one of their contributions." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"It is difficult for me to accept it" "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"The formalization that the authors proposed is basically the definition of curriculum learning" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"There is no novelty about this" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"The proposed method introduces a lot of complexity for very small gains" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"While these results are scientifically interesting , I don't expect it to be of practical use" "['non', 'arg', 'arg', 'arg', 'arg', 'arg', 'non', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"I realize that they were obtained with a simple network, however, showing improvements in this regime is not that convincing" "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"Even the results with the VGG network are very far from the best available models" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"I suggest checking the papers citing Bengio et al. (2009) to find lots of closely related papers." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"In summary, it is not a bad paper , but the experimental results are not sufficient to conclude that much" "['non', 'non', 'non', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'non', 'non', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"Experiments with ImageNet or some other large data set would be advisable to increase significance of this work ." "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'non']" "paper quality"
"This paper tested a very simple idea: when we do large batch training, instead of sampling more training data for each minibatch, we use data augmentation techniques to generate training data from a small minibatch." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"The authors claim the proposed method has better generalization performance." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"I think it is an interesting idea , but the current draft does not provide sufficient support" "['non', 'non', 'arg', 'arg', 'arg', 'arg', 'arg', 'non', 'non', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"The proposed method is very simple" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"It looks to me the better generalization comes from more complicated data augmentation, not from the proposed large batch training" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"It is unclear to me what is the benefit of the proposed method" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"The improvement on test errors does not look significant" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"The largest batch considered is 64*32, which is relatively small" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"In figure 1 (b), the results of M=4,8,16,32 are very similar, and it looks unstable" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"It is unclear what is the default batchsize for Imagenet" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"For theorem 1, it is hard to say how much the theoretical analysis based on linear approximation near global minimizer would help understand the behavior of SGD." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"I fail to understand the the authors augmentation" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"Following the authors logic, normal large batch training decrease the variability of <H>_k and which converges to flat minima." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"It contradicts with the authors other explanation" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"In section 4.2, I fail to understand why the proposed method can affect the norm of gradient" "['non', 'non', 'non', 'non', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"Related works: Smith et al. 2018 Don't Decay the Learning Rate, Increase the Batch Size." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"I will keep my score and argue for the rejection of this paper" "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"My main concern is that the benefit of this method is unclear" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"The main baseline that has been compared is the standard small-batch training." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"However, the proposed method use a N times larger batch and same number of iterations, and hence N times more computation resources." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"Moreover, the proposed method also use N times more augmented samples." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"The proposed method looks unstable" "['arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"Moreover, instead of showing the consistent benefits of large batch, the authors tune the batchsize as a hyperparameter for different experiments" "['non', 'non', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"Regarding the theoretical part, I still do not follow the authors' explanation" "['non', 'non', 'non', 'non', 'non', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"I think it could at least be improved for clarity ." "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'non']" "paper quality"
"Im not sure how impressed I should be by these results" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"The humanhuman similarity score is pretty far above those of the best models , even though MTurkers are not optimized (and likely not as motivated as an NN) to solve this task." "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"You might be able to convince me more if you had a stronger baseline e.g. a bag-of-words Drawer model which works off of the average of the word embeddings in a scripted Teller input" "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"Are the machinemachine pairs consistently performing well together" "['arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"Depending on those variance numbers you might also consider doing a statistical test to argue that the auxiliary loss function and and RL fine-tuning offer certain improvement over the Scene2seq base model" "['non', 'non', 'non', 'non', 'non', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"Framing: there is a lot of work in collaborative / multi-agent dialogue models which you have missed see refs below to start" "['non', 'non', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg', 'arg']" "paper quality"
"References Vogel & Jurafsky (2010)." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"Speaker-follower models for vision-and-language navigation." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"The red one!" "['non', 'non', 'non', 'non']" "paper quality"
"On learning to refer to things based on their discriminative properties." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"Overview: The authors aim at finding and investigating criteria that allow to determine whether a deep (convolutional) model overfits the training data without using a hold-out data set." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"