| text (string, lengths 0–1.96k) | labels | task |
|---|---|---|
| The largest batch considered is 64*32, which is relatively small | ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| It is unclear what is the default batchsize for Imagenet | ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| In Table 1, the proposed method tuned M as a hyperparameter. | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| The baselines are fairly weak , the authors did not compare with any other method | ['con', 'con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| For theorem 1, it is hard to say how much the theoretical analysis based on linear approximation near global minimizer would help understand the behavior of SGD. | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| I fail to understand the the authors augmentation | ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| Following the authors logic, normal large batch training decrease the variability of <H>_k and which converges to flat minima. | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| It contradicts with the authors other explanation | ['con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| In section 4.2, I fail to understand why the proposed method can affect the norm of gradient | ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| Related works: Smith et al. 2018 Don't Decay the Learning Rate, Increase the Batch Size. | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| after rebuttal ==================== I appreciate the authors' response, but I do not think the rebuttal addressed my concerns | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| I will keep my score and argue for the rejection of this paper | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| My main concern is that the benefit of this method is unclear | ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| The main baseline that has been compared is the standard small-batch training. | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| Moreover, the proposed method also use N times more augmented samples. | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| I have suggested the authors to compare with stronger baselines to demonstrate the benefits. | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| Moreover, instead of showing the consistent benefits of large batch, the authors tune the batchsize as a hyperparameter for different experiments | ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| I think it could at least be improved for clarity . | ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] | paper quality |
| The authors argue convincingly that an interactive and grounded evaluation environment helps us better measure how well NLG/NLU agents actually understand and use their language rather than evaluating against arbitrary ground-truth examples of what humans say, we can evaluate the objective end-to-end performance of a system in a well-specified nonlinguistic task. | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| They collect a novel dataset in this grounded and goal-driven communication paradigm, define a success metric for the collaborative drawing task, and present models for maximizing that metric. | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| This is a very interesting task and the dataset/models are a very useful contribution to the community | ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] | paper quality |
| I have just a few comments below: | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| You might be able to convince me more if you had a stronger baseline e.g. a bag-of-words Drawer model which works off of the average of the word embeddings in a scripted Teller input | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| Have you tried baselines like these? | ['non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| Are the humans | ['con', 'con', 'con'] | paper quality |
| Depending on those variance numbers you might also consider doing a statistical test to argue that the auxiliary loss function and and RL fine-tuning offer certain improvement over the Scene2seq base model | ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| Framing: there is a lot of work in collaborative / multi-agent dialogue models which you have missed see refs below to start | ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| You should link to this literature (mostly in NLP) and contrast your task/model with theirs | ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| References Vogel & Jurafsky (2010). | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| Learning to follow navigational directions. | ['non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings. | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| Speaker-follower models for vision-and-language navigation. | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| On learning to refer to things based on their discriminative properties. | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| They propose three potential criteria based on the curves for determining when a model overfits and use those to determine the smallest l1-regularization parameter value that does not overfit. | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| I have several issues with this work | ['con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| Foremost, the presented criteria are actually not real criteria (expect maybe C1) but rather general guidelines to visually inspect 'accuracy over randomization curves | ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| The criteria remain very vague and seem be to applicable mainly to the evaluated data set (e.g. what defines a steep decrease?). | ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| Additionally, only one type of regularization was assumed, namely l1-regularization, though other types are arguably more common in the deep (convolutional) learning literature | ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| Overall, I think this paper is not fit for publication, because the contributions of the paper seem very vague and are neither thoroughly defined nor tested | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| Detailed remarks: General: A proper definition or at least a somewhat better notion of overfitting would have benefitted the paper | ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| In the current version, you seem to define overfitting on-the-fly while defining your criteria. | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| You mention complexity of data and model several times in the paper but never define what you mean by that | ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| Detailed: Page 3, last paragraph: Why did you not use bias terms in your model? | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| Page 4, Assumption. | ['non', 'non', 'non', 'non', 'non'] | paper quality |
| What do you mean by the data being independent | ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| Independent and identically distributed? | ['non', 'non', 'non', 'non', 'non'] | paper quality |
| What do you mean by "easier to learn"? | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
| Better training error? | ['non', 'non', 'non', 'non'] | paper quality |
| I dont understand the assumptions | ['con', 'con', 'con', 'con', 'con', 'con'] | paper quality |
| You state that the regularization parameter should decrease complexity of the model. | ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] | paper quality |
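Each row pairs a review sentence with a list of per-token labels ('con', 'pro', or 'non', which appear to mark tokens belonging to a criticism, a piece of praise, or neither) and a task name. Below is a minimal sketch of how one might parse and sanity-check rows of this shape, assuming the three columns are (text, labels, task) and that label lists are token-aligned with the sentences; the `parse_row` helper is hypothetical, not part of any tooling shipped with the dataset.

```python
import ast
from collections import Counter

# One row of the table above, copied verbatim: the sentence, its
# per-token labels (stored as a stringified Python list), and the task.
row = ('Are the humans', "['con', 'con', 'con']", 'paper quality')

def parse_row(text, labels_field, task):
    """Parse the stringified label list and validate the label vocabulary."""
    labels = ast.literal_eval(labels_field)
    if not all(label in {'con', 'pro', 'non'} for label in labels):
        raise ValueError(f'unexpected label in {labels}')
    return text, labels, task

text, labels, task = parse_row(*row)

# Sanity check: for rows without punctuation, simple whitespace
# tokenization lines up with the number of labels.
assert len(text.split()) == len(labels)
print(Counter(labels))  # Counter({'con': 3})
```

Note that whitespace tokenization matches the label counts only for punctuation-free rows: the first row above carries 13 labels for "The largest batch considered is 64*32, which is relatively small", which suggests the original tokenizer also split punctuation marks into separate tokens.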