Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
Update following author response and reviewer discussion: I would like to thank the authors for providing a response, and in particular for providing further justification for their injectivity assumption. However, the main concern remains the lack of empirical validation, even on toy examples, showing that (or whether) the theory derived here goes beyond a hypothetical thought exercise and can be implemented in practice. It shouldn't be difficult to provide such examples, assuming the proposed approach does indeed work as indicated by the theory developed here, and without them the paper seems rather incomplete. Therefore, unfortunately, my score remains unchanged at this point, although I would like to encourage the authors to keep pursuing this direction.

Initial review: This manuscript discusses a hypothetical pooling approach that recursively applies a generic, unspecified GNN to subgraphs constructed from local neighborhoods of a certain radius (or a sequence of radii, applied in successive applications of the recursion) in an input graph. The main motivation set out by this work is to prove that, under some assumptions on the applied GNNs (which may or may not be realistic), this pooling strategy is sufficiently expressive to admit a configuration of weights that would enable counting subgraphs of a certain size in the main input graph. The authors provide a theoretical study of this expressivity and prove the capability to count subgraphs in an existence sense: they show there exists some function, written in the form of their proposed pooling architecture with a sufficiently rich architecture between pooled levels, such that non-isomorphic subgraphs would necessarily be distinguishable by some of the extracted features, and thus counting equivalence classes of these features would provide the desired subgraph counting. The authors also provide other results showing that functions written in the form of their pooling-based architecture have reasonable complexity of node operations, and they provide a lower bound on the capability of any GNN approach to count subgraphs, thus putting their theoretical results in context. The proposed pooling approach itself is rather straightforward, but intuitively makes sense. Given an input sequence of radii, at each iteration of the recursion each node in the graph constructs a subgraph over the nodes that are within the currently considered radius in the sequence. Then, by recursion, the same approach is applied to each of these subgraphs, with the remainder of the radii sequence (subsequent to the current one) being passed on, until reaching only a single radius or a single node as stopping conditions. Given the resulting subgraph features for each node, these are combined with the input node features via generic functions parametrized by unspecified neural networks (written as an MLP with arbitrary architecture for the sake of presenting pseudocode in the supplement) that are supposed to be injective, when carefully examining the derivation and proofs in the supplement. Finally, the resulting node features are aggregated together, again via generic functions modeled as unspecified neural networks that are supposed to be injective (this time up to node permutation), into graph features that provide a readout, returned as the output of the network to outer levels of the recursion, or finally as the general output of the network. It seems to me that the main theoretical results here crucially hinge on the injectivity assumption, and yet the authors make no attempt to verify that any existing GNN indeed provides an architecture that satisfies their assumed conditions. More importantly, they do not provide any empirical evidence that the approach described here is viable in practice: there is no implementation or demonstration of this approach even on toy examples, let alone benchmarks on graph datasets. There is also no discussion of the impact of the proposed pooling on the training of the resulting RNP-GNN. It is not clear to me that this approach is feasible to apply in practice beyond a hypothetical thought experiment. Therefore, while this line of research seems like an interesting direction, I find it somewhat lackluster, and I do not think it is mature enough at this stage to be accepted for publication. Finally, as a minor remark, the writing here seems somewhat rushed, with several typos and unclear phrases. Some examples: Equation 4 should probably have RNP-GNN rather than RNN-GNN; the last line before Section 5 ("appendix") is missing a proper reference; on page 6, "sincethe" is missing a space between "since" and "the"; on page 7, "used for for aggregation" has a duplicated "for"; and the last sentence of Section 5 (right before Section 6) should be rephrased, as I couldn't decipher it grammatically. There may be more that I missed, but the paper and supplement would benefit from some additional proofreading.

Summary: The goal of the paper is to show that GNNs without exponential computational complexity can be constructed with the ability to count subgraphs. To this effect, the authors propose a principled neighborhood pooling strategy and theoretically characterize its expressive power with respect to earlier models. More specifically, the authors propose a recursive neighborhood pooling strategy which characterizes graphs based on the counts of subgraphs. Furthermore, they show that if the tuple of recursion parameters is chosen well, their proposed model can capture all induced subgraphs (universality) of sizes smaller than the first value in the tuple of recursion parameters plus 1, and they show a relationship to the reconstruction conjecture (Kelly et al., 1957). The authors also provide a bound on the number of iterations required to learn the expressive representations. Pros: (1) theoretical contributions which provide a strategy to learn expressive representations with the ability to count subgraphs and to distinguish graphs based on the same; (2) the proposed model provides a strategy to trade off the exponential cost required to compute subgraphs in earlier works; (3) comprehensive literature review. Concerns: (1) the theorems all state that there exists a set of recursion parameters, etc., but do not provide a computationally tractable strategy to determine them for real-world graphs (e.g., with graph diameters of 15), or a strategy to determine the vertex covering sequence (if it exists), which is the basis for Theorem 1, among all permutations of the set $\mathcal{S}$ without looking over all permutations; this can explode even with subgraphs of size 10, for instance. (2) Lack of any experimental evidence to empirically show gains: while the authors explicitly say that their work is theoretical, they could have included synthetic experiments to demonstrate the claims, given that local relational pooling, k-WL, and k-IGN (the latter two of which are exponential) have experimental results. If the concerns are addressed, I will be happy to update my score.

The paper has a fully theoretical flair, while proposing a novel and seemingly efficient procedure to recursively compute the higher-order (i.e., more than 1-hop) neighbourhood of a node, used for learning discriminative graph embeddings. The paper contributes the model above (RNP-GNN), a proof of its representational power, and a general theorem supplying an information-theoretic lower bound on the complexity of GNNs that can count induced substructures. To the extent of my knowledge, the recursive neighbourhood construction procedure presented in the paper is original, and it is based on solid graph-theoretic concepts. A proof of the asymptotic complexity of the approach is provided in Section 6, suggesting that the approach can yield increased representational power at minor computational cost. These results are derived under a sparsity assumption, and it is not completely clear how much it would hold in practice. In particular, I am not sure whether the sparsity constraint should hold on average in the graph, or whether the presence of a single node infringing the constraint will induce an exponential growth in complexity. I am thinking in particular of social graphs, where hubness might become an issue; if the second is true, I would like to read some elaboration on this specific point. The theoretical results appear solid and the proofs well constructed, although admittedly I have not checked the proofs in the appendix in detail. I do have one question regarding the counting-substructures proof: since it entails leveraging vertex covers, does it require the cover to be minimal, or would any vertex cover suffice? Is there any change if the cover is not minimal? Concluding: while the paper certainly has good-quality theoretical contributions, it lacks an empirical analysis (even a small one, given the theoretical nature of the problem), which would have made the paper's claims more convincing. In particular, since much of the results hang on the tradeoff between representational power and efficiency, one would have liked to see an empirical proof of such an excellent tradeoff, possibly on graphs of different nature (i.e., biochemical and social), given that they tend to have considerably different connectivity patterns. The short but informative empirical analysis in the paper by Murphy et al. (ICML 2019) is an excellent example. One final remark: the paper, while generally well written, has misprints here and there which can be easily spotted by careful proofreading; I am not listing them here, but they should be taken care of. Post rebuttal: the rebuttal is very helpful and to the point in clarifying the issues I had raised concerning the impact of the assumptions taken in the theoretical proofs. The fact that complexity hinges on average node degree, and that any vertex cover suffices for the proof, confirms that the approach put forward in the paper might work outside of the theoretical box. The authors also suggest that the theoretical framework can be translated to a running model with a certain ease, and that in fact its practical implementation and empirical assessment is on the way, which brings me to the key point in my assessment: I am convinced there is value in this work and in the theoretical contribution in the paper; I am not convinced that this paper can have a strong impact without an empirical validation. As I have underlined in my review, there are several related works in the literature with a similar theoretical flair which nevertheless provided at least a simple empirical validation; I believe this paper should do the same. It would be stronger, more complete, and with a higher potential to influence the community. As it is, this is a borderline paper, leaning on the positive side.

The proposed paper seeks a theoretical possibility of counting subgraphs with a graph neural network. To this end, the authors propose a recursive neighborhood pooling graph neural network and prove the expressive power of the model. Universal approximation results on subgraphs are shown as well, and an analysis of computational complexity shows the algorithm is much more efficient than the known class of models that can count substructures. The strength of this paper is the detailed theoretical analysis of the proposed method: the analysis is taken from multiple perspectives (model complexity, computational complexity, and a theoretic lower bound on the class of the proposed method). The paper gives new insight into measuring the representation power of graph neural networks through subgraph counting. Although subgraph counting is an interesting problem from a theoretical viewpoint, the weakness of this paper is that it does not answer how relevant it is to real-world tasks and how successful it is there. Since this is the first paper proposing a new way of measuring the expressive power of graph neural networks, it would have been much better if there were some discussion on this point.
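To make the recursive pooling procedure described in these reviews concrete, here is a minimal sketch in Python. It is a reconstruction from the reviews' description only, not the authors' implementation: the function names, the use of networkx, and the exact treatment of the stopping conditions are assumptions, and `combine`/`readout` stand in for the unspecified (injective) networks.

```python
import networkx as nx

def rnp_pool(graph, radii, node_feats, combine, readout):
    """Hedged sketch of recursive neighborhood pooling as the reviews
    describe it: each node builds a subgraph from its radius-r
    neighborhood, recurses with the remaining radii sequence, then
    combines the subgraph feature with its own input feature."""
    # Stopping conditions: a single remaining radius or a single node.
    if len(radii) == 1 or graph.number_of_nodes() == 1:
        return readout([node_feats[v] for v in graph.nodes])

    r, rest = radii[0], radii[1:]
    pooled = {}
    for v in graph.nodes:
        # Subgraph induced by nodes within distance r of v.
        ball = nx.ego_graph(graph, v, radius=r)
        # Recurse on the subgraph with the remaining radii.
        sub_feat = rnp_pool(ball, rest, node_feats, combine, readout)
        # Combine the recursive subgraph feature with v's own feature
        # (in the paper, via an unspecified injective network).
        pooled[v] = combine(node_feats[v], sub_feat)

    # Aggregate permutation-invariantly into a graph-level readout,
    # returned to the outer level of the recursion.
    return readout(list(pooled.values()))
```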
### Summary:
The reviewers agree that this is an interesting and promising paper, although it is on the theoretical side, without even satisfying toy examples to demonstrate its usefulness. This itself is not a fatal problem; ICLR can and should welcome theoretical papers. However, including such experiments would significantly strengthen the impact of this paper and make it more competitive with other ICLR submissions.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
Post-discussion update: I would like to thank the authors for addressing, albeit partially, my comments as well as the comments from other reviewers. While I understand that some connections can be made between the proposed approach and other approaches or aspects that go beyond local smoothing or oversmoothing, this is somewhat anecdotal in my opinion. More generally, it is still not entirely clear to me how significant the contributions here are; reading the other reviews, it seems these concerns are also shared by other reviewers. Although I still think the analysis here is not without merit, and this warrants at least a borderline score, there are some interesting insights provided here which place this work slightly over the threshold. Since marginally above the threshold was already the score I gave the manuscript initially, it remains unchanged. Original review: This work provides a formal interpretation of the aggregation step in common GNN architectures as aiming to solve, at least partially, a graph denoising problem formulated via graph Laplacian smoothing. In particular, it provides a unified formulation of GCN, GAT, PPNP, and APPNP in these terms. Further, the authors use this understanding to formulate a unified GNN architecture which can be instantiated to these particular ones, and then leverage this architecture to provide an improved adaptive one that is demonstrated to have some advantages over the mentioned classic GNNs. The main insight, showing that traditional GNN architectures essentially smooth or denoise node features, is not surprising, and one might argue not quite novel, since this has been discussed and demonstrated in several previous works (some important ones already acknowledged and cited here). However, the unified view provided here does seem insightful and can contribute to a more methodical view of architecture design; indeed, this is demonstrated by the introduction of ADA-UGNN here. Therefore, the paper does have merit that progresses the theory behind graph neural networks. On the other hand, it is not clear how much insight can be drawn from the presented theory, or what impact this analysis might have. As said before, it is generally well accepted and understood that the four architectures discussed here rely on local smoothing and are susceptible, to some degree, to oversmoothing in node classification. There is little discussion of the numerous attempts in recent years to overcome this issue and provide architectures that aim to go beyond smoothing (e.g., MixHop, graph scattering, or various architectures with residual layers or skip connections come to mind). It seems the proposed approach here would still rely on local smoothing, and therefore it is not entirely clear whether it will really contribute significantly compared to the current trends and state of the art in the field. Therefore, I would consider this work a borderline case, but I am leaning towards acceptance due to the contribution towards a more theoretically oriented (rather than algorithmic) unified formulation encompassing a plethora of architectures that fall under the local smoothing paradigm, while noting the paper would be more convincing if it also considered how to advance beyond this smoothing or denoising regime. Minor remarks: in Section 5, the authors say GAT "unintentionally" adopts an adaptive c; I would argue this is rather deliberate, not unintentional. The use of the "author (year)" and "(author, year)" citation styles (the \citet and \citep natbib commands) seems somewhat inconsistent and does not follow the recommended guidelines on when to utilize each style.

Summary: This paper establishes a relation between different popular graph neural networks by mathematically proving that the feature aggregation operation of such networks can be understood as a graph-signal-denoising step. Moreover, the authors try to establish a general framework based on graph signal denoising that subsumes the studied architectures, developing a new graph neural network (GNN) architecture under this framework. Strong points: showing that different architectures indeed use a similar approach for feature aggregation, closely related to graph signal denoising, is an interesting idea which helps to gain insight into how these architectures work. Furthermore, the numerical results illustrate that the proposed architecture has competitive performance under certain settings. Weak points: the mathematical notation of the paper is sometimes ambiguous and unclear, so it should be carefully revised. The relation between GAT and the graph signal denoising approach is not clear and should be detailed, since it is one of the main contributions of the paper. While proposing a unified GNN framework based on graph signal denoising is stated as one of the main contributions of the paper, it amounts to presenting the graph denoising formulation with an arbitrary regularization function; the paper should focus more on the relation between the proposed architecture and the different GNNs with graph signal denoising schemes. The paper would benefit if the proofs of Theorems 1-4 were included in the main body rather than in the appendix. The reason is twofold: those proofs are likely to constitute the main contribution of the paper, and the statement of the theorems without the proofs is not sufficient to fully illustrate the relation with graph signal denoising. The proposed ADA-UGNN network should be further analyzed: an MLP is chosen as the feature aggregation function without providing a motivation, and the impact of c being learned instead of being a hyperparameter (Theorems 5 and 6) should be discussed in more detail, since it implies that convexity of (14) is lost. Additional comments: the symbol L is ambiguously used to denote different types of Laplacian matrices. Section 2 says that each node is associated with a d-dimensional signal x of size n × d, but the signal associated with a node should be a vector, not a matrix. Equation 4 is not mathematically correct: the variables used as indexes of the summation are not present in the terms inside the summation; in fact, the notation for the summation indexes throughout the entire manuscript is quite confusing, and in cases like (4) definitely incorrect. The edge-centric interpretation of the Laplacian regularization is never used; also, both the edge-centric and node-centric formulations are incorrect, since they are missing the related term of the adjacency matrix, a_ij (if A is binary, this should be stated clearly). Two different notations for the gradient descent algorithms are used in the paper (see, e.g., equations 21 and 24 vs. 25 and 30); this should be unified. In Eq. 15, please clarify what d_i and d_j represent.

Summary of the paper: In this paper, the authors make the following new argument: the aggregation processes of current popular GNN models, such as GCN, GAT, PPNP, and APPNP, can be treated as a graph denoising problem, where the objective is to minimize a recovery error (a norm of the deviation from the noisy feature matrix, i.e., F − X) plus a graph-based regularization (smoothness). This new view provides a way to build a GNN model, namely ADA-UGNN. Experimental results show the effectiveness of ADA-UGNN on the task of node classification and the task of preventing adversarial attacks on graphs. Strong points: (1) The theoretical contributions of the proposed framework are solid and interesting; these findings show that two basic operations of a GNN layer, feature transformation and feature aggregation, can be viewed as a gradient descent step of minimizing a graph denoising objective. (2) Experimental results demonstrate the effectiveness of ADA-UGNN. Weak points: (1) I think one weakness of this paper is that the explanations are focused only on one layer (locally); the theorems do not explain the relations between layers and how nonlinear activation functions affect these theoretical findings. For example, [1] and [2] treat the GNN as a procedure of encoding and decoding as a whole; it seems that the objective of a GNN cannot be viewed as a simple combination of graph denoising problems. (2) The experiments do not explain the theoretical findings well; these connections are missing in the experiments. I do see that the results of ADA-UGNN are promising on node classification and on preventing adversarial attacks; however, it would be better if there were some empirical evidence to explain these new theorems. Recommendation: based on the above points, I tend to marginally accept this paper, but I have concerns about these weak points. Questions and other comments: The improvements of the proposed model compared with APPNP are marginal, as shown in Table 1; are these really improvements? Based on my understanding, the means of ADA-UGNN are higher than APPNP's, but the variance is also high; a significance test is needed. ADA-UGNN approximately solves problem (2) and uses a special regularization term; how does the approximation affect the final performance? Is there any clear guidance on how to choose the regularization term in different problems, or are these regularizations problem-dependent? [1] Hamilton, William L., Rex Ying, and Jure Leskovec. "Representation learning on graphs: methods and applications." arXiv preprint arXiv:1709.05584, 2017. [2] Chami, I., Abu-El-Haija, S., Perozzi, B., Ré, C., and Murphy, K. "Machine learning on graphs: a model and comprehensive taxonomy." arXiv preprint arXiv:2005.03675, 2020.

(1) The novelty and contribution are very limited. In the literature, many papers have discussed the connection between different GNNs, typically covering aggregators and updaters, such as in "Deep Learning on Graphs: A Survey". The submission only provides a kind of connection between GCN, GAT, PPNP, and APPNP from the perspective of denoising; compared with that, the survey paper actually connects many different GNNs. (2) The writing quality is low and there are many errors; for example, in Section 3, "the unnormalized version of the Laplacian matrix with L = D − A" appears twice in a row. (3) In Eq. 1, to ease the discussion, the nonlinear activation is not included; however, the nonlinearity is the key part of deep GNNs. In formula 8, "GNNs as graph signal denoising" actually oversimplifies the topological smoothing over attributes, since the nonlinear transformation (especially with dropout) already conducts signal denoising, so why use such additional effects to do that? (4) The submission provides both a node classification task and an adversarial defense task to validate the performance. As for node classification, the results are not very promising compared with the current SOTA: for example, the ICLR'20 paper "Adaptive Structural Fingerprints for Graph Attention Networks" reports 85.4±0.3 on the Cora dataset, compared with only 84.59±0.8 reported by this submission; on Citeseer, 74.0±0.4 versus this submission's 72.05±0.5; on Pubmed, this paper reports 79.70±0.4, whereas the ICLR paper reports 81.2±0.3. From that perspective, I did not see any advantage in the submission. (5) For the other validation task, robustness to adversarial attack, it is suggested to compare with the recent SOTA, "Graph Information Bottleneck" (by Jure Leskovec's group at Stanford, NeurIPS'20).

The paper attempts to provide a unified picture of a number of different GNN architectures from the point of view of graph signal processing (GSP). In particular, it is argued that the aggregation operation in a number of important architectures can effectively be seen as a form of graph signal denoising. Pros: consolidates a number of a priori unrelated GNN architectures and unifies them from the point of view of GSP; the unified picture can be used for systematic comparison of architectures or for the design of new architectures, as the authors show. Cons: it appears that the theoretical insights, despite being nice for unifying the representation, do not quite lead to a more improved GNN architecture in terms of numerical performance. The paper concentrates only on the aggregation step, but the architectures considered also contain different nonlinearities; this part is completely neglected. More detailed comments: Overall, I very much appreciate the work towards simplifying and unifying the somewhat disconnected literature; this is a good contribution, I think. While some of the transformations the authors propose to make the architectures align with the GSP picture feel a bit contrived, I think that is OK given the simplification gained. What I am missing somewhat is a discussion of how far knowing this representation can actually, theoretically, lead to improvements. For instance, it is known that GNNs cannot be more expressive than the Weisfeiler-Leman test (see Morris et al.); does the GSP lens provide any alternative perspective on this? This would be very interesting theoretically, I think. My second suggestion for possible improvement would be to provide a somewhat more detailed discussion of the effects of the nonlinearity in such networks. For the classification results: unifying the aggregation layer is useful, but in the end the improvements that these insights bring in terms of UGNN seem somewhat small. Overall, I see the strongest contribution here in the theoretical unification provided. Minor comment: the Laplacian is invariant to the addition of self-loops, i.e., $\hat{D} - \hat{A} = D - A$, as the addition of a self-loop on the diagonal of A will be cancelled by the corresponding change in D (cf. top of page 4).
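For reference, the denoising view discussed throughout these reviews can be stated compactly. The following is a sketch of the standard graph-signal-denoising objective and its one-gradient-step connection to aggregation; the exact constants and normalization used in the paper may differ.

```latex
% Graph signal denoising: fidelity to the noisy features X plus Laplacian smoothness.
\min_{F}\ \mathcal{L}(F) = \|F - X\|_F^2 + c\,\operatorname{tr}\!\left(F^{\top} L F\right),
\qquad
\nabla_F \mathcal{L} = 2(F - X) + 2c\,L F.
% One gradient step from the initialization F = X, with step size \eta:
F \leftarrow X - 2\eta c\,L X
  = \bigl((1 - 2\eta c)\,I + 2\eta c\,\tilde{A}\bigr)X
\quad \text{for } L = I - \tilde{A}.
```

That is, a single descent step mixes each node's features with its (normalized) neighborhood average, which is the GCN-style aggregation step the reviews refer to.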
### Summary:
The paper argues that GNNs can be understood as graph signal denoising. While this interpretation is not surprising and not novel, the unified view does seem insightful according to some reviewers; yet it is not clear how much insight can be drawn from the presented theory, as no significantly better architecture or experimental results are presented. Additional criticism was raised with respect to the unclear relation between GAT and graph signal denoising, the fact that the analysis focused on one layer and does not explain the relations between layers and how nonlinear activation functions affect these theoretical findings, and the fact that the objective of a GNN cannot be viewed as a simple combination of graph denoising problems. Several reviewers complained that the paper is hard to follow. In light of the above, despite the significant efforts of the authors to address these issues in the rebuttal, we believe the paper is below the bar and recommend rejection.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
The submission presents an algorithm that renders adversarial image examples for neural networks robust against discretisation and compression artefacts. My main issue with this paper is the framing: the authors claim the observation that adversarial images are generally not robust with respect to image transfer or reloading. It is not clear to me why these specific applications have been picked. I would argue that image compression and quantisation artefacts are obviously a problem for attacks that rely on minimal perturbations of images; it is very clear what is happening here and, contrary to the authors' statements, not surprising. It also seems quite odd that the authors apparently use transfer through WeChat to assess these artefacts, instead of just computing compressed images at several quality levels using any image library; this would be a much cleaner and more transparent angle for this paper. Despite these very general remarks, I think the method is valuable for two reasons: it is simple yet effective, and it is easily applicable as a post-process to any method generating adversarial images. I would have liked to see a better evaluation assessing the distribution of success rates and confidence values across images; it would also be helpful to focus on specific compression techniques with different parameters. Since I am not an expert on this topic, I cannot assess the novelty of the approach, or whether the issue of discretisation artefacts has been tackled in the context of adversarial image generation before.

The authors explore how storing and transmitting images affects the attack performance of adversarial examples. They propose a novel metric called confidence iteration to generate adversarial examples that are robust to said effects of storage and transmission. Comments: Although the objective is clear, there is no discussion of why more basic techniques cannot be used or fail to address this problem. For example, one would expect that using data augmentation during training would increase the network's robustness to arbitrary transformations, including photometric, geometric, noise, etc. In contrast to the authors' statement, the adversarial examples shown are noticeably noisy; it seems that applying a simple median filter before passing the image to the network would get rid of most of the noise. The idea of studying the effects of compression and quantization on the effectiveness of adversarial examples is useful; however, I find it trivial and would not call this the discovery of a new phenomenon: any information loss due to any factor, not just these two, which leads to perturbations can affect the network's performance. It is unclear whether the Inception-v3 model was trained on the created dataset or on some other dataset, and why VGG-11 was used instead of Inception-v3. I find the confidence iteration trivial. The range of the colour values is not specified: are the RGB images in the range [0, 1]? If so, perturbations of up to 30 seem quite extreme, considering the claim that the adversarial examples are stealthy. The paper requires proofreading, as there are a few sentences that do not parse.

I do not think this work meets the bar for GI 2021. The main issue is that the way the problem is posed is misleading: this work overemphasizes the process of saving and transmitting, while both steps could be invertible. While this might seem like a small detail, it makes the introduction completely incomprehensible; I had no idea what the authors mean when they say that saving a piece of data and then reloading it somehow degrades it. I think it is interesting to study how compression and quantization artifacts affect the quality of adversarial examples, but classifying these artifacts based on specific formats (JPG, BMP) or social media apps (WeChat) provides little insight and has little scientific value. In the former case, new formats might emerge or old ones become obsolete, and it would be unclear how the findings in this work apply to them; in the latter case, a small change in a backend implementation might completely invalidate all the findings at any time. If the authors want to further investigate this with rigor, they could specifically focus on the different kinds of compression algorithms and quantization steps that are used in existing formats and data transmission apps; this could be complemented with the study proposed in this paper to demonstrate how it translates to a practical use case. As a small note on the proposed algorithm: it seems to suggest that physically saving data to the hard drive or transmitting it over the network is an important step; obviously, this is slow and unnecessary, as one can simply apply any kind of data format conversion locally to see how the conversion affects the adversity of the proposed example. It would also be interesting to see whether some of these data conversion artifacts could be implemented as differentiable layers in a network, so one could directly optimize for adversarial examples by backpropagating through a particular compression algorithm.
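The cleaner evaluation the reviewers ask for, measuring attack survival under controlled compression levels rather than via WeChat transfer, is straightforward to script locally. Below is a minimal sketch using Pillow; the function names and the generic `classify` callable are illustrative assumptions, not the paper's code.

```python
from io import BytesIO

import numpy as np
from PIL import Image


def jpeg_roundtrip(img_array, quality):
    """Save an RGB uint8 image to an in-memory JPEG at the given quality
    and reload it, reproducing compression artefacts locally instead of
    transferring images through a messaging app."""
    buf = BytesIO()
    Image.fromarray(img_array).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf).convert("RGB"))


def attack_survival_rate(adv_images, labels, classify,
                         quality_levels=(95, 75, 50, 25)):
    """Fraction of adversarial images that still fool the classifier
    after a JPEG round-trip, per quality level. `classify` is any
    function mapping a uint8 HxWx3 array to a predicted label."""
    rates = {}
    for q in quality_levels:
        fooled = sum(
            classify(jpeg_roundtrip(img, q)) != y
            for img, y in zip(adv_images, labels)
        )
        rates[q] = fooled / len(adv_images)
    return rates
```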
### Summary:
The authors present an algorithm that addresses the effects of storing and transmitting images on the quality of adversarial examples. The reviewers find that, although the concept is interesting, this work has significant issues that limit its scientific value. For example, all reviewers find the claim that the effect of compression and quantization on the quality of adversarial examples is the discovery of a new phenomenon to be overstated at best; it is a trivial and straightforward observation. Additionally, the experiments are limited to two image formats and the application WeChat for transmitting the images.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
The authors propose a framework for learning an RL policy that performs well in the real world and enjoys performance and safety guarantees. To achieve this, the paper proposes a framework called Sim-to-Lab-to-Real, which combines Hamilton-Jacobi reachability analysis and PAC-Bayes generalization guarantees. In the Sim stage, a performance policy and a backup policy are jointly trained using RL; in the Lab stage, the generalization bounds are optimized to certify the robot's performance and safety before deployment. Results from numerical and physical experiments show that the proposed method can effectively reduce safety violations. Generally speaking, this framework generalizes the popular sim-to-real scheme, and it is a fascinating idea. The paper is also very relevant to this workshop's agenda (trustworthy embodied AI); the writing is good and the experimental results are promising. My main criticisms are: (1) What is the definition of safety here? It seems that the paper cannot guarantee full-state safety, because the minimax problem is in general intractable. (2) The PAC-Bayes perspective is very interesting; however, it is unclear in which cases the PAC-Bayes bound will be vacuous. If the reviewer understands correctly, the PAC-Bayes bound gives an average performance guarantee; is it still suitable for the worst-case scenario?

The paper adds an intermediate stage (Lab) to sim-to-real transfer and provides performance and safety guarantees for robots trained in this framework. A distribution of policies is learned in the Sim setting, which is then fine-tuned in the Lab setting before deployment in the Real setting. These distributions are treated as the prior and posterior, respectively, to obtain PAC-Bayes bounds on performance. Safety is ensured by applying Hamilton-Jacobi reachability analysis to train a backup policy that takes over when it predicts that the original policy will violate safety constraints. Strengths: I think the idea of combining reachability analysis and PAC-Bayes bounds is an interesting way to provide theoretical bounds for safe RL. The experiments are well thought out, and I think they provide justification for the claims made in the paper; the theoretical guarantees transferring to actual experiments is nice to see. Weaknesses: My main concern is the assumption that Lab and Real environments come from the same distribution. The authors justify it (line 373) by saying that the Lab environments were modeled to be close to the Real environments and that minor distribution shifts wouldn't affect the performance much, but this might be very difficult in certain settings, e.g., in the self-driving example given by the authors, since accurately modeling the behavior of pedestrians and other cars is a difficult problem. If there is a gap between Lab and Real, I think the value-based shielding (Eq. 6) will be less reliable; additionally, the value threshold for switching the policy will be hard to tune. I think further study is necessary to ensure that this is not too detrimental to the performance or safety of the agent in Real environments. The policy training seems sensitive to the annealing schedules of rho and epsilon, so I wonder whether it would be difficult to set good schedules when applying this method to different problems. Misc: I would be interested to see how the theoretical guarantees can be extended to cases in which the weights of the policy network are also fine-tuned in the Lab phase, instead of just the distribution of latent vectors. Minor: I found the beginning of Section 4.1 hard to follow, since the Q-value and the Bellman equation differ from the standard ones used in RL; I think an intuitive description of the equation, similar to how it is done in "Bridging Hamilton-Jacobi Safety Analysis and Reinforcement Learning" by Fisac et al., would be helpful. Overall, I would say that the framework introduced in this paper is interesting enough to warrant further discussion, and hence I recommend the paper be accepted. Disclaimer: I am not very familiar with the PAC-Bayes control framework, and hence I have not verified the math behind the bounds given in the paper.

Summary: This work presents a PAC-Bayes-based learning paradigm that utilizes an intermediary step, called Lab, to bridge the sim-to-real gap. This allows the authors to generate probabilistic guarantees on the success and safety of the robot's actions, while also empirically improving upon other baseline methods. The authors provide a thorough comparison of their method against existing baselines and significant discussion of parameter sensitivity in the appendix; they also present a compelling hardware demo of their method. Quality: This paper appears to be of high quality. Clarity: The paper is generally clear but could use some improvement. Some unclear portions include the following. The differentiation between the Sim and Lab stages is confusing in the experiments section, since the Lab portion is also simulated and not performed on a test track as suggested in the introduction; the authors mention the reason for this in the conclusion, but this comes significantly after the description of their experiments, and I would suggest moving this discussion to the environments subsection of Section 5. In the results section, it is mentioned that the ratio of the number of safety violations to the number of episodes collected during training is calculated; however, the number of episodes is not directly mentioned there or in Table A1. Furthermore, Appendix A3 mentions that the large number of environments required to utilize the PAC-Bayes control framework is a limitation; this limitation is obfuscated by the averaging performed to obtain the results in Table A1 and should be made clearer. Appendix A5's ablation study mentions that the unsuccessful ratio in deployment increases by 16, 8, and 14; what are these three separate numbers referring to? The text in Figure 3 is too small to read. The notation $s \in \mathcal{S} \subseteq \mathbb{R}^{n_s}$ and $a \in \mathcal{A} \subseteq \mathbb{R}^{n_a}$ is clear, but the set $\mathcal{E}$ is not similarly defined; can you further describe this set? There is some lack of clarity when the function g is introduced: is g Lipschitz in both of its arguments or just in s, and is $g_E(s) \triangleq g(s, E)$? It is mentioned that the robot is considered to have discrete-time dynamics $s_{t+1} = f_E(s_t, a_t)$; what are these dynamics for the robot used in the experimental section? Are the torques of the quadruped's joints being controlled, or is a multi-level control structure utilized? What is the time step used in the discrete dynamics? Furthermore, the authors mention that the same dynamics used in the physical experiment are used in Advanced-Realistic Lab; other than the altered velocity bounds, are the same dynamics used for all of the other Sim and Lab environments? Figure 1 is introduced in paragraph 2 of the introduction but contains a large amount of notation that is not introduced until Section 3; it would improve the readability of this figure if the math notation were replaced with simplified text that the reader can understand while reading the introduction. Minor typos include: in the abstract, "policy distribution domain randomization do not explicitly adress safety of the robot" ("adress" should be "address"); when introducing the rest of the formulation and the approach, I believe that "maximizing the lower bound $R_{\mathrm{PAC}}$ can be viewed as maximizing the empirical reward $R_{\mathcal{M}}(P)$ along with a regularizer C that prevents overfitting by penalizing the deviation of the posterior P from the prior P0" should probably be "maximizing the lower bound $R_{\mathrm{PAC}}$ can be viewed as maximizing the empirical reward $R_{\mathcal{M}}(P)$ along with minimizing a regularizer C that prevents overfitting by penalizing the deviation of the posterior P from the prior P0"; and "with reachability-based RL we enforces" should be "we enforce". Originality: The authors extend the existing PAC-Bayes baseline. The theoretical analysis, that additional learning performed on Labs sampled from the distribution D would improve performance in a Real world sampled from D, is straightforward, but the work provides a thorough and novel exploration of this idea. Significance of this work: This work addresses the highly relevant problem of navigating safely in novel environments. Although a significant number of safety violations occur, the authors show that they significantly improve on existing methods. The proposed method appears to be limited by the facts that (1) the Real environments must be drawn from the same distribution as the Lab environments, limiting generalizability, and (2) a very large number of Lab environments is needed to generate useful bounds when training with domain randomization and the PAC-Bayes control framework, as mentioned in A3. List of pros and cons: Pros: the paper generates probabilistic theoretical bounds for the safety of learned controllers; the proposed method is shown to outperform a variety of other baseline methods; a significant amount of information regarding the learning framework is provided in the appendix, which helps improve the reproducibility of the results. Cons: the presentation is occasionally unclear; the generalization guarantees assume no distribution shift between Lab and Real environments, limiting the generalizability of the method and its guarantees; a large number of Lab environments is required when training with a PAC-Bayes control framework; no code or data are provided (access to the code and data used to produce the results in Figure 5 would significantly improve the reproducibility of this result).
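For context, the PAC-Bayes objective $R_{\mathrm{PAC}}$ mentioned in these reviews has the standard structure sketched below. This is a common square-root relaxation of the PAC-Bayes bound for rewards in $[0,1]$; the paper's exact bound and constants may differ.

```latex
% With probability at least 1 - \delta over N training (Lab) environments
% drawn i.i.d. from \mathcal{D}, simultaneously for all posteriors P:
R_{\mathcal{D}}(P) \;\ge\; R_{\mathrm{PAC}}(P)
  \;=\; \underbrace{R_{\mathcal{M}}(P)}_{\text{empirical reward}}
  \;-\; \underbrace{\sqrt{\frac{\operatorname{KL}(P \,\|\, P_0)
        + \log\frac{2\sqrt{N}}{\delta}}{2N}}}_{\text{regularizer } C:\
        \text{deviation of posterior } P \text{ from prior } P_0}
```

Note that the left-hand side is an expectation over environments and policies, i.e., an on-average guarantee, which is exactly why the first review asks whether such a bound is suitable for worst-case safety.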
### Summary:
The paper is well written and of high quality. The proposed method is well motivated and thoroughly justified, both theoretically and empirically. Please consider the comments from the reviewers in the final version.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
This paper discusses the convergence of the gradient flows of two-layer neural networks. The authors claim convergence of non-convex gradient flows to the global optimum, which is different from existing works. The theoretical analysis framework seems to be novel, and sufficient conditions are proposed to support the strong theoretical results. Strengths: (1) clear presentation and strong logical structure; (2) the treatment of the corresponding non-convex max-margin problem, with its dual reformulation, is clear and, as far as I can tell, mathematically correct; (3) the geometric discussion of the neural gradient flow is clear, especially based on Figures 1-4. Although I am not an expert on this topic, the paper is well written, and the novelty of the idea seems clear after careful reading and thinking.

This work analyzes the training dynamics of two-layer ReLU networks applied to separable data. Based on the equivalent convex formulation by Pilanci & Ergen (2020), the main result states that the gradient-flow training dynamics provably converge to a maximum-margin classifier. Overall, I feel that the technique and the result are novel and of interest to the community. While other approaches, such as NTK or mean-field, have established global convergence guarantees, those results are asymptotic in the size of the neural network and do not provide the interpretation of the max-margin classifier. I feel that these results make the line of work on the convex formulation more complete and will serve to further encourage the line of work analyzing neural networks based on some hidden form of convexity. As an aside, it would be great if the authors could comment on the extendability of this approach to the setup of three-layer ReLU networks, as studied in Ergen and Pilanci (2021). Establishing training guarantees for two-layer ReLU networks based on the convex formulation of Pilanci & Ergen (2020) is sufficiently novel and interesting.

The paper studies the subgradient flows arising when training a two-layer ReLU neural network. To this end, the non-convex max-margin problem is reformulated as a convex optimization problem. The authors then analyze the dual extreme points of the convex formulation and show the implicit regularization of unregularized gradient flow as convex regularization. Then, for the binary classification problem, it is proven that the KKT points of the non-convex max-margin problem correspond to KKT points of the convex max-margin problem if the direction is dual feasible. The paper then demonstrates that this is the case under some conditions involving spike-free matrices and orthogonally separable data. Finally, it is shown that if the dataset is orthogonally separable and the initialization is sufficiently close to zero, the limiting point of the gradient flow is the global minimizer of the max-margin problem. The main contribution of the paper is to prove that the non-convex gradient flows for the max-margin problem converge to KKT points of the convex max-margin problem. This is a significant result which may be of interest to parts of the community. On the other hand, the structure of the paper can be improved: for instance, Section 2.2 is labelled "Outline of our contributions", but this section already contains some preliminary results (e.g., the characterization of the dual extreme points, as well as the discussion of the implicit regularization of unregularized gradient flow), while the main contributions are not discussed there. Moreover, I found Section 5 a bit hard to follow, since the purpose of the individual lemmas is not made clear directly in the section; the flow of the section could be improved by providing additional information about the role they play in the overall proofs. Finally, the abstract says that the paper presents numerical results verifying the predictions of the theory; however, those experiments are only contained in the appendix. Overall, while the paper contains some interesting results, the presentation and flow can be improved; therefore, I am leaning towards rejection of the paper. After rebuttal: The authors made several improvements to the flow and structure of the paper, as suggested in my review; therefore, I have increased my score. The paper contains some interesting results which may be of interest to the community, but the presentation and flow of the paper can still be improved.
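For readers unfamiliar with the setup, the non-convex max-margin problem these reviews refer to has the following standard form for a two-layer ReLU network with $m$ neurons; this is a sketch using common notation, which may differ from the paper's.

```latex
\min_{\{(u_j,\,\alpha_j)\}_{j=1}^{m}}\
  \frac{1}{2}\sum_{j=1}^{m}\bigl(\|u_j\|_2^2 + \alpha_j^2\bigr)
\quad \text{s.t.}\quad
  y_i \sum_{j=1}^{m} \alpha_j\,\bigl(x_i^{\top} u_j\bigr)_{+} \;\ge\; 1,
\qquad i = 1,\dots,n,
```

where $(\cdot)_+ = \max(\cdot, 0)$ is the ReLU. The implicit-bias claim is that the unregularized (sub)gradient flow converges in direction to a KKT point of this program; the conditions discussed above (spike-free data matrices, orthogonally separable data) are what upgrade such a KKT point to the global optimum of the equivalent convex reformulation.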
### Summary:
The paper makes progress on the important question of implicit bias in gradient-based neural learning. Remarkably, the authors derive reasonable conditions for global optimality.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
The authors investigate the train and test error of DeepONets, as a function of the number of parameters, on the pendulum problem. They empirically show that the double descent phenomenon observed with, e.g., deep ResNets on MNIST/CIFAR data is not seen with DeepONets on the pendulum problem. It would be worth demonstrating that architectures other than DeepONet do show double descent on the pendulum problem; without this, it is hard to assess the value of this work. The experiments are interesting, but the results would be easier to judge with simpler graphs. There are some other notable differences from the double descent work in [27]: for example, adding label noise seems to shift the local maxima of the test error towards models of lower complexity. I am not sure whether these discrepancies with [27] are due to the notion of generalization for this particular task, the way data is sampled when learning differential equations, or the choice of architecture. It would be good to see a comparison between DeepONets and, for example, fully connected networks on this task. The work lacks any discussion of why DeepONets might behave differently when the effects of overparameterization are considered; if the authors contrasted this architecture with others in their experiments on the pendulum task, that would perhaps help further analysis. Minor things: I would recommend making the figures using vector graphics, as they are blurry and hard to read; I would also recommend keeping the training curves and the validation curves in separate figures, since for some of the figures it is hard to clearly distinguish local maxima/minima in the error curves.

The authors study the effects of overparameterization in DeepONets for finding the solution of the differential equation of a pendulum. They do this by turning the task of finding the solution into a supervised learning task with appropriate input and output data. Through their experiments, they show that, contrary to observations in the supervised learning literature, for DeepONets solving the pendulum problem the first local minimum (instead of maximum) occurs at the interpolation threshold. They further show that double descent only occurs under certain conditions, and they do not observe any triple descent. I feel that the scope of the empirical study performed in the paper is too limited for any substantial conclusions to be made, either about DeepONets or about any task that involves solving a differential equation. As observations in an empirical study, yes, the results are interesting, but the paper lacks any reasoning, mathematical or otherwise, for the observations made.

Summary: The paper under review investigates the effects of overparametrization when learning the solution of the ODE for the pendulum problem via so-called DeepONets, which are certain neural networks. This is a simple supervised learning task where the training data is generated via a standard ODE solver. The behavior of the training and test error as a function of the number of parameters is then analyzed: the test error has a first local minimum when the number of parameters equals the training data size, which is in contrast to standard observations; moreover, no significant double descent or triple descent phenomena are observed. Comments: The paper provides an empirical study for a very specific setting, without giving mathematical reasons for the observed phenomena. In my opinion, the conclusions cannot be translated to general DeepONets, since the architecture, and in particular the problem, is very specific; the research questions asked in Section 1 have only been answered for the pendulum problem under a very specific architecture. Moreover, the results should be compared with those of [22], where theoretical results are provided and where, for instance, the gravity pendulum is analyzed.
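The data-generation step described in the last review, producing supervised training pairs for the pendulum with a standard ODE solver, is simple to reproduce. Below is a minimal sketch using SciPy; the sampling ranges, trajectory length, and dataset layout are illustrative assumptions, not the paper's protocol.

```python
import numpy as np
from scipy.integrate import solve_ivp


def pendulum_rhs(t, y, g=9.81, length=1.0):
    """Gravity pendulum ODE theta'' = -(g/L) * sin(theta),
    written as the first-order system y = (theta, omega)."""
    theta, omega = y
    return [omega, -(g / length) * np.sin(theta)]


def make_dataset(n_trajectories=1000, t_max=5.0, n_steps=100, seed=0):
    """Sample random initial angles, integrate with a standard ODE
    solver, and return (initial condition, trajectory) pairs of the
    kind used to train an operator-learning model such as a DeepONet."""
    rng = np.random.default_rng(seed)
    t_eval = np.linspace(0.0, t_max, n_steps)
    inputs, outputs = [], []
    for _ in range(n_trajectories):
        y0 = [rng.uniform(-np.pi / 2, np.pi / 2), 0.0]  # start at rest
        sol = solve_ivp(pendulum_rhs, (0.0, t_max), y0, t_eval=t_eval)
        inputs.append(y0)
        outputs.append(sol.y[0])  # theta(t) along the trajectory
    return np.array(inputs), np.array(outputs), t_eval
```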
### Summary:
The paper considers an interesting architecture, DeepONets, recently proposed for solving dynamical equations, and the authors investigate the double descent phenomenon for this setting on the pendulum problem. The work has the potential to be an interesting contribution to statistical learning theory in the context of dynamical systems applications. The authors are highly encouraged to address the reviewers' remarks for their poster, such as by reviewing how the double descent phenomenon emerges in the usual setting on the pendulum task when DeepONets are not applied, as a comparison study that can better highlight the reported results.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper studies the expressive power of symmetric neural networks used to compute functions on sets specifically relational networks and deepsets architectures an exponential separation result is given where a certain function can be represented efficiently using a relational network with symmetric width l1 yet cannot be approximated by a deepset architecture to accuracy better than a constant unless its symmetric width is exponential in either of n the size of the set or sqrtd where d is the dimension of each element strengths the paper is wellwritten and easy to follow the proof technique used bears a certain subjective elegance the mathematical tools used in the proof which leverage symmetric polynomials to establish the main result seem like the natural machinery for studying the expressive power of symmetric architectures and the proof idea seems like a clever use of these tools weaknesses it feels like the paper does not motivate the study of the problem addressed in it sufficiently to give a concrete comparison the separation in 1 is wellmotivated since it was widely observed that deeper architectures are much more successful in many learning tasks it is not clear if this is also the case when using relational networks instead of deepsets the proof technique seems to make an essential use of complex numbers and analytic activations while the authors are overall transparent about these weaknesses it doesnt feel like they are properly discussed it was shown in 2 that certain functions used to separate depth cannot be learned using gradient methods since the construction used to approximate the target function using a relational network seems intricate and also uses a relatively deep architecture im not sure if it can be learned efficiently while i find the result in the paper interesting nevertheless i think it would be appropriate to add a comment about this citations used 1 eldan and shamir the power of depth for feedforward neural networks 2 malach et al the connection between approximation depth separation and learnability in neural networks yes docsepthis paper considers the representational power of neural network functions it considers two types of network architectures the deepsets architecture which treats the inputs in a permutation invariant manner and the relational network architecture which allows for pairwise interaction among the inputs the question considered in this paper is to compare the representation power of these two types of architectures the main result is a width separation between the deepsets and the relational network architectures it shows that there exists a function such that for any width less than expmininput dimension sqrtinput set size the best function from the deepsets family incurs a constant error on the other hand with polyinput dimension input set size width the relational network could represent the same function up to arbitrary small error the lower bound for the onedimensional case is as follows the deepsets networks with width l restricts the space of functions to within some space of rank relating to l so for a high rank function its orthogonal projection to this rank l space could still be quite large the highdimensional case requires using a highdimensional powersum polynomials however the construction of the lower bound instance is more delicate than the above strength this paper deals with an important yet technically challenging question the proof requires sophisticated machinery from symmetric polynomial theory however the 
authors do a great job of explaining their results and building up their proofs which is instructive for a reader the writing of the paper is a pleasure to read with particular attention paid to the proof details the width separation between deepsets and relational networks would be a significant contribution to the literature for example one could imagine that the machinery developed here might help separate architectures in other settings such as graph neural networks where this sort of permutation invariance appears quite often weakness the main result relies on a certain analytic assumption on the activation functions of the neural network as the authors have discussed in the limitations section both the limitations and the potential negative societal impact are discussed in the paper docsepthe paper studies expressivity capabilities of neural networks that belong to the class of symmetric neural networks two most prominent examples that are used as motivation for the current work is the socalled deepsets and relational networks even though these are universal approximators for representing symmetric functions what are the depthwidth tradeoffs and approximation guarantees this question reflects the analogous questions for standard feedforward neural networks the main contribution of the paper is to explicitly construct symmetric functions which provably require exponentiallymany neurons in the deepsets model yet are efficiently approximated with selfattention the crucial parameter controling the expressivity of the network here is the socalled symmetric width and this leads to a conceptually different set of results in comparison to standard depth vs width tradeoffs the difference in the two architectures is presented by eq 2 vs eq 4 where the latter allows for pairwise interactions among set elements under plausible assumptions the main result of the paper is formally theorem 24 is to provide a family of analytic symmetric functions g which leads to two important properties in order to prove separation results the first part says that singleton symmetric networks ie those that dont allow for pairwise interactions are insufficient to approximate g unless they are exponentially large in terms of their symmetric width whereas the second part says that a simple pairwise symmetric network will incur negligible loss when trying to approximate g the notions of error used are for the lower bound in the first part the authors used the notion of l2 error under a suitable data distribution and for the second part in the upper bound they use the infinity norm error solid contribution for theory of expressivity in neural nets that offers new perspective and techniques to a different set of architectures conceptually interesting the fact that the result bears differences with standard results of feedforward neural nets separations weaknesses limited literature comparisons eg chulhee yun et al are transformers universal approximators of sequencetosequence functions l2 error bound instead of l1 error bound please see questions below overall the reviewer thinks that the paper has to offer sth new to the much needed theory of neural nets for symmetic function representation and that the paper does so in a technically solid manner na docsepthis paper provides a novel result on the relative expressiveness of two popular architectures for symmetric functions deep sets pooling over individual inputs and relational networks pooling over pairs of inputs this is a very wellwritten paper the narrative is easy 
to follow and the key ideas are explained clearly the key result is novel and interesting and motivation is provided in the text this paper will certainly be of significant interest to researchers working with symmetric neural networks most of the paper is devoted to setting up mathematical context or describing the mathematics of proving the main result the mathematical content here is explained and structured very clearly but perhaps a small amount of additional space could be given over to motivation and practical context for the result which although present is currently a little lacking i see no potential for negative societal impact the authors briefly note a major limitation of their result i think that this brief note is adequate since it correctly identifies the most important possible strengthening of the result and explicitly leaves it open for future work
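to make the eq 2 vs eq 4 distinction discussed above concrete, here is a minimal numpy sketch of the two pooling schemes; the maps phi, psi and rho are illustrative placeholders rather than the paper's constructions, and the symmetric width corresponds to the dimension of the pooled vector.

```python
import numpy as np

def deepsets(X, phi, rho):
    # singleton pooling (eq 2 style): sum features of individual elements
    return rho(sum(phi(x) for x in X))

def relational(X, psi, rho):
    # pairwise pooling (eq 4 style): sum features over all ordered pairs
    return rho(sum(psi(x, y) for x in X for y in X))

# toy instantiation: both outputs are invariant to permutations of X
rng = np.random.default_rng(0)
X = [rng.standard_normal(3) for _ in range(5)]
phi = lambda x: np.tanh(x)                          # per-element map
psi = lambda x, y: np.tanh(np.concatenate([x, y]))  # pairwise map
rho = lambda h: float(np.sum(h))                    # readout
print(deepsets(X, phi, rho), relational(X, psi, rho))
```

the separation result discussed in these reviews says that for some analytic symmetric targets the pooled dimension in the first scheme must grow exponentially in the input size, while the second scheme approximates them with negligible loss.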
### Summary:
|
four domain experts recommended acceptance for this paper and i agree with their assessment the writeup presents its setup result and argument all very clearly the width separation result is solid and characterizes an exponential gap in expressive capacity between two material neural net architectures reviewers agree that the analysis carries independent interest/novelty as well and are optimistic that it might be useful in other separation arguments eg reviewer ddfps comment about graph neural nets especially in initial reviews there were some concerns raised around motivation and grounding in practice for instance reviewer yvxw naturally questioned the importance of separating the two types of architectures studied here specifically asking whether this was grounded in a known empirical discrepancy between them i think this was addressed well in the authors subsequent response with references and it seems the reviewer agrees still i found this thread helpful and i suspect that readers will naturally ask a similar question i would recommend that the authors consider incorporating some of this reply into the paper itself whether/how to do this is up to them and doesnt bear on my acceptance recommendation in the same discussion there was a good point raised to remark on learnability eg by gradient descent the authors mentioned they would comment on this in the next draft and id encourage that further as well thanks to reviewers and authors both for their work overall this is a nice research contribution and a wellwritten paper
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper proposes to use mixture of experts for image clustering the individual expert for each cluster adopts an instance discrimination approach for training the proposed method has shown superior clustering performance compared to an extensive number of clustering methods on a reasonable collection of data sets though the overall idea of the proposed method is clear the paper does not seem to explain some technical details clearly enough in the equation the probability p(y, z|x) does not seem to be correct the product term over k appears to be valid only when it uses a hard assignment ie p(z_n|x_n) = 1 for exactly one of the clusters the student and teacher embeddings are used before equation 4 but have not been explained until a later part of the paper it is also unclear why an instance discrimination approach would lead to a better clustering performance although an extensive number of clustering methods have been included in the experiments it has omitted some strong competitors including variational deep embedding jiang et al 2016 latent tree variational autoencoder li et al 2019 deep clustering via a gaussian mixture vae with graph embedding yang et al 2019 etc those methods appear to perform better on some of the data sets for example those methods have been reported to yield over 85% accuracy on the stl10 data set but the proposed method yields only 75% on the same data set it would be better if the paper could include those methods in the experiments or justify the selection of baseline methods overall the proposed method appears to be an interesting combination of some existing methods however the technical details need better clarity and the experiments should include more relevant methods docsepsummary authors present a mixture of experts type of method to solve a clustering with unsupervised learning problem the method is called mixture of contrastive experts mice which uses contrastive learning as a base module and combines it with latent mixture models authors develop a scalable algorithm for mice and empirically evaluate the proposed method for image clustering recommendation i am tending towards accepting the paper rating 6 the reason for the acceptance is the novel method supported by empirical evidence reasons for the score not being too high are some weaknesses mentioned in the details later strengths 1 authors address an image clustering algorithm given the number of clusters the submission is clear technically correct and presents novel findings 2 the proposed approach is well motivated weaknesses/questions 1 all recent papers have imagenet10 as one of the five common datasets [1, 2] why was it omitted in the current paper 2 it looks like the method is very tied up with moco and developed on top of it is there an easy/quick way to use other backbones like simclr instead of moco and still preserve all the steps in the method 3 why were images in imagenetdog resized to 96 x 96 x 3 4 almost all prior methods and the proposed method mice assume that the number of clusters is known which shouldnt ideally be the case but it looks like the proposed method mice uses the information in a better way by assuming number of experts = number of clusters can one use more or fewer experts than the number of clusters k and still partition the sample data into k clusters can one easily use over clustering as presented in iic 5 how does the proposed method mice fare in terms of computation complexity when compared to moco 6 how do mu and omega the expert and gating prototypes differ during the training since the prediction value is the sum of
the expert probability weighted by the gating function one would expect the gating prototypes and expert prototypes to be similar is this true how is consistency maintained is there a clear correspondence eg prototype 1 in the expert matches to prototype 2 in the gating function 7 can authors elaborate more about the usage of the max-mahalanobis distribution mmd and how exactly does it solve the issue of unnecessary difficulties in partitioning the dataset if some of them are crowded together 8 since the images of cifar10 and cifar100 are smaller than imagenet images following chen et al 2020 we replace the first 7x7 conv of stride 2 with a 3x3 conv of stride 1 for all experiments on cifar10 and cifar100 the first maxpooling operation is removed as well for fair comparisons is removing the first maxpooling operation standard practice is there any performance loss when max pooling is not removed minor 1 what do nmi and ari of 1 mean dont they have to be in the range of [0, 1] [1] huang, jiabo, shaogang gong, and xiatian zhu. deep semantic clustering by partition confidence maximisation. in proceedings of the ieee/cvf conference on computer vision and pattern recognition, pp. 8849-8858, 2020. [2] wu, jianlong, keyu long, fei wang, chen qian, cheng li, zhouchen lin, and hongbin zha. deep comprehensive correlation mining for image clustering. in proceedings of the ieee international conference on computer vision, pp. 8150-8159, 2019. docsepsummary and contributions inspired by the mixture of experts authors propose an image clustering algorithm using a mixture of contrastive experts where each of the conditional models is an expert in discriminating a subset of instances based on contrastive learning to this end they use a gating function to partition an unlabeled dataset into subsets that are according to the latent semantics discriminatively distinct where the gating function performs a soft partitioning of the dataset based on the cosine similarity between the image embeddings and the gating prototypes the authors carry out experiments on four widely adopted natural image datasets to evaluate the performance of the method in these tasks and compare it to competing methods and baselines correctness and clarity the paper is wellwritten with informative figures and tables the paper presents the idea in a clear and straightforward manner and is solidly built on top of the current literature authors convincingly tested the method with multiple sota and baseline methods and the results look correct to me reproducibility the details of the experiments implementation and the public datasets are included in the paper thanks also for sharing the code additional feedback and suggestions since the goal of the paper is image clustering providing some visual results is appreciated also i am curious to see the performance of the method when we have a large number of clusters in our dataset eg imagenet decision the idea of using a scalable variant of the expectationmaximization em algorithm to help with the nontrivial inference and learning problems caused by the latent variables seems interesting to me and overall the technical novelty together with the fine evaluation are good enough for iclr in my opinion docsepthe paper presents an image clustering methodology based on mixture of experts moe although moe has been proposed for supervised learning problems the authors exploit the instance discrimination framework to apply the moe idea for image clustering this is a novel aspect of the proposed method the moco framework for unsupervised contrastive
learning of image representations is employed to define a mixture of moco experts model where each expert additionally includes a cluster prototype vector to facilitate clustering this unified approach for simultaneous mocobased representation learning and clustering seems to provide better results than the twostage approach of first applying moco and then using kmeans clustering on the obtained representations a probabilistic formulation of the method is presented along with a training approach based on the em algorithm for likelihood maximization there are several concerns related to presentation and clarity comments to be addressed 1 it would be easier to understand the contribution of the paper if the moco approach were initially described and then the proposed method was presented as a mixture of moco experts the paper in its current form section 3 is difficult to follow since several moco ideas are mentioned eg student and teacher network ema etc without being intuitively explained 2 in section 3 that describes the method there is no reference to image augmentation although it is a critical aspect of the approach use of image augmentation is only mentioned at the end of the appendix 3 a pseudocode describing the exact steps of the proposed method is imperative 4 due to some approximations made is it possible to prove convergence of the proposed em procedure 5 gating prototypes omega remain fixed during training it is important to provide more details on the mmd method used to specify them 6 it seems strange that while omega are specified using embeddings from the initial network g(x) they are not involved during training 7 a bad specification of omega is expected to have a strong negative influence on the results that cannot be recovered 8 what is the size of minibatch b in eq 10
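several of the questions above (how the expert and gating prototypes interact, and how omega is used) concern the gating-weighted mixture scoring; a minimal numpy sketch of that scoring as i read these reviews is given below. the shared embedding v, the temperature kappa and the unnormalized exponential scores are illustrative simplifications (the actual method uses separate student and teacher embeddings and a contrastive instance-discrimination objective), so this is a sketch of the idea, not the paper's exact model.

```python
import numpy as np

def cosine(v, P):
    # cosine similarity between an embedding v and each prototype row of P
    return (P @ v) / (np.linalg.norm(P, axis=1) * np.linalg.norm(v) + 1e-8)

def cluster_posterior(v, omega, mu, kappa=10.0):
    gate = np.exp(kappa * cosine(v, omega))  # soft partition via gating prototypes omega
    expert = np.exp(kappa * cosine(v, mu))   # each expert k scores v against its prototype mu_k
    post = gate * expert                     # expert score weighted by the gating function
    return post / post.sum()                 # normalize over the k clusters

K, d = 10, 128
rng = np.random.default_rng(0)
v = rng.standard_normal(d)           # image embedding
omega = rng.standard_normal((K, d))  # gating prototypes (fixed at initialization, see point 5)
mu = rng.standard_normal((K, d))     # expert prototypes (learned)
print(cluster_posterior(v, omega, mu).round(3))
```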
### Summary:
|
thanks for your submission to iclr this paper considers a novel unsupervised image clustering framework based on a mixture of contrastive experts most of the reviewers were overall positive about the paper on the positive side they noted that the paper had an interesting idea was well motivated written well and had solid results also the authors provided detailed and useful responses to the reviews which further strengthened the case for accepting the paper on the negative side one reviewer felt that the paper seemed a bit preliminary and its presentation could improve also there was some concern about missing comparisons and discussion of previous work including from a public comment and about data sets eg imagenet10 again the authors responded well to these concerns given that the overall response was quite positive with the paper im happy to recommend accepting it
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
summary the paper proposed a simple but powerful idea of assuming a linear decomposition structure of the centralized critic into individual critics it demonstrated 2 existing problems in the baseline mapg agents coma and maddpg namely the centralizeddecentralized mismatch problem and credit assignment issues and offers a theoretically motivated solution with carefully conducted experiment results including ablation studies as well as comparison with sota gradientbased and valuebased algorithms reasons for score the paper is very well written its proposal is simple but theoretically motivated to solve existing problems in mapg algorithms the experiment results support its claims and provide additional insightful analysis therefore i recommend accept however its theoretical justification is weakened by assuming a tabular case of the pi and q functions which is rarely the case in practice furthermore in proposition 1 the conditioning just above eq 11 looks quite particular and im not sure how generally it is satisfied in practice last but not least it would be nice to see comparisons with more recent value decomposition algorithms such as ndq and qtran/qplex overall i gave it a score of 7 pros 1 the paper is organized in a very clear fashion it is well motivated by existing problems and provides a simple and effective solution 2 it conducted clever experimental analysis that made the advantages of dop very obvious and understandable 3 its method of linear decomposition of the central critic into individual critics is simple yet powerful the paper gave adequate theoretical support to this method cons 1 both the tabular assumption on functions pi and q and especially the conditioning of proposition 1 weaken the theoretical guarantee that the algorithm will converge to local optima 2 the paper did not compare with more recent value decomposition algorithms such as ndq and qtran/qplex its unclear that the baselines are the sota results on the chosen domains questions during rebuttal period please address the two points in cons above docsepthis work motivates the use of a factorized critic for multiagent policy gradient the technique is wellmotivated and the exposition anticipates and answers readers likely concerns the experiment section is wellorganized supports the papers major claims and is empirically compelling the policy improvement claims in section 412 are initially unintuitive but ultimately are intelligible as an agent-block-coordinate local optimality statement however this reviewer is not clear on the quality of these local optima ie when do we get trapped for example is it possible to design a task where the local optima are all very poor of course the experiment section indicates many benchmark tasks are amenable to this decomposition but perhaps reasoning about this would help in redefining multiagent problems to encourage success eg it would be interesting if adding actions that communicate information directly between agents mitigates the local optima problem docsepthis paper focuses on the problem of multiagent reinforcement learning marl for the ctde scenario which is well studied in recent literature the work discusses shortcomings of actorcritic methods for marl and proposes a solution using a linearly factored critic the paper is somewhat difficult to read and can be made better by deferring the details about previous methods to the appendix however my main concern is with the problem of centralizeddecentralized mismatch cdm motivated in the paper and its proposed solution itself 1 how exactly is a
regular critic bad as such a critic is supposed to be true to the policy the requirement of decentralization has little bearing on the variance of policy gradients gradient noise increases with the number of agents irrespective of whether there is centralized or decentralized execution 2 the socalled problem of cdm seems rather redundant see 1 above for example the authors say in page 3 3rd para from bottom in line 3 that if the critic expectation under the policy is negative then individual policy performance is hurt such a problem can easily be fixed using baselines see sutton and barto 2018 for example 3 how is a linear factored critic compatible with an arbitrary joint policy in general this is not true and requires many strong assumptions see for example bhatnagar 2009 while the authors acknowledge this bypassing the actual complexity of modelling a joint critic with a linear one will in general render it insufficient to model interagent interactions this puts into serious question whether coordination is required in the experiment domains in the first place and whether the performance improvement is just coming from a biased albeit easier to learn critic 4 there are some unsupported claims which need better explanation like this becomes problematic because a negative feedback loop is created in which the joint critic is affected by the suboptimality of agent i which disturbs policy updates of other agents how is that so the updates in principle can affect the policies of already suboptimal agents which might fix them 5 learning the decomposed critic implicitly realizes multiagent credit assignment because the individual critic provides credit information for each agent to improve its policy in the direction of increasing the global expected return again how so claims like this need to be well supported 6 expectations are usually sampled so in principle even the oan can be estimated with fewer samples incurring some variance it might not be necessary to bias the critic drastically for this 7 the authors need to shed more light on when the precondition q_i(tau, a_i) >= q_i(tau, a_i') iff beta_{a_i}(tau) >= beta_{a_i'}(tau) in prop 1 holds beyond tabular settings it seems a rather strong assumption to hold for all trajectory and oan inputs right now it seems rather grab-bag to show policy improvement 8 why isnt the comparison on sc2 done against more recent baselines like qtran maven roma etc docsepin the context of centralized training and distributed execution in cooperative multiagent reinforcement learning marl the paper proposes an architecture to learn a decomposed action value function expressed as a weighted sum of the agents individual functions plus an additional weight those weights are themselves learned and depend on the observed history thanks to this decomposition gradients can be decomposed over each agent the authors propose to use a combination of offpolicy using tree backup and onpolicy using td(lambda) methods for estimating the decomposed critic they formulate both deterministic and stochastic decomposed policy gradients which are analyzed theoretically to some extent and evaluated experimentally pros the paper contributes to the active effort of designing more efficient marl algorithms the authors introduce the idea of value decomposition which was first investigated in valuebased methods into the actorcritic scheme the experimental results suggest that the proposed combination of value decomposition and offpolicy critic training has good performance the writing of the paper is clear the identification of the
issue of centralizeddecentralized mismatch and how it is tackled by the proposed methods are useful and interesting cons the formulation of prop 1 is hard to understand what is q_i^{phi_i} is it the value function wrt pi or pi_0 besides how do we ensure that the strict inequality about q_i^{phi_i} holds some theoretical analyses eg a3 or c are only loosely related to the actual proposed method although i agree that a direct analysis would be difficult to conduct
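as a hedged sketch (notation mine, based only on the description in these reviews), the linearly decomposed critic and the per-agent policy gradient it induces can be written as

```latex
% linear value decomposition of the centralized critic
Q_{tot}(\tau, \mathbf{a}) = \sum_{i=1}^{n} k_i(\tau)\, Q_i(\tau, a_i) + b(\tau),
\qquad k_i(\tau) \ge 0
% per-agent policy gradient: the other agents' terms do not depend on a_i,
% so they vanish in expectation and each agent only needs its own critic
\nabla_{\theta_i} J \approx \mathbb{E}\!\left[\, k_i(\tau)\, \nabla_{\theta_i}
\log \pi_i(a_i \mid \tau_i)\, Q_i(\tau, a_i) \,\right]
```

this makes concrete both the appeal the positive reviews point to (each agent's update only touches its own critic term, which addresses credit assignment and cdm) and reviewer 3's objection (a critic linear in per-agent terms cannot in general represent arbitrary inter-agent interactions).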
### Summary:
|
the paper presents a decomposition of the value function in the context of ccda centralized critic decentralized actors most reviewers find this paper clear and well written although one reviewer suggests to change the paper structure the method presented in this paper is simple and well justified by a theoretical section experiments on several domains including starcraft 2 micromanagement tasks are supporting the claims of that section after some reviewers pointed out that the tabular setup is not useful in practice the authors have extended the empirical and theoretical results to a more general setup some reviewers point out that some theoretical results may not be directly related to the experimental findings in particular reviewer 3 does not support a central claim of the paper and finds that cdm is misleading and not provably representing the core problem in general reviewer 3 does not support acceptance of this paper but i still believe this paper should be accepted based on the other reviews clearly in favour of acceptance i hope that the authors and reviewer 3 will be able to further discuss and reach an understanding which hopefully should lead to fruitful results
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
docsepin this paper the authors consider the question of how to explain model uncertainty in terms of the input features whereas prior work eg clue uses a counterfactual approach the authors develop an attributionbased approach assigning a score to each feature the particular method is built on integrated gradients ig a baseline is identified that resembles the input but has no prediction uncertainty and a nonstraightline path is followed by traversing a vaes latent space the experiments suggest that the proposed method igh has some advantages relative to clue as well as simple adaptations of ig/lime/shap to uncertainty attribution overall the method appears to be sound and reasonable it seems quite efficient and the results look positive modulo some qualms i have about the experiments particularly the lack of metrics the problem area is also an interesting and important one below ill delve into some possible areas of improvement the role of bayesian model uncertainty the authors claim beginning with the first sentence of the paper that this method is designed for bayesian models however even nonbayesian models can have predictive uncertainty particularly classification models it may not be wellcalibrated but thats a separate issue i wonder then why the presentation is focused on the bayesian setting the only advantage i can see is that it permits a distinction between aleatoric and epistemic uncertainty and the generation of separate entropy/aleatoric/epistemic explanations this is nice in theory but the authors dont really make the case that this is useful or at least it doesnt appear useful in the experiments this may be worth rethinking because its not advantageous to the authors to restrict their methods application and its misleading to readers to imply that nonbayesian models dont permit the analysis of predictive uncertainty criticism of counterfactual uncertainty explanations in the introduction one of the main criticisms of counterfactual approaches most importantly is that they dont satisfy properties like completeness and implementation invariance thats an odd criticism because those properties dont even mean anything outside the context of feature attribution explanations how for example could a method for generating counterfactuals satisfy something resembling completeness with this in mind is the real criticism then perhaps that counterfactual explanations lack properties like the ig or shap axioms to imply a unique principled approach that point would make more sense to me also arent the authors subjecting themselves to any issues inherent to counterfactual explanations by using counterfactuals as part of their method anyway the point is this came across as a facile argument and it would help to think this through more carefully a multitude of ways to do uncertainty explanation the first paragraph of section 21 is getting at an interesting point once weve decided to explain predictive uncertainty rather than the prediction itself we find ourselves in the situation of being able to apply virtually any existing explanation method by just swapping out the quantity to be explained i believe this is one of the key points made in covert et al 2020 and it might be worth saying that more clearly the authors seem to realize this because they adapt versions of ig lime and shap to do uncertainty explanation in their experiments in my view it would be better for readers if the authors described the situation more clearly the fact that we can adapt any explanation method either removalbased
or gradientbased discuss the reasons why we might prefer gradientbased over removalbased as they appear to think this is the case as well as what modifications are necessary eg having a baseline that encodes low uncertainty rather than high uncertainty rather than just jumping into their method with these questions left unexamined about the method and experiments arbitrariness of counterfactuals there are many aspects of the method that could lead to different counterfactuals and hence different explanations this is not a desirable property in an explanation and the experiments leave these potential points of sensitivity virtually unexamined for example what if changing the optimizer or learning rate in algorithm 1 yields a very different result because of nonconvexity what if we had a different vae what if changing the distance metric even by a constant multiplicative factor yields a different counterfactual the results may be very sensitive to the explanation algorithms implementation ideally this wouldnt be the case none of the other baselines seem to have this unwanted property but given that it is could the authors do some sensitivity analysis to see how critical these things are or at least acknowledge this somewhere in the main text lack of metrics the experiments are purely qualitative and presuppose that we the readers know the ground truth for what contributes to the uncertainty in each image thats maybe reasonable for figure 5 but probably not for figures 6 or 7 how for example are we supposed to know why the model is uncertain about these fashionmnist examples im pretty opposed to accepting any model explanation paper that doesnt verify its results in a quantitative manner so i would strongly encourage the authors to come up with some more rigorous metrics during the revision period if they want to discuss this id be happy to make some recommendations how much work is done by the counterfactual vs ig im concerned that igh works because of the counterfactual generation step and that the ig component is unnecessary when determining a counterfactual we find an input similar to x that is likely to differ in a small number of dimensions particularly if we use the l1 distance for d for features that are unchanged arent we guaranteed to then see zero or nearzero attributions that suggests that the counterfactual step plays a critical role in making the explanation sparse and determining which features are eligible for nonzero attributions with that in mind how good would the explanation be if we just looked at psi(z0)_i - x_i for each feature/pixel x_i or how about if we took psi(z0) and used it as a baseline in ig with the straight line path or what if we used it as a baseline for shap i would be very curious to see if any of these work unfortunately its going to be pretty hard to tell without any metrics hence the request above smaller issues in the experiments in figure 5 the proposed column does not appear to match the entropy column whats going on with that vanilla ig is a confusing name for the version of ig adapted to explain predictive uncertainty is there a better name for this describing shap and to some extent lime/ig as making an error by identifying regions that decrease uncertainty is a bit of a misunderstanding of whats going on shap for example has a baseline with all the inputs removed with high predictive uncertainty so its natural that some features decrease the predictive uncertainty describing lime as sensitive to the tuning of the segmentation algorithm is true but you
didnt have to tune it to give super pixels quite this coarse a couple of smaller issues gaussian smoothing the use of gaussian smoothing is a bit unusual why is that necessary its a bit strange to describe the ig properties as very important and then apply an operation that may distort them what would the explanations look like without smoothing did you apply the smoothing to all explanations in your results figures or just igh confusing presentation of igh the equations that describe igh look like they may be unnecessarily complicated for example you could have just substituted h or h_a into the ig equation in the place of f rather than applying the product rule without saying so to arrive at the version shown in the text see the two equations below eq 3 why is this version necessary to present is it relevant to how you calculate igh in practice or do you calculate the predictive entropy and differentiate that with autograd overall this paper considers an important and interesting problem and designs a pretty reasonable method i have some qualms with the method and experiments described above but the most important one is the lack of metrics if that were fixed or if the authors made other improvements based on the issues described above i would be inclined to raise my score docsepin this paper the authors propose an attribution method inspired by integrated gradients ig with the goal to explain the entropy using an indistribution path as opposed to vanilla ig that uses a linear path in the original domain the authors show that the proposed method yields saliency maps that look more sensible for a few datasets feature attributions are an important framework within interpretable machine learning and ig is a mathematically grounded method to generate such attribution maps therefore extensions to it are an interesting area to explore the paper does a good job at establishing the problem and going over the two central ideas use of entropy as the explanation target as opposed to say logits and use of indistribution paths however i had a few concerns 1 the ideas presented in the paper are fairly incremental namely changing the explanation target from p to e[h(p)] and using a different path the former is quite straightforward and for the latter there is existing followup work to ig considering other paths see for example blur ig [1] and guided ig [2] 2 the evaluation is severely lacking the paper shows some results on different datasets but its mostly handpicked images with justifications like we notice that our attributions are comparatively neat interpretable and always restricted to facial features the field of explainable ai has been known to just share such qualitative findings but these are prone to experimenter bias and insufficient to convince readers 3 the combination of the preceding two points means that there are missing comparisons with the latest techniques making the claims of the paper appear weaker even for vanilla ig just using two baselines blackwhite followed by averaging shows improvements and is missing here [1] xu, shawn, subhashini venugopalan, and mukund sundararajan. attribution in scale and space. proceedings of the ieee/cvf conference on computer vision and pattern recognition, 2020. [2] kapishnikov, andrei, et al. guided integrated gradients: an adaptive path method for removing noise. proceedings of the ieee/cvf conference on computer vision and pattern recognition, 2021. overall while i think the problem domain is relevant the papers contributions are incremental and the evaluation fails to be convincing for me i am
not inclined for the paper to be accepted docsepthe authors propose an extension of the existing path integrated gradients ig approach for the attribution of uncertainties of bayesian models for image classification tasks ig constructs straight paths between a fiducial image with high predictive entropy and the given image and integrates the uncertainty contributions along this path the authors argue that this method suffers from the limitation that the paths may pass through images outside the data manifold in part because the proper choice of fiducial image is unclear and thus often defaults to a black image to alleviate these issues the authors assume the existence of a variational autoencoder model trained on the dataset with which they aim to generate indistribution paths they define the path as the decoder image of a straight path in latent space the endpoints of which are defined by optimization problems keeping both of them indistribution and the fiducial image close to an image of the same class with low predictive entropy in experiments on mnist fashionmnist and celeba the authors compare their method to existing approaches and argue that their method yields easier to interpret results the authors propose an extension of the existing ig approach by a novel method for constructing indistribution paths i think this approach is interesting and the basic idea welljustified the authors provide extensive experimental evaluation the results are purely qualitative in nature visually the proposed method seems to be easier to interpret than existing approaches at the expense of relying on a variational autoencoder model trained on the dataset under consideration however it is hard for me to judge the significance of the results the submission is mostly clearly written and well organized the submission could further be strengthened the submission lacks a detailed discussion of the final part psi(z) -> x of the constructed path why can we assume that the part psi(z) -> x is insignificant wrt changes in predictive entropy if this is the case why dont we just define the endpoint to be phi(mu(x)) instead of psi(z) in the first place the authors do not discuss the influence of the assumed existence of a vae on the applicability of their work in comparison with existing methods the authors do not discuss how to choose the hyperparameter lambda defining their optimization problem are their results sensitive to lambda how is lambda tuned i feel that sec 33 lacks depth in part because the authors stress that their method respects desirable properties of completeness sensitivity(b) and implementation invariance i would like to see definitions and more detailed discussions of those terms as they seem to be central to the submission the equations below eq 3 define the same symbol delta_i(alpha) with different right hand sides the proposed method is an interesting extension of prior work visually the results indicate an improvement over existing approaches however the significance of these results is hard for me to judge as the submission lacks detailed discussions of key points i lean towards rejection however i will raise my score if the authors improve upon the points i mentioned in the main review docsepthe authors propose a method to interpret model uncertainty for individual input examples that is it attempts to identify which input features eg pixels of an image are contributing to prediction uncertainty measured as entropy in the multiclass prediction probabilities to do this for some query input the
proposed method applies integrated gradients between the input and a fiducial ie high certainty reference which serves as a version of the input which is similar but for which the network is very certain of its prediction class the same prediction class as the query input to find this fiducial reference the method uses a pretrained variational autoencoder to find an input in the latent embedding space of high prediction certainty that also lies closely on the latent embedding manifold integrated gradient interpolation is done linearly in this latent space the authors show through several examples that the explanations for uncertainty are reasonable and compare them to other methods like clue lime and shap lacking global comparison metrics the individual examples shown are both diverse and nicelooking but i think the papers main weakness is a lack of a global metric to summarize the improvements seen by using this method compared to benchmarks eg vanilla ig clue lime shap given the size of these datasets seeing individual examples is important but global metrics are also important to convince readers that it works properly and not just on a few cherrypicked examples limitations of using a vae the use of a vae is unfortunately a limiting factor as it does require training a separate neural network since the proposed method depends so heavily on this latent space learned by the vae it is extremely important that the vae is trained well this is a bit unfortunate because vaes can be a bit tricky to train given their competing loss functions i think this is an important limitation to keep in mind and it might be worth quantifying how much the performance of the proposed method changes by using different vae architectures latent space dimensions etc or different autoencoder types sanyal and ren 2021 discretized integrated gradients for explaining language models proposed somewhat recently improving ig explanations using embeddings by ensuring that the integration path stays within the training distribution many lessons from that paper are also seen here this may be a paper that should be cited if not contemporaneous sanyal and ren 2021 suggest some alternative methods to identifying an indistribution integration path also based on embeddings for example you could perform anchor search using embeddings that come from the last nonoutput layer of the classification model this might be a better reflection of the latent space from the point of view of the classifier we are trying to query and also bypasses the need to retrain a vae the downside is that it depends more on the model architecture theoretical justifications a more minor point the theoretical justification of uncertainty attributions section 2 is appreciated but it was not clear to me how the derivation of h_e is important for the method especially so early in the manuscript i found it to be a bit distracting i would suggest putting most of these theoretical justifications in the supplement/appendix additionally some of the math could benefit from more detailed proofs particularly the epistemic/aleatoric decomposition of h and the derivation of ig with uncertainty the main contributions of this paper seem to be 1 applying integrated gradients to explaining local model uncertainties by explaining the difference between a query input and an analogous highcertainty reference and 2 finding indistribution integration paths by using a vae the idea is interesting and the example results in the paper are encouraging however the paper lacks global metrics that
summarize improvements or lack thereof across entire datasets or large random samples of datasets if the authors can include convincing global metrics of improvement then i would support its acceptance
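as a concrete illustration of the mechanism these reviews debate, here is a minimal pytorch sketch of integrated gradients on predictive entropy along a path decoded from a straight line in a vae latent space; it is an assumption-laden stand-in (a deterministic classifier instead of a bayesian posterior, no gaussian smoothing, placeholder model and decoder), not the authors' exact algorithm.

```python
import torch
import torch.nn.functional as F

def predictive_entropy(logits, eps=1e-12):
    p = F.softmax(logits, dim=-1)
    return -(p * (p + eps).log()).sum(dim=-1)

def ig_entropy_latent_path(model, decoder, z_fid, z_query, steps=64):
    # decode a straight path in latent space into a curved, in-distribution
    # path in pixel space (the fiducial z_fid has low predictive entropy)
    with torch.no_grad():
        alphas = torch.linspace(0.0, 1.0, steps + 1)
        xs = [decoder(z_fid + a * (z_query - z_fid)) for a in alphas]
    attr = torch.zeros_like(xs[0])
    for t in range(steps):
        x = xs[t].clone().requires_grad_(True)
        h = predictive_entropy(model(x)).sum()
        (g,) = torch.autograd.grad(h, x)
        attr += g * (xs[t + 1] - xs[t])  # riemann sum of grad(entropy) · path increment
    return attr  # pixelwise attribution of the query's predictive entropy
```

a completeness-style sanity check is available here for free: the attributions should approximately sum to the entropy difference between the query and the fiducial, which could serve as one of the quantitative checks the reviewers ask for.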
### Summary:
|
the paper explores a method to identify features in an input that can explain uncertainties in the model prediction the proposed approach is similar to integrated gradients ig with a different explanation target and integration path overall the idea seems fairly incremental and the experimental evaluation is lacking and does not sufficiently demonstrate the advantages of the proposed approach evaluation metrics could be improved see suggestions by reviewer n3ei to strengthen the paper
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper summarizes previous lifelong learning methods and identifies three different continual learning scenarios based on that it draws the conclusion that dgr+distill outperforms other methods on all these scenarios further the paper proposes a unified model that combines a replay generator and a classification model the proposed rtf model achieves comparable performance to dgr+distill and is approximately two times faster than dgr+distill my biggest concern is the novelty of the model since rtf is still a replaybased method that is very similar to dgr+distill empirically it can be expected that rtf should behave similarly to dgr+distill as well and the result in this paper justifies that so the main contribution comes from the efficiency boost by the integrated model strategy that is by replacing a separate generative model with a symmetric vae besides that there seems to be no significant contribution of the proposed model in my opinion this paper looks somewhat incremental the first five pages are mostly reviews of previous methods and the model it proposes behaves very similarly to a previous method docsepsummary the paper claims to make three contributions 1 it surveys the current literature on preventing catastrophic forgetting during lifelong learning it explains the apparent inconsistencies in reported results by distinguishing three types of deployment scenarios categorizing the evaluation procedures in the literature accordingly 2 the paper conducts two sets of simulated experiments on mnist data to understand which existing methods do not work well it finds that deep generative replay dgr that learns to generate imaginary new samples from previously seen training data potentially augmented with soft labels seems to work best in these specific experiments but potentially doubles the computational cost 3 to reduce computational cost without sacrificing much accuracy it proposes to integrate the ability to learn to generate imaginary samples into the learning of the classifier itself it does this by augmenting a symmetrical vae with a softmax classification layer connected to the final hidden layer of the encoder comments about significance 1 im not entirely sure if the paper does a good job separating contributions 2 and 3 above cleanly so that each can stand on its own and be fully trustworthy 2 in particular the experimental evaluation depends on the nn architectures chosen here the architectures that were used for the best performing approach in the experiment ie dgr and the classifier were simply combined together to motivate the new approach however this feels a bit too simplistic for example what would happen if you replaced the simple 2hiddenlayer nn with a much more sophisticated network for each classifier but still had a simple vae to generate samples the combination is no longer likely to be this easy but it would likely work more accurately than anything shown in table 3 novelty this reviewer feels that augmenting a 2hiddenlayer vae with a softmax classification layer does not seem to be a very significant new contribution by itself the fact that it is being motivated for the specific problem of reducing catastrophic forgetting during lifelong learning is the main novelty here but the relative amount of novelty might be somewhat limited when viewed from this perspective docsepthis paper points out an important issue in current continual learning literature due to the different settings and different evaluation protocols of each method comparisons between methods are usually not
fair and lead to distinct conclusions the paper is in general easy to understand except a few drawbacks listed in the cons pros 1 this paper investigates an important problem aka how do the methods compare to each other with the same evaluation protocol 2 experiments are performed on the previous methods which could be used as a baseline for future works in this field 3 proposes to combine a discriminative model with a generative model to save computation when using a generative model to store rehearsal examples cons 1 details of each experiment are missing different methods are evaluated under the incremental task learning incremental domain learning and incremental class learning settings however to my knowledge some of the methods will not work under all of the three settings as the authors also suggest xdg only works with task id however i think there are a few more for example the lwf method has multiple sets of output neurons which implicitly assumes the task id is known it is not described in the paper how to evaluate it under incremental domain learning aka how to decide which set of outputs to use if the task id is not available during testing another example the results in tables 3 and 4 indicate that ewc with task id is better than without however original ewc does not take the task id during testing it is not described how to introduce dependency on the task id for ewc 2 using the term feedback connection is misleading to the reader since the described method is just using an encoderdecoder structure in my opinion this is different from a feedback connection in which a higher layer is an input for lower layers autoencoder or encoderdecoder structure is more appropriate 3 there is some contribution in the rtf part namely the saved computation compared to dgr however subjectively i think this contribution is not very significant the same thing can be achieved with dgr by sharing the network between the discriminative model and the discriminator in the gan in my opinion this is more a design bonus in using generative replay than a major methodology innovation conclusion the first part that compares different methods is worth publishing given more details are provided im more than happy to give a higher score if the authors are able to provide more details and the details are reasonable
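to ground the discussion of the proposed unified model, here is a minimal pytorch sketch of a symmetric vae whose top encoder layer also feeds a softmax classification head, so that a single network both classifies and generates replay samples; the layer sizes and the soft-label replay helper are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ReplayThroughFeedback(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.cls = nn.Linear(h_dim, n_classes)  # softmax head on the final encoder layer
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.cls(h), self.dec(z), mu, logvar

    @torch.no_grad()
    def replay(self, n):
        # sample imaginary inputs and soft labels for rehearsal on later tasks
        z = torch.randn(n, self.mu.out_features)
        x = self.dec(z)
        return x, torch.softmax(self.cls(self.enc(x)), dim=-1)
```

during training on a new task, minibatches of real data would be mixed with samples from replay(), with a distillation loss on the returned soft labels; this is how the integrated model would replace the separate generator of dgr+distill.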
### Summary:
|
the authors have proposed 3 continual learning variants which are all based on mnist and which vary in terms of whether task ids are given and what the classification task is and they have proposed a method which incorporates a symmetric vae for generative replay with a class discriminator the proposed method does work well on the continual learning scenarios and the incorporation of the generative model with the classifier is more efficient than keeping them separate the discussion of the different cl scenarios and of related work is nice to read however the authors imply that these scenarios cover the space of important cl variants yet they do not consider many other settings such as when tasks continually change rather than having sharp boundaries the authors have also only focused on the catastrophic forgetting aspect of continual learning without considering scenarios where eg strong forward transfer or backwards transfer is very important regarding the proposed architecture that combines a vae with a softmax classifier for efficiency the reviewers all felt that this was not novel enough to recommend publication
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper describes the problem of discovering and acting on selfexplaining deviations seds when two or more agents are cooperating in a turntaking environment such that one agent can deviate from its expected behavior in order to signal to another agent that it has private information indicating that a higher utility situation can be achieved if the other agent also deviates from its expected behavior the factored observation stochastic game is modeled as a public pomdp to plan behaviors for all agents jointly two flavors of an algorithm called improvised are provided to enable the signaling agent to discover such opportunities and enable the signaled agent to know how to respond to achieve the higher utility the approach is evaluated in the context of 1 a novel trampoline tiger problem demonstrating that even in simple scenarios stateoftheart deep marl algorithms like mappo and qmix cannot learn to find and exploit seds and 2 the card game hanabi demonstrating that improvised can find seds called finesse plays commonly used by human players but again not learned by deep marl approaches and rarely even by a learned model that is trained to replicate human play from observed games overall the research is interesting and is relevant to the multiagent planning and reinforcement learning communities at neurips the problem and solution are novel as far as im aware and are also relevant to humanagent connections in ai the paper is relatively clear and easy to follow given the lack of theoretical analysis i would have maybe liked to see a little more evaluation on the empirical side i appreciated the use of two domains one toy problem for illustrating the challenge for stateoftheart rl methods as well as a more challenging game playable by humans but the paper generally lacked any statistical significance testing i suspect youll find improved performance in terms of using finesse but it is less clear about the difference in overall game scores i would have especially appreciated more discussion of the other types of seds found in hanabi and whether those are also categories of moves played by humans or if the algorithm discovered novel strategies that can push the boundary of human play similar to innovations in backgammon and go by ai agents throughout the years instead the paper mentions it can be either that improvised finds no beneficial deviations or it finds better nonfinesse seds i suspect the latter happened at least some of the time given the overall boost in game score compared to the blueprint strategies minor notes line 157 where the bobs should be where bobs line 366 decpomds should be decpomdps the limitations of the approach are adequately described docsepthis paper focuses on a specific subclass of coordination problems selfexplaining deviations seds seds refer to actions that deviate from reasonable behavior but with the intention of communicating with other agents this paper first motivates seds with a vivid realworld example and afterward gives a formal definition of them then they propose a novel algorithm to perform seds their further evaluations on toy games and hanabi show the strengths of the proposed algorithms strengths 1 importance of the problem the problem this paper considers is important humans have the ability to coordinate with others using minimal explicit agreement and only relying on information from normal or abnormal actions this paper formalizes the definition of such behavior as selfexplaining deviations and steps forward in this interesting topic 2 motivation this
paper is wellmotivated seds are very common behaviors of humankind but are rare for ai explicitly incorporating seds into other baselines might further improve performance 3 the novelty of the proposed method the proposed method is simple but novel 4 clarity this paper is generally wellwritten and easy to follow weaknesses 1 the soundness of the proposed method the reviewer thinks the assumptions might be too restrictive in particular assumption 2 what if we remove assumption 2 how likely is it that a game in real life meets this assumption 2 quality and soundness of empirical evaluation 1 the provided results are sound but the evaluations are conducted only on two games including a toy game can the authors show more results on other games or more interpretations of existing results it is okay if the rebuttal time is not enough for the authors to do extra evaluations 2 it seems that the algorithms and evaluations are based on two cooperative players is it possible to extend this idea to more players the authors have described enough limitations of their algorithms and completely solving these limitations might be enough for a new paper docsepthis paper introduces the idea of a selfexplaining deviation sed where an agent does an action that is seemingly irrational with the explicit intention of getting other agents to realize that the privately observed state is not what the others think it is and make them react accordingly the authors provide a fairly generic definition of an sed and an algorithm for recognizing such situations and altering the policy accordingly they provide an evaluation of their approach in both a simple toy problem and the game of hanabi where this kind of action is common among human players and not produced by previous computational approaches originality im not aware of work conceptualizing anything similar to seds in either pomdpadjacent systems or earlier approaches to recursive modeling of the actions of other agents clarity in general the paper is written clearly with good examples most of the mathematical content is also fine although i find lines 43-44 the most confusing part of the paper with few examples and given that the rest is fairly straightforward presumably the appendix information makes this more clear quality i dont find any obvious technical errors in the main body of the paper there could be a greater discussion in the main body of the computational issues of the two versions of the improvised algorithm and how they would relate to general scalability of the approach this is pointed out in the conclusions but nothing appears in the main body significance this is probably the biggest issue yes the authors give a horrifyingly realistic but extremely specific human example but otherwise confine the analysis to the relatively cerebral and abstract hanabi there is a bit of an odd focus in the current decpomdp/marl literature on finding some environment that eliminates or limits explicit communication even though many practical applications dont have that as a limitation authors address this directly no issues docsep summary in a multiagent partiallyobservable context human beings are able to coordinate using very little information at their disposal eg minimal explicit agreement and conventions and reason about new information to extend previous conventions allowing us to perform socalled finesse plays in the card game hanabi this paper proposes a formal definition for such behaviours under the name selfexplaining deviations sed within the formalism of factored
observation stochastic games fosgs after applying a public pomdp transformation that maps cooperative multiagent settings to equivalent singleagent pomdps two assumptions are made though 1 sequential and publicly observable actions 2 only considering seds where the first players private information is not necessary for coordination in short under these assumptions seds take place when despite all players expecting a common knowledge blueprint bp policy possibly a nash equilibrium an initial players move deviates from the bp and possibly from optimality but actually provides meaningful information that can be exploited by the subsequent players to choose moves that improve the performance locally achieving a greater payoff the paper highlights the mechanics of seds thus defined on a toy example and proceeds to devise an optimization problem whose solving is tantamount to performing seds an approximated search algorithm is thus devised to solve the optimization problem in a tractable fashion at every players turn it is entitled improvised two variants can be formulated though improvised-e and improvised-p the former is based on both players choosing their moves by maximizing the expected return in comparison to the bps one while the latter enforces the subsequent player to choose its move by maximising the expected improvement over the bps action note that the proposed algorithm relies on a blueprint policy being available as it is a search algorithm that is added in a plugandplay fashion after deployment finally the paper investigates the proposed algorithms ability to perform a subset of finesse plays in hanabi note that not all seds are finesse plays comparisons are made against 1 a behaviour cloning policy trained on human moves and 2 sparta another search algorithm applied to decpomdps the bp policy used for that evaluation is the behaviour cloning policy the evaluation is performed in terms of the number of specific finesse plays performed over a set of specific finesseable situations results show that the proposed methods largely outperform both methods in terms of the number of finesse plays actually performed in finesseable situations also using this proposed finesse benchmark the paper investigates stateoftheart marl agents ability to perform those specific finesse plays while all methods reach the same amount of finesseable situations 12k situations over 1k games sad otherplay and obl are only completing 43% 33% and 32% respectively for comparison the behaviour cloning blueprint reaches 85% postrebuttal and initial acreviewer discussion update following the authors rebuttal the authorreviewer discussion phases and the acreviewer initial discussion phase i mean to emphasise the following and detail how i have increased my marks 1 following the rebuttal i acknowledged that the mathematical soundness/clarity issue that i was raising is very subjective ie more a matter of taste than a matter of quality indeed the maths laid out in the paper are of a fairly good quality i just would have hoped for better but i recognised that it is not necessary given that the aim is primarily to communicate efficiently the concepts that are presented in the paper which i find they do perfectly 2 the paper does not claim reaching a new sota result on the hanabi benchmark yet my statistical significance concerns were regarding the results on the full game experiment benchmark which i find misguided in the end 3 rather the paper does claim establishing a sota result on the ability to perform finesse plays on the
hanabi benchmark and the results on that end are statistically significant without needing to run any test given that they show a 5fold increase compared to the relevant baseline of sparta and the sanitycheck baseline of using solely the behavior cloning blueprint thus in light of this i think that i mean to champion this paper for providing an excellent theoretical contribution in formally framing what finesse plays are and statistically significant numerical evidence in showing that their proposed algorithm raises the sota in finesseexecution testing i am therefore updating the marks in my review raising the mark from 6 to 8 raising the contribution mark from 3 to 4 raising the presentation mark from 2 to 3 raising the soundness mark from 2 to 4 originality to my knowledge the paper is the first to i formalise the infamous finesse plays ii propose an evaluation method for them in the game of hanabi iii propose to build an algorithm to maximise the likelihood of those finesse plays quality the quality of the paper is fair the maths seem marginally sound on the surface but i will highlight claritysoundness issues below reproducibility is maybe a bit lacking due to the lack of details on the limitations and dependencies of the algorithm proposed eg the function f i find a strong issue with the strength of the numerical evidence line 361 significance in order to assert the actual significance of these results despite the sample size being slightly smaller than in previous papers eg 10k games in otherplay cf figure 4 q1 could you perform a statistical analysis of each distribution using eg kolmogorovsmirnov tests [1] please a short illustrative sketch of this test is included at the end of this review this would strongly strengthen my appreciation of the work and contribution here and i would update my mark accordingly similarly considering the comparison with sparta as a baseline i think it would be important to provide rollouts and time complexity statistics please in order to truly assert whether your proposed algorithms win on most norms or whether there are tradeoffs to consider [1] scipys twosample kolmogorovsmirnov test https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ks_2samp.html clarity soundness i have found the paper to be mainly wellwritten and to fairly substantiate its claims but there are a few minor and major issues i would hope to see addressed line 60 need to specify n ∈ ℕ as the number of playersagents line 153 soundness clarity timestep in fosg vs in singleagent public pomdp given the public pomdp transformation arent the players playing simultaneously for the joint decision rule to be defined i fail to understand what information the joint decision rule is actually considering indeed the notation defined around line 84 highlights that the joint decision rule is only using as input the set of information state sets over all players but the equations 4 and 6 that define how bob plays highlight a dependence on alices action a1 which is not part of any information state set yet since it corresponds to a constituent of the next singleagent public pomdp timestep no or alternatively could you clarify when the information state sets are being updated i think it brings us back to how the history is being defined as its reliance on the time step t in the fosg makes it ambiguous between each players turn if they play sequentially ie alice plays at time step t and bob at time step t+1 in the fosg but the joint decision rule in the equivalent singleagent public pomdp lacks clarification of when the information state sets are computed maybe q2 if
you understand the soundness and clarity issue that i raise here then may i propose the following 1 for clarification it might be better to state assumption 1 with respect to the fosg and maybe place it in the text before introducing the public pomdp transformation 2 for soundness it might be better to rewrite the joint decision rule as a ≡ γ(s^1_t, s^2_{t+1}, ..., s^n_{t+n}) line 177 clarity function f q3 could you clarify how f is computed is it shared between players i remember line 173 stating that it is determined by the algorithm designer but i am not sure i understand the meaning does it mean that it is like a hyperparameter that needs to be provided for the algorithm to work if so i would flag it as a critical limitation that needs to be discussed and better detailed please i would welcome a very detailed table maybe showing the mapping that this function performs if it is relevant equation 1 clarity soundness operator ∘ i think that leaving the ∘ operator undefined is hurting the clarity and soundness of the claim indeed its usage on γ1 is ambiguous as γ1 outputs an action a ∈ 𝒜 which is not from the input space of γ2 i think this boils down to the lack of rigour when stating the dependencies of the history on the timestep t in the fosg eg line 67 rendering the history variable as h_t given the dependence on the time step t in its constituents and therefore the information state set would need such an addition too line 234237 clarity optimality argument lacking substance i am afraid i do not understand the optimality argument that is made in this paragraph could you clarify further please as it is fairly central to the contribution right q4 could a proof of the optimality be in order or maybe just the use of some formalism to ground the text to the equations discussed so far please line 244245 clarity jensen inequality alices conditional behaviour i am afraid i do not understand the causality link made by this piece of text nor how jensens inequality illustrates alices conditional behaviour it might be obvious but i unfortunately cannot see it in the present form thus q5 may i request a clarification please line 245 clarity shouldnt it be an argmax instead of max figure 3 result transparency i would suggest the authors display their algorithms results in figure 3 as well as the training curve of the blueprint behavior cloning alone as a baseline maybe in order to be complete and transparent table 1b clarity and soundness major issuemisunderstanding i fail to make sense of the blueprint policy suddenly performing 0 finesse plays out of finesseable situations while it is shown to perform 85% of those as presented in table 1a q6 or could it be that i am making a confusion between situations where a finesse play is performed and finessecomplete situations dont they mean the same could it be that they are different depending on which players action is being evaluated ie the player that initiates the finesse play and performs an sed being opposed to the player who recognises the sed and acts accordingly in any case i would recommend the authors to try to better organise section 54 with subtitles or bold summarising statements at the start of paragraphs maybe it was difficult to read and make sense of i must confess maybe all the information is too densely packed in it might need more breathing room for some examples eg an example of the typical play being evaluated with a turnbyturn analysis also the more i read the paper the more i am pondering whether evaluation for
finesse in the general sense should be addressing each players turn separately indeed i am assuming it is one thing a to realise that a finesse coordination play can be performed in a given situation and then be the one to initiate it player 1 but it is an entirely different thing b to be able to understand the resulting sed that occurred and choose the right move to maximise payoff player 2 no i am still feeling confused by the function f which is involved in b for more than half of the task i feel the need for it to be better detailed in that regard too please significance most of the results provided may be more compelling if the sample size were similar to previous papers that being said the theoretical significance of the paper is high but i would really like to see the significance of its results be better asserted please consider addressing the following points line 353 missing results for humancooperation significance what about improvisedp since it is more valuable for cooperation with humans could you report the number and percentages of finesseable finessecomplete situations encountered during these 100 games please like in table 1a i think that it would be important to perform this evaluation on a greater number of games similarly to previous works in order to assert the statistical significance of the results line 356 selfplay vs crossplay zeroshot coordination as only selfplay scores are presented it is unclear whether the blueprint agents have developed idiosyncratic policies or not especially if the behavioral cloning training data only contains one pair of players for instance thus to provide more transparency and be more in touch with the marl fields dilemma at large of which zeroshot coordination is a major problem especially on the hanabi benchmark could you present crossplay performance results with differently seeded blueprint agents like in the otherplay paper for instance please unless maybe it is irrelevant as the algorithm you propose manually computes its own belief at every move and the monte carlo rollouts are robust to noise or something of the kind i understand that this is going the extra step and i would not mind if the authors were not able to address it line 361 significance statistical significance tests see section quality above i believe the authors have mainly adequately addressed the limitations and potential negative societal impact nevertheless i would welcome the paper to discuss further the possible limitations of the design such as i the reliance on the shared function f and the fact that it is chosen by the user or ii the reliance on a blueprint policy eg does it have to be the same for all players and to what extent please
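for concreteness a minimal sketch of the twosample test requested in q1 is given below assuming the pergame scores of the compared agents are available as 1d arrays the arrays here are synthetic placeholders rather than actual results from the paper

```python
# minimal sketch of the suggested two-sample kolmogorov-smirnov test;
# the score arrays are synthetic stand-ins for per-game results
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
scores_method = rng.normal(23.5, 1.2, size=1000)    # placeholder: proposed algorithm
scores_baseline = rng.normal(23.1, 1.3, size=1000)  # placeholder: blueprint/sparta

stat, p_value = ks_2samp(scores_method, scores_baseline)
print(f"ks statistic = {stat:.4f}, p = {p_value:.4g}")
# a small p-value (e.g. < 0.05) would indicate that the two score
# distributions differ, supporting a claim of statistical significance
```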
### Summary:
the reviewers carefully analyzed this work and agreed that the topics investigated in this paper are important and relevant to the field all reviewers generally shared a positive impression of this work one reviewer mentioned as possible limitations the lack of theoretical analyses and more empirical evaluation they also initially pointed out that the paper lacked statistical significance testing after the authors rebuttal however the reviewer said they better understood some aspects of the paper that they appreciated the additional statistical testing analyses and suggested adding more discussion on such points another reviewer agreed that this paper investigates an important problem and that it formalizes many relevant related concepts as limitations the reviewer said that the method requires assumptions that may be too restrictive eg assumption 2 they were also concerned since evaluations were conducted on only two games one of which is a toy domain nevertheless the reviewer was satisfied with the authors responses a third reviewer acknowledged that this work is novel that the paper is wellwritten and that its mathematical contents are well presented after reading the authors rebuttal the reviewer was still concerned with the fact that the computational costs were discussed only for specific runs and are a significant limitation as the authors noted this the reviewer believes suggests that the method might have somewhat limited practical applicability a fourth reviewer brought up many technical questions as part of their review which at the time they thought had not been adequately addressed in the authors rebuttal however the reviewer later said that some of their criticisms regarding the formalization proposed in the paper were not a sufficient reason for rejection they recognized that resolving such issues was not necessary given that this work aims primarily to communicate the main highlevel ideas introduced by the authors as final feedback to the authors this reviewer suggested that the authors could provide an opensource implementation of the finesseexecution testing benchmark so that subsequent works can build upon this new metric and evaluate their algorithms against it overall thus it is clear that most reviewers were positively impressed with the quality of this work and look forward to an updated version of the paper that addresses the suggestions mentioned in their reviews and during the discussion phase
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors introduce a new vqgan model with three improvements spatial normalization of the quantized vectors maskgit for quicker autoregressive reconstruction and multichannel feature quantization they show better reconstruction performance with similar code size to other models and methods this is obviously of interest to the neurips community and the results are impressive the implication that normalizing the quantized code vectors adds substantial improvement is interesting and definitely opens up interesting areas to follow up on the use of maskgit seems to me to be not a large contribution of this work and while interesting can be downplayed a bit in comparison to the normalization and use of multichannel quantization the weaknesses are that the paper is not always careful in its comparison to other methods figure 3 doesnt show the code size or latent size in comparison between several different methods so it is fairly difficult to compare across methods more careful use of common tools from the compression literature on ratedistortion would help clarify some of the comparisons across methods the authors address stock concerns about implications but i believe that larger implications are raised by the image of the child in figure 4 4th column the childs face has changed significantly from the original but unlike traditional encoding techniques does not betray any indication to the downstream user that the image has lost information or is in any sense uncertain traditional artifacts in simpler compression techniques may look bad to the eye but they at least faithfully convey to the user when information has been lost i think not enough concern is paid in this paper and in this literature to technologies that produce confident and clear images that are not what was captured and encoded on the other side and that may fool the end user otherwise docsepthe paper presents a new vqbased image synthesis method based on maskgit the paper proposes spatially conditional normalization and uses a multichannel representation to improve the reconstructed image quality of the tokenization stage the proposed spatially conditional normalization modulates the quantized vectors for a better reconstruction performance by inserting spatially variant information into the vq decoder the multichannel representation subdivides the encoded continuous latent along the channel dimension into multiple chunks and quantizes them with a shared codebook which further improves the reconstruction performance by increasing the latent size for the generation stage the paper modifies maskgit to sample the multichannel latent experimental results on two benchmark datasets show the proposed image synthesis method is efficient and effective for generating diverse and highquality images strengths the paper is technically sound the paper is well structured the citations are extensive weaknesses the novelty is somewhat limited the idea to improve the reconstruction ability of the tokenization stage is original and interesting however the proposed method heavily relies on existing techniques such as the multichannel representation and maskgit the performance on image synthesis is not good enough as shown in table 3 the proposed movqgan performs worse than maskgit for classconditional image generation on imagenet in terms of complexity and quality metrics it weakens the contribution since the proposed movqgan is based on maskgit the evaluation is somewhat limited the main contribution of the paper is to incorporate the spatially conditional
normalization to modulate the quantized vectors figure 6a has shown that spatially conditional normalization can improve the reconstruction performance rfid especially for fourier features however improving the reconstruction performance rfid does not necessarily improve the generation performance fid as shown in figure 6c in my opinion the experimental evaluation should report fid with and without the proposed spatially conditional normalization the paper discusses an interesting limitation of the proposed method the model sometimes generates images with a highfrequency appearance without the structure information which may be attributed to the generation of the multichannel representation i think the choice of mask scheduling function in maskgit may not be optimal for the multichannel representation maybe the multichannels of the same location should be masked and generated together and the sampling can rely on the probability product of the multichannels docsepthe paper discusses a method for generating and reconstructing images using quantized representations the key difference wrt prior work is that they modulate the quantized representations that is they propose to use adainlike modulation of the quantized features they claim that without this the results are often repetitive a further smaller contribution is to use several channels of quantized features ie when the image is encoded they split features into 4 blocks along the channel dimension and quantize them with the same dictionary they show better reconstruction results and comparable or worse results on image synthesis the paper is well written and addresses a challenging very competitive problem the key contribution of modulating quantized vectors is interesting although perhaps not sufficient in its own right improving stage1 training is a very important problem which can potentially improve many applications including image synthesis inpainting and outpainting texttoimage and video synthesis the paper however needs to be improved to be able to claim that their contributions lead to these improvements motivation the paper is motivated by the claim that quantization results in repetitive structures i wonder if the authors could support this statement somehow since its not easily observable in images if thats the case furthermore if thats one of the motivations it should be supported by numerical experiments especially to show that the contributions improve the situation its not totally clear how to do this but a method based on image autocorrelation might do the trick the only support for this claim in the paper is fig 3 however i cannot say it shows any repetitive content besides this image can be repetitive finally even if there is repetitive content is it due to exactly the same tokens a visualization of token indices will help here or because the decoder collapses currently the support behind the claim is not sufficient methodology from a technical standpoint modulation of the tokens is a simple extension of the original quantization scheme the proposed multichannel quantization is interesting too but i cannot say that both of these contributions combined together bring the paper above the bar significance improving only the first stage makes the paper less interesting why did previous papers gain attention i believe because they had good encoderdecoder frameworks which offered rich latent spaces these latent spaces can be used to solve a
variety of generative tasks such as reconstruction generation completion outpainting texttoimage and even texttovideo if the current paper improves stage1 it can perhaps also show better results at stage2 in terms of stage2 the paper reports image synthesis comparisons in which the numbers are either on par with or worse than the stateoftheart the authors say that their model is smaller but the model size is not the main contribution of the work i believe there are many ways of making the model even smaller if you make the model larger will it be better in the supplement they show classtoimage generation results according to table 3 there is no improvement over maskgit in terms of all of the scores maskgit is an even smaller model so in terms of the significance of the proposed contributions its hard to tell whether stage1 improvements lead to improvements in downstream tasks the numbers show the opposite an intuition could be that the proposed framework reconstructs images better than others because it provides a larger more expressive latent space at the cost of poorer structure somehow the authors admit this in the limitations paragraph it would be great if the paper could prove otherwise it would be great to better understand what the proposed stage1 can do and what it cannot do like can it do outpainting superresolution and other downstream applications reconstruction per se is a less interesting application
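to make the multichannel quantization discussed in these reviews concrete here is a rough pytorchstyle sketch of splitting the latent channels into chunks that share one codebook it is a simplified reconstruction from the reviews description not the authors code and it omits the straightthrough estimator the commitment losses and the spatially conditional modulation

```python
# simplified sketch of multichannel vector quantization with a shared
# codebook, reconstructed from the description above (not the authors' code)
import torch
import torch.nn as nn

class MultiChannelVQ(nn.Module):
    def __init__(self, latent_dim=256, num_chunks=4, codebook_size=1024):
        super().__init__()
        assert latent_dim % num_chunks == 0
        self.num_chunks = num_chunks
        self.chunk_dim = latent_dim // num_chunks
        # a single codebook shared by all channel chunks
        self.codebook = nn.Embedding(codebook_size, self.chunk_dim)

    def forward(self, z):
        # z: (B, C, H, W) continuous encoder output
        b, c, h, w = z.shape
        # split the channel dimension into num_chunks sub-vectors per location
        z = z.view(b, self.num_chunks, self.chunk_dim, h, w)
        z = z.permute(0, 1, 3, 4, 2).reshape(-1, self.chunk_dim)
        # nearest-codeword lookup under euclidean distance
        dists = torch.cdist(z, self.codebook.weight)
        idx = dists.argmin(dim=1)
        z_q = self.codebook(idx)
        # restore the (B, C, H, W) layout; each location now carries several codes
        z_q = z_q.view(b, self.num_chunks, h, w, self.chunk_dim)
        z_q = z_q.permute(0, 1, 4, 2, 3).reshape(b, c, h, w)
        return z_q, idx.view(b, self.num_chunks, h, w)
```

a usage example such as z_q, codes = MultiChannelVQ()(torch.randn(2, 256, 16, 16)) yields 4 code indices per spatial location instead of 1 which illustrates how the effective latent size grows without enlarging the codebook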
### Summary:
the three reviewers had significantly diverging final opinions strong accept borderline accept and weak reject the authors addressed many of the concerns in their rebuttal i read the paper carefully and i agree with the concerns from one reviewer about why the improvements in stage1 do not lead to significant improvements in stage2 i think this concern needs to be properly addressed because otherwise it is unclear what the benefit of this approach would be for real applications while previous work has shown that improved stage1 performance leads to improved stage2 performance why was it not replicated in this situation i also found the analysis of why the spatially conditioned normalization improves reconstruction to be lacking if the jagged structures are addressed by this work then understanding why with simple examples would have shed more insight into the technical contribution however in summary i think this paper is slightly above the acceptance bar and addressing the above concerns is recommended for the final version
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this work aims at accelerating pretraining by leveraging the contextual embeddings for the rare words it is argued that the inadequate training of rare words slows down the pretraining the authors then proposed to keep a moving average of the contextual embeddings for the rare words and use it to augment the input embeddings of the rare words this technique is applied to bert and electra and is shown to improve over the baseline strength 1 this work proposes a simple approach to accelerate the pretraining with only a small memory and compute cost during training the empirical study on bert and electra supports the claimed improvements 2 it provides an interesting view towards the rare words problem that a rare word not only has worse embeddings but also slows down training of the whole model weakness 1 it is argued that the proposed approach helps with the rare words problem but it will help to add more experiments to see how much more benefit we can get from it for example maybe the use of contextual embeddings is actually helpful for all the words or subwords instead of just the rare words specifically regarding we define keys as those words with occurrences between 100 and 500 in the data corpus how is the range 100 to 500 chosen have you tried it on words appearing fewer than 100 or more than 500 times as mentioned above it would be interesting to see if this approach can be applied to more words or subwords to get even more gains 2 some design choices need more details or explanations for example why does the notedictionary use words instead of subwords as keys it seems using subwords could cover a broader range of sentences with a notedictionary of the same size it will also be easier to use during pretraining for example you could use the contextual embeddings to improve the word embeddings of the subwords directly to avoid having an extra notedictionary another example is how the window size is chosen since it seems an important new hyperparameter docsepthe paper proposes an external memory architecture when encountering a rare word with a frequency between 100 and 500 the method will store the average contextualized word embedding of nearby words into a dictionary next time it encounters the same rare word it will retrieve the average embedding and input it into the bert encoder the experiment results show that given the same number of training steps adding the external memory improves the mlm loss and significantly improves the results on the rte recognizing textual entailment dataset which leads to a slightly better glue score the experiment also shows that keeping the external memory during the finetuning stage slightly degrades the performance pros 1 the method is simple and easy to understand 2 the experimental results on glue are quite surprising it shows that we should take notes when training bert but throw away the note dictionary when finetuning the model cons 1 missing an important citation 1 2 the paper does not well explain the surprising results on glue this is a crucial weakness the comparison of the mlm loss is not very fair because the proposed method has a large external memory the benefit of the proposed method relies on the improvement of the average glue score however table 2 shows that most of the improvement of glue actually comes from the improvement of a single dataset rte without understanding why it improves rte the readers do not know when they want to adopt the proposed method for their downstream applications clarity the text is fluent but the main story is not well
supported by the experiment results the story is that using an external dictionary could accelerate the training but the main experimental finding actually says that using an external dictionary can very significantly improve the results on the rte dataset while performing similarly on other datasets in glue originality there has been some effort of using an external dictionary to help the training of bert 1 but i am not aware of existing papers that apply the dictionary to only the rare words i also do not know any other work that shows the external dictionary could improve the glue scores significance of this work if the authors could well explain the experimental results on glue and justify the explanation using some analysis this might lead to more important findings figure 3c seems to contradict tables 1 and 2 because in tables 1 and 2 the glue score of bert ours is 83.1 but all the points in the bert curve in figure 3c are below 83 usually when a study tries to sell its method as a way to accelerate the training it means the method reaches some performance faster but the method will converge to the same performance eventually however figure 3 does not show that they will converge to the same value so selling the method as a way to accelerate the training is weird furthermore i think the lower mlm loss is due to the extra parameters in the note dictionary rather than the note dictionary accelerating the training it is not surprising that taking notes for rare words could achieve lower lossperplexity because the note dictionary gives the extra memory capacity 1 it is also not surprising that it can achieve better performance on glue if using the note dictionary during the finetuning stage due to the extra parameters the really interesting results are that the authors report that the model could very significantly improve the rte task and mildly improve cola without using the note during the finetuning stage intuitively the proposed model stores lots of knowledge about the rare words into the note dictionary does the fact that the note is not needed in the finetuning stage imply that the knowledge about rare words is actually not needed does it mean the rte or cola do not contain many rare words or does it mean the rare words do not affect the decision of bert and electra in rte or cola is the reason for the improvement that we could store more interactions between popular words in the parameters of bert itself because the information of rare words has been stored in the note maybe you can test this by reporting the mlm loss on the sentences without any rare words if that is the case why do we only stably improve rte and cola if the authors can show the above hypothesis is true i think this is a significant contribution because that means this paper provides a way to control what the lm should learn when there is a mismatch between the mlm training corpus and downstream applications eg the mlm training corpus contains many rare words but we should ignore the rare words in the downstream applications this paper lacks a good explanation of the above weird result in my opinion the most valuable finding in this paper and lacks the analysis that supports the explanation the main paper says that taking notes improves the tasks with the small datasets the most the stsb 7k and mrpc 3.7k have smaller training datasets than cola 8.5k why can the results of stsb and mrpc not be stably improved if the authors really want to explain the performance improvement using the training dataset size the authors can just randomly
sample several small subsets of training data from each dataset and show that the glue score improves a lot in that setting in the appendix a4 the authors hypothesize that the small proportion of rare words in each dataset of glue from 0.47% to 2.31% might be the reason that we can ignore the note dictionary during the finetuning stage this also did not explain why most of the improvement of the glue score comes from rte moreover if the rare words are not important in the testing datasets why do we want to take notes in the first place i will vote for acceptance if the authors could answer these critical questions i raise above strongly minor 1 although the chance is not high i think it is possible that parts of the mlm improvement could be achieved by simply sampling the sentences containing the rare words more this is a minor concern if you do not have time to finish the experiments for this baseline you can choose not to do it or compare the results after training fewer steps 2 i guess the dictionary overhead is small but it should be measured and reported because you say the method accelerates the training 1 lample guillaume et al large memory layers with product keys advances in neural information processing systems 2019 docsepthis paper proposes taking notes on the fly a technique to improve the training efficiency of languagemodeling style pretraining it works by identifying rare words in the pretraining data and adding a notetaking component to the masked language model which augments these words with an extra note embedding at the input layer the note embedding is constructed from an exponential moving average of meanpooled contextualized representations of context windows in which that word was previously seen during training the notes are dropped in finetuning experiments find that this pretraining method improves finetuning results on english nlp tasks in the glue benchmark when used in the original bert pretraining setup in particular the model can achieve similar performance to the original bert model with less than 40% of the training steps and similarly for electra strengths this paper is clearly written the proposed technique is simple and the results seem strong it is laudable that the authors give experiments in the appendix to give a sense of hyperparameter sensitivity the paper has a strong backbone and it seems that the proposed technique or something similar may serve as the basis for solid future work weaknesses while the backbone of the paper is strong i think it could be improved in its head motivation and legs experimental studies first motivation while the framing around rare words with the covid19 example is interesting i think it has gaps the introduction argues that since covid19 is a rare word in the course of training the model may lack the necessary signal to predict the masked word lives but isnt this fact exactly what should lead the model to improve its embedding of covid19 because gradients flow into the embeddings both through the softmax layer and the input layer so while adding to the context may help the model get a foothold with more effective training signal for the masked token it seems to me that the note could also explain away the rare words embedding in the input layer reducing the learning signal on it if thats the case then to the extent that tnf works it would be by the tradeoff between improving the learning signal at the output layer for all words and in contextualization and degrading it at the input layer for rare words as a broader example see
https://openreview.net/pdf?id=3aoft6nwfej that paper argues for a masking scheme which eliminates easy shortcuts from the prediction problem to increase learning efficiency whereas this paper argues essentially the opposite namely that shortcuts must be added to hard cases in order to facilitate learning it seems that there may be a line to walk here between a task being too hard to learn from and too easy to be useful because its not clear where that line is i think its not enough to motivate tnf from only one direction it would be better to also have an explanation of why the notetaking approach does not also make things too easy its not obvious to me how to best make this argument though results from some of the ablations i will suggest below might help this brings me to my second point ablation experiments if the motivation is to improve the representations of rare words in the input then there are even simpler ways to do this experiments with simple baselines and ablations are important for figuring out why exactly tnf works first if the note is such a useful addition to the word embedding why not just use it to update the embeddings directly at that rate the method for constructing the note embeddings looks quite similar to word embedding training objectives like word2vec and glove this suggests a critical ablation initialize the word embeddings with word2vec glove or similar run over the wordpieces in the pretraining corpus weirdly i cant find an example of this in the literature it seems like an obvious thing to try i may have just missed it indeed it seems to me that the framing in the paper could just as easily motivate this much simpler technique than tnf if tnf outperforms the critical ablation that implies that its gains are coming from some of the other particulars of the technique such as 1 the extra degree of freedom provided by decoupling the note embeddings from the wordpiece embeddings or 2 the use of contextualized vectors for note embeddings rather than the noncontextualized ones in the word embedding objectives to investigate these issues i would suggest three more ancillary ablations on tnf directly update the rare words embedding with a version of eq 5 rather than keeping a separate note dictionary update the note embeddings via backprop instead of eq 5 this would amount to partially tying the input and output embeddings giving more freedom to the input layer which is partly whats happening in tnf pool over noncontextualized instead of contextualized representations in eq 4 finally to address the too easy vs too hard distinction two more ablations that might help would be instead of using an exponential moving average for the note embedding update just use the pooled context vectors from the last instance of the rare word ie set gamma to 1 in eq 5 instead of using an explicit note dictionary augment the input context with retrieved text containing the rare word see tekenriched representations https://arxiv.org/pdf/2004.12006.pdf for an example of this for consistency the exact lastseen context of the rare word could be used the first will help identify to what extent aggregating over many inputs to get a high quality representation is necessary for tnf this could then serve as a reference point for the second ablation which may help determine whether the fixed embedding size and pooling operation helps by creating a bottleneck for the retrieved information and preventing things from getting too easy although context window sizes might also be a confound here that could also be
controlled carefully all together i think these ablations would shed a lot of light on why tnf works and make this work much more useful to researchers who wish to build on it in the future however i know ive suggested a lot of crazy experiments here i would not expect all of this necessarily to be done and i leave it up to the discretion of the researchers which are most important i am also sure the authors could come up with better ablations than these as well but my sticking point is the first ablation initializing with noncontextualized embeddings which i think is critical and i think it behooves the authors to address some of the lingering questions including more written below even if not all of them recommendation unfortunately reject the technique is simple and the results seem good but the paper does not provide empiricallyjustified insight on why tnf works i think ablations and investigation into the why aspect is the most important part of this kind of model engineering research more comments questions i am left with some more questions about how tnf works how does the quality of the representations of rare words specifically compare in your approach does it improve the representations of common words and contextualization at the expense of rare words while it may be tricky to try to directly assess embedding or contextualization quality breaking down the mlm perplexities by word frequency or presence of rare words in the context after removing the note dictionary might be informative i admit this might also be tricky because i imagine the model would have to be finetuned without the notes for a bit before doing such an experiment but any insight into this issue would be appreciated if this method indeed works by more narrowly refocusing the training signal on the masked token rather than the context tokens then would you be able to further increase the learning efficiency by oversampling rare words when determining the masks in training i am not aware of anyone showing such a thing to work though i might have missed it just a thought while the pretraining corpus is huge 100 occurrences still seems like a pretty high threshold for rare words given the justification provided in the paper questions what do the even rarer words look like are they just a source of noise eg because they are components of names or dont have clear and consistent semantic content what proportion of contexts contain words appearing less than 100 times it seems that the 20% figure in the paper is meant to apply to your definition of rare words which appear between 100 and 500 times what is the word vocabulary size ie how many words appear more than 500 times and less than 100 did you do any preliminary experiments with other thresholds would you expect this to work with more common words as well why or why not this may also relate to the too easy vs too hard issue on pretraining efficiency results i think figs 3a and 3b need to be explicitly qualified a little better afaict having lower loss here doesnt necessarily mean the model modulo the note dictionary is learning better because it sees the notes in the input so were looking at the loss in a different setting than we intend to finetune in its still interesting to see but i think its best to include an explicit caveat what about training the models for more steps will the trend hold and performance improve overall or will the gains eventually level off as the representations of rare words get better especially for pretrained models since they are used as the
starting point for many models it is often worthwhile to train them longer as in the roberta paper so its important to understand the usefulness of this method in that regime typos etc p3 neglectable negligible p3 representation representations in the bert acronym p6 sec 41 after mnli there is a space missing after the period p6 fullsentences would look better and be consistent with liu et al if it were in small caps please cite the individual dataset creators for the datasets in the glue benchmark update upped score from 4 to 5 see comment thread update again score further updated from 5 to 6 with glove context ablations and perplexity results on sentences with rare words docsepsummary this paper proposes a method for improving pretraining convergence speed by augmenting the representations of rare words with the meanpooled representations from their previouslyoccurring contexts notes stored in a note dictionary the method considerably speeds up the convergence of pretraining bert and electra and the authors furthermore show that these models perform better when finetuning on downstream glue tasks likely because the models were undertrained to begin with so converging faster alleviates this issue strengths the method is surprisingly simple and empirically quite effective its especially interesting to see that bert tnf at 400k steps has better glue performance than bert at 1m steps weaknesses the paper does not do a convincing job of arguing that the reasons for the faster convergence come from better modeling of rare words im still not entirely sure why this works so well do these rare words commonly show up in glue and thus the method is helping because your representations of rare words are better it seems like tnf is actually improving the representations of morecommon words as well recommendation 7 despite the lack of clarity around why exactly this method works so well the method seems empirically useful and straightforward to apply i expect that this will be useful to practitioners interested in applying bert and similar pretraining strategies to new corpora and domains questions its a bit unclear to me that notetaking itself is required for this to work well in the covid example presented in the introduction if you see the sentence the covid19 pandemic is an ongoing global crisis isnt it possible that mlm itself is sufficient to associate the embedding of covid19 with pandemic and global crisis do you have further evidence to show that notetaking is actually improving the representations of rare words besides glue score which might not be very indicative since the rare words might not show up in glue the construction of the note dictionary does 3.47b refer to the number of types or the number of tokens why not define keys with frequencies less than 100 in the dictionary as well since you only use types that show up between 100 to 500 times it means that to reach the same performance tnf can save 60% of pretraining time if models are trained on 16 nvidia tesla v100 gpus berttnf can reach berts final performance within 2 days while it takes bert 5.7 days is the 2 days vs 5.7 days an actual wallclock measurement or are you hypothesizing this based off of the loss curves missing erroneous citations it is wellknown that in a natural language data corpus words follow a heavytail distribution larson 2010 this is morecommonly known in the nlp community as zipfs law better cites would be zipf g the psychobiology of language london routledge 1936 zipf g human behavior and the principle of least effort new
york addisonwesley 1949 miscellaneous comments moreover completely removing those sentences with rare words is not an applicable choice either since it will significantly reduce the size of the training data and hurt the final model performance i agree that its a bad idea to remove sentences with rare words but i disagree that the issue is reducing the size of the data you can always go collect more data and filter it to not include rare words its more likely that the issue is that removing sentences with rare words would reduce the diversity of the pretraining data which would be harmful our method to solve this problem is inspired by how humans manage information i think the connection to human notetaking is tenuous at best and would omit it the motivation remains clear without this
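as a concrete reading of the mechanism these reviews describe here is a rough sketch of the notedictionary update eqs 4 and 5 as summarized above the class name the halfwindow size and the defaults are hypothetical choices not the authors implementation

```python
import torch

class NoteDictionary:
    # rough sketch of tnf's note dictionary as the reviews describe it;
    # names and defaults here are illustrative assumptions
    def __init__(self, rare_word_ids, dim, gamma=0.9, half_window=2):
        self.notes = {w: torch.zeros(dim) for w in rare_word_ids}
        self.gamma = gamma
        self.half_window = half_window

    def update(self, token_ids, contextual_states):
        # contextual_states: (seq_len, dim) encoder outputs for one sentence
        for pos, tok in enumerate(token_ids):
            if tok in self.notes:
                lo = max(0, pos - self.half_window)
                hi = min(len(token_ids), pos + self.half_window + 1)
                # eq. 4: mean-pool the contextualized vectors in the window
                pooled = contextual_states[lo:hi].detach().mean(dim=0)
                # eq. 5: exponential moving average over past occurrences
                self.notes[tok] = self.gamma * self.notes[tok] + (1 - self.gamma) * pooled

    def lookup(self, tok):
        # during pretraining the returned note is added (e.g. with a small
        # weight) to the rare word's input embedding; notes are dropped
        # entirely at finetuning time
        return self.notes.get(tok)
```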
### Summary:
the authors propose an approach for pretraining that involves taking notes on the fly for rare words the paper stirred a lively discussion on the reasons for the reported results which the authors followed up with new experiments and findings that convinced the reviewers that their approach is indeed valid and interesting thus i am recommending acceptance
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposed an adversarial extreme value analysis aeva framework to detect backdoors in blackbox neural networks specifically they first obtained a new upper bound backdoor detection formulation by using convex relaxation with a linear model assumption and mean squared error loss they showed that the mass of the adversarial perturbation would be concentrated in the mask area as the backdoor sample size goes to infinity hence a highly skewed distribution in the adversarial map for the infected label is expected based on the above observation they designed an aggregated global adversarial peak gap for detecting the adversarial maps with the limitation of the blackbox setting a monte carlo based gradient estimation was used in the gap finally they evaluated their methods on several realworld datasets i have several concerns as follows the relationship between adversarial examples and backdoor samples eg pictures with auxiliary patches is well known therefore it is reasonable to expect that the adversarial singularity phenomenon may not occur in backdoorinfected dnns will algorithm 1 be efficient under cases where the infected range and uninfected range overlap a lot and the threshold t is large as discussed in sec 33 it seems that the blackbox setting is somehow contrived since it only poses challenges on computing the gradient xyt what is the effect of the sample size used for gradient estimation on the detection accuracy with exact gradient computations will the proposed algorithm 1 agap outperform the existing whitebox detection methods the result of lemma 1 seems to be a natural consequence under the linear model assumption meansquarederror loss and the optimization formulation of eq 6 since this paper targets adversarial extreme value analysis in dnns do we have a more formal understanding of the regime of modern neural network architectures overall i think the paper provides a new perspective on backdoor defense but it could be made stronger by addressing some critical aspects as listed above docsepthe paper proposes a new backdoor detection method the method does not require the original poisoned training data or the parameters of the target dnns given an image the proposed method initiates an adversarial attack on it if the model contains backdoors for predicting the target label then the adversarial noise will contain singular patterns which could be detected using extreme value analysis strengths 1 the method assumes the blackbox scenario which is practical 2 some theoretical analysis is provided to establish the connection between adversarial and backdoor attacks 3 the idea of using adversarial attack results to detect backdoors is smart weakness 1 it seems that it is not difficult to avoid being detected if the backdoor patterns are smartly designed 2 there should be some work using interpretation to detect the backdoor due to the close relation between adversarial attack and interpretation i am not sure if the proposed method is still novel from this perspective the paper proposes a good idea for using adversarial attack patterns to diagnose if backdoors exist in models my only concerns are twofold first some adaptive studies could be conducted analyzing the scenarios where the proposed detection method could be circumvented this may not be very difficult since this paper assumes that the backdoor patterns are focused patches second the proposed idea seems to be similar to using interpretation eg heat maps to detect backdoors since adversarial attack inverts interpretation from this
perspective the proposed idea does not look that novel docsepthis paper presents a novel approach for the detection of backdoored neural networks inspired by the deployment of thirdparty networks in cloud services it is argued that this detection task needs to be done in a blackbox hardlabel scenario in this regime it is assumed that the userdefender only has access to the model through making queries and getting back the labels for those queries to solve this newly proposed problem first a connection between the detection of backdoors and adversarial example generation with sparse perturbations is drawn then it is shown that for backdoored models solving this adversarial example generation objective leads to perturbations that are mostly concentrated in the backdoor mask area this argument is shown both theoretically for linear classifiers and empirically for deep neural networks based on this observation which the paper calls the adversarial singularity phenomenon a practical blackbox backdoor detection algorithm is proposed specifically this algorithm first computes the aforementioned adversarial perturbations for some validation data in each class to this end a montecarlo gradient estimation is used to make the algorithm suitable for the blackbox setting then based on the maximum value of these perturbations it is decided whether the network for this label is infected or not the effectiveness of this method named aeva is shown through extensive experiments and ablation studies on various datasets cifar10 cifar100 and tinyimagenet and attacks badnets [1] (https://arxiv.org/abs/1708.06733) labelconsistent [2] (https://arxiv.org/abs/1912.02771) watermark [3] (https://arxiv.org/abs/1712.05526) and invisible [4] (https://arxiv.org/abs/1909.02742) attacks strengths and weaknesses the proposed setting and algorithm are novel in particular the subtle connection between backdoored networks and sparse adversarial example generation can inspire further research in this direction the analysis in the paper is logical and it guides the reader well through the thoughtprocess behind the algorithm the experimental results are comprehensive although the experiments do not contain the most recent backdoor detection methods as baselines this reviewer believes that the current comparison baselines neural cleanse [5] (https://ieeexplore.ieee.org/abstract/document/8835365) and dltnd [6] (https://arxiv.org/abs/2007.15802) are enough to provide the bigger picture this is since almost all existing methods are typically designed for the whitebox scenario and as such they have a huge advantage compared to the current method the paper is generally wellwritten although some parts need proofreading see the minor comments about this below there are two gray areas regarding this submission that need to be clarified further 1 although the authors talk about the practicality of the blackbox scenario in the introduction this assumption needs to be justified further i strongly suggest providing a concrete example where the usecase of this scenario is explained in this example please specify the training data the model the user and why the user who does not own anything from the training data to the model should be worried about backdoors 2 another interesting question that has not been explored is the whitebox performance of the algorithm currently the only part where the blackbox scenario is being dealt with is the adversarial example generation now assuming that the user has access to the model parameters what does the current approach provide in contrast to existing methods asked the other way around
can someone use the montecarlo gradient estimation in conjunction with existing methods to make them blackbox since being blackbox is considered as the strength of the proposed method compared to existing ones these questions need to be answered further questions 1 can you clarify what this sentence below eq 3 means additional difficulty comes from the fact that 2 at the bottom of page 4 the paper reads following this the optimization in eq 3 converts to i am guessing that you meant only the first part of eq 3 right otherwise eq 4 is missing an m term 3 for the empirical study of section 32 do you solve eq 6 in a whitebox setting or using the montecarlo gradient estimator how about figure 6 is this figure generated in the blackbox or whitebox setting 4 does eq 7 mean that there are two ways to get the gap 1 by generating adversarial perturbations for multiple inputs as aeva does 2 by generating multiple perturbations for a single input 5 does the ablation study on the impact of trigger size mean that a scattered trigger is more likely to circumvent the detector than a sparse trigger 6 the results reported in appendix j indicate that in some cases the performance of the blackbox detector is better than the whitebox baselines especially neural cleanse what do you speculate is the reason behind this minor comments and suggestions across the paper the index i has been used to point to training samples eq 2 validation samples eq 3 and classes algorithm 1 consider using a different index for each one of these to avoid misunderstanding consider using different line styles and a bigger plot size for auroc figures consider adding a table of contents andor explanation of different parts of the appendix right now there are sudden jumps from one section to the next with no explanations in the second sentence after eq 15 q has been used to point out the first and last dimensions of x the second one needs to be changed to p omit the in eq 16 in eq 19 it should be n_b → ∞ not n in the explanations that follow eq 24 K is used instead of k in eq 27 there is a hat missing in the second log figure 13 is colliding with the text while there are some gray areas around the proposed method please see the strengths and weaknesses i believe that this is a wellwritten thoughtprovoking paper that can be interesting to the community and bring forward fruitful discussions as such i vote for borderline acceptance of the paper if the authors can provide convincing answers to my two questions in the main review i would be happy to increase my score docsepthis paper proposes the first blackbox hardlabel backdoor detection method where the defender can only access the predicted label of queries specifically the authors first prove that the detection problem is bounded by an adversarial objective based on which they reveal the adversarial singularity phenomenon that the perturbation generated by the adversarial objective follows a highly skewed distribution based on this phenomenon the authors propose the adversarial extreme value analysis aeva to detect backdoors in a blackbox manner the authors verify the effectiveness of their method in defending against badnets blended attack labelconsistent attack and invisible attack on the cifar10 cifar100 and tinyimagenet datasets pros 1 this paper is wellwritten and easy to follow 2 the topic is of great significance and sufficient interest to iclr audiences in particular blackbox hardlabel backdoor detection is probably the hardest problem in backdoor defense and has the highest commercial potential as such i
think this paper should be encouraged although it still has some problems 3 the authors provide some theoretical support for their method and opensource their code which should be encouraged 4 i appreciate that the adaptive attacks and potential limitations are also included in the paper it can prevent readers from being overly optimistic about safety in general i enjoyed reading this paper and i think the proposed method is also moderately novel however i still have some concerns i will increase my score if the authors can address my concerns the detailed comments are as follows major comments 1 my main concern lies in the novelty of lemmas 2 and 3 the optimization problem 6 seems to be a classical problem as such i have to worry about whether its analysis ie lemmas 2 and 3 is not new i will not decrease my score even if lemmas 2 and 3 are not new if the authors can provide proper references and illustrations 2 i would like to see the results of defending against attacks with dynamic and sparse but not compact triggers 3 please analyze the effects of key hyperparameters eg lambda involved in the proposed method 4 please provide more details and results in the analysis of potential adaptive attacks minor comments 1 please double check the reference format especially that of conference papers please cite the official version of all papers eg badnets should be ieee access dltnd should be eccv note i didnt check all proofs carefully but the lemmas seem reasonable and consistent with the experimental results postrebuttal comments i would like to thank the authors for their clarifications since most of my concerns have been addressed i increase my score to 8 ps it would be better if you could point out whether there are similar theories in adversarial learning i think it will help readers to better understand them and it will not reduce your contributions a practical and novel backdoor detection method with theoretical guarantees
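for illustration a generic montecarlo zerothorder gradient estimate of the kind referred to across these reviews can be sketched as follows this is an illustrative standin rather than the aeva implementation and loss_fn is a hypothetical wrapper around model queries that returns a scalar

```python
import numpy as np

def estimate_gradient(loss_fn, x, num_samples=100, sigma=1e-3, rng=None):
    # monte-carlo (zeroth-order) gradient estimate of a black-box scalar loss
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.shape)
        # antithetic finite differences: (f(x + s*u) - f(x - s*u)) / (2s) * u
        grad += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) / (2 * sigma) * u
    return grad / num_samples

def global_adversarial_peak(perturbation):
    # the detection statistic sketched in the reviews: for a backdoored
    # target label the perturbation mass concentrates on the trigger
    # region, so its maximum absolute value is anomalously large
    return np.abs(perturbation).max()
```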
### Summary:
|
this work proposed to detect backdoors in a blackbox manner where only the model output is accessible most reviewers think it is a valuable task and this work provides a novel perspective of using adversarial perturbation to diagnose the backdoor some theoretical analysis for linear models and kernel models is provided there is still a huge gap in analyzing the dnn model but on the other side it provides some insight to understand the proposed method and could inspire further studies besides since there have been many advanced backdoor attack methods and many more are coming out i am not sure that the proposed detection criteria are well generalizable considering only some typical attack methods are tested however i think the studied problem is valuable and the presented analysis is inspiring for future works thus i recommend acceptance
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
review for manifold regularization with gans for semisupervised learning summary the paper proposed to incorporate a manifold regularization penalty to the gan to adapt to semisupervised learning they approximate this penalty empirically by calculating stochastic finite differences of the generators latent variables the paper does a good job of motivating the additional regularization penalty and their approximation to it with a series of experiments and intuitive explanations the experiment results are very thorough and overall promising the paper is presented in a clear manner with only minor issues novelty/significance the authors add a manifold regularization penalty to gan discriminators loss function while this is a simple and seemingly obvious approach it had to be done by someone thus while i dont think their algorithm is super novel it is significant and thus novel enough additionally the authors use of gradients of the generator as an approximation for the manifold penalty is a clever one questions/clarity it would be helpful to note in the description of table 3 what is better higher/lower also table 3 seems to have standard deviations missing in supervised dcgans and improved gan for 4000 labels and is there an explanation on why there isnt an improvement in the fid score of svhn for 1000 labels what is the first line of table 4 is it supposed to be combined with the second if not then it is missing results and is the pi model missing results or can it not be run on too few labels if it cant be run it would be helpful to state this on page 11 in figure a2 the first word needs to be capitalized in figure a1 why is there a dark point at one point in the inner circle what makes the gradient super high there what are the differences of the 6 pictures in figure a7 iterations docsepthis paper builds upon the assumption that gans successfully approximate the data manifold and uses this assumption to regularize the semisupervised learning process the proposed regularization strategy enforces that a discriminator or a given classifier should be invariant to small perturbations on the data manifold z it is empirically shown that naively enforcing such a constraint by randomly adding noise to z could lead to undersmoothing or oversmoothing in some cases which can harm the final classification performance consequently the proposed regularization technique takes a step of tunable size in the direction of the manifold gradient which has the effect of smoothing along the direction of the gradient while ignoring its norm extensive experiments have been conducted showing that the proposed approach outperforms or is comparable with recent stateoftheart approaches on cifar 10 especially in presence of fewer labelled data points on svhn however the proposed approach fails in comparison with kumar et al 2017 but performs better than other approaches furthermore it has been shown that adding the proposed manifold regularization technique to the training of gan greatly improves the image quality of generated images in terms of fid scores and inception scores also by combining the proposed regularizer with a classical supervised classifier via pretraining a gan and using it for regularization decreases classification error by 2 to 3 finally it has also been shown that after training a gan using the manifold regularization the algorithm is able to produce similar images giving a low enough perturbation of the data manifold z overall this paper is well written and shows significant improvements especially for image
generation however the novelty is rather limited as similar ideas have been undertaken eg mescheder et al 2018 but in different contexts the paper would be improved if the following points are taken into account a comparison with graph convolutional network based techniques seems appropriate eg kipf and welling 2017 how do the fidinception improvements compare to mescheder et al 2018 it would be interesting to discuss why the fid score for svhn gets worse in presence of 1000 labels although there is a clear improvement in fid scores for cifar10 it would be informative to show the generated images w and wo manifold regularization more analysis should be provided on why kumar et al 2017 perform so well on svhn it should be stated that bold values in tables do not represent best results as it is usually the case but rather results for the proposed approach docsepthe paper tackles the problem of semisupervised classification using ganbased models they proposed a manifold regularization by approximating the laplacian norm using the stochastic finite difference the motivation is that making the classifier invariant to perturbations along the manifold is more reasonable than random perturbations the idea is to use gan to learn the manifold the difficulty is that the gradient of laplacian norm is impractical to compute for dnns they stated that another approximation of the manifold gradient ie adding gaussian noise delta to z directly fz fgzdeltaf has some drawbacks when the magnitude of noise is too large or too small the authors proposed another improved gradient approximation by first computing the normalized manifold gradient bar rz and then adding a tunable magnitude of bar rz to gz ie fz fgz epsilon bar rz f since several previous works kumar et al 2017 and qi et al 2018 also applied the idea of manifold regularization into gan the authors pointed out several advantages of their new regularization pros the paper is clearly written and easy to follow it gives some intuitive explanations of why their method works the idea is simple and easy to implement based on a standard gan the authors conduct various experiments to show the interaction of the regularization and the generator cons for semisupervised classification the paper did not report the best results in other baselines eg in table 1 and 2 the best result of vat miyato et al 2017 is vatent 1315 for cifar10 4000 labels and 428 for svhn 1000 labels the performance of the proposed method is worse than the previous work but they claimed stateoftheart results the paper also misses several powerful baselines of semisupervised learning eg 12 the experimental results are not very convincing because many importance baselines are neglected the paper does not have a significant novel contribution but rather extends gans improvedgan mostly with a manifold regularization which has been explored in many other works kumar et al 2017 and qi et al 2018 im wondering whether other smoothness regularizations can achieve the same effect when applied to semisupervised learning eg spectral normalization3 it would be better to compare with them references 1 adversarial dropout for supervised and semisupervised learning aaai 2018 2 smooth neighbors on teacher graphs for semisupervised learning cvpr 2018 3 spectral normalization for generative adversarial networks iclr 2018
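to make the regularizer described in these reviews concrete, a minimal pytorch-style sketch could look as follows; it assumes a generator g producing image tensors and a classifier/discriminator f producing logits, and every name and shape here is an illustrative assumption rather than the paper's code:

```python
import torch

def manifold_reg(f, g, z, delta_std=1e-2, eps=0.1):
    # stochastic finite difference of the generator gives a manifold direction
    delta = delta_std * torch.randn_like(z)
    gz = g(z)                                      # (B, C, H, W) assumed
    r = g(z + delta) - gz
    # normalize per sample so only the direction matters, not its norm
    r_bar = r / (r.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
    # penalize change of f along the manifold with a tunable step eps
    return (f(gz) - f(gz + eps * r_bar)).pow(2).sum(dim=1).mean()
```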
### Summary:
|
the paper proposes a method to perform manifold regularization for semisupervised learning using gans although the ssl results in the paper are competitive with existing methods r1 and r3 are concerned about the novelty of the work in the light of recent manifold regularization ssl papers with gans a point that the ac agrees with given the borderline reviews and limited novelty of the core method the paper just falls short of the acceptance threshold for iclr
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors have proposed a wellmotivated contrastive learning method for solving goalconditioned rl tasks thus removing the need for any auxiliary losses or data augmentations for representation learning the authors then show how this contrastive objective leads to a critic implicitly learning the qvalue function and empirically demonstrate the effectiveness of their simple approach on many tasks strengths 1 the proposed method simplifies the representation learning planning in rl problem by using the inner product of learnt representations as a correlate for the qvalue function 2 this is especially useful for the pixelbased environments for which the authors have shown performance gains in many continuouscontrol tasks 3 i also appreciate that the authors mentioned certain failed experiments in appendix h which help in better understanding of the proposed method 4 the paper presents theoretical motivation for their method and show how this contrastive objective relates to a critic in deep rl methods 5 thorough comparison with several related works weaknesses 1 it is unclear why random goals are sampled to train the actor loss can the authors shed more intuition on this decision 2 can the authors also comment if theres some minimum goal dimension after which contrastive rl starts performing better than the modelbased baseline 3 in figure 5 while contrastive rl and its variants surely perform better in simpler statebased tasks however looking at the plots for imagebased observation tasks fig 5b it seems that clearning alone does quite well especially in the challenging tasks of sawyer bin also given that in fetch push fig 5b contrastive rl nce clearning gets a huge boost over other the baseline i think thats also because of the addition of clearning pipeline only yes the authors have addressed the major limitations of their method docsepthis paper proposes leveraging contrastive learning to actionlabeled trajectories where the learned representation will connect to the goalconditioned value functions specifically they utilize contrastive learning to estimate the qfunction the critic function parameterized by the innerproduct between representation in contrastive learning can also be applied as the critic function in actorcritic algorithms in the rl context theoretical analysis of the convergence guarantees has been provided in the meantime the empirical results suggest that the proposed framework can lead to better performances on goalconditioned benchmarks compared with prior methods including methods using goalconditioned rl and different representation learning strategies strengths 1 originality and significance although this work is built upon existing literature on goalconditioned rl and contrastive learning in rl it has enough novelty since it provides a clear direction to link the reward maximization and learning representation using contrastive learning the way offered in the paper to directly use contrastive learning to perform goalconditoned rl is very simple and efficient this work may have the potential to benefit the rl representation learning and robotics communities 2 relevance the authors discuss most related works including goalconditoned rl representation learning for rl and contrastive learning for rl detailed comparison has also been given in the associated sections 3 algorithms theories and evualation the designed algorithm is decent and i feel this can broadly apply to other rl regimes the convergence guarantee makes it more solid the experiments part is also very 
solid including most relevant baselines and benchmarks the appendix h on failed experiments is also very helpful for the community weaknesses please note that below are only for questions and potential discussions there is no need to rerun experiments during the rebuttal phase 1 about experiments it seems that in some cases improvements compared with clearning are minor fig 5 can the authors give some explanation meanwhile why not compare with other contrastive learning approaches in rl mentioned in the early sections 2 about other common challenges in goalconditioned rl can this approach tackle common challenges in goalconditioned rl especially for offline datasets eg the generalization ability to different distributions 3 minor typos line 107 $\pi\left(\tau \mid s_{t}\right)$ should be $\pi\left(\tau \mid s_{g}\right)$ eq 4 consider using another color the current v is hard to recognize when printing out the paper to read the authors mentioned the limitations in the paper mainly about how to generalize the work into other rl settings i think this would be the potential direction for future works i raise a few questions which are more like vague points instead of limitations i think this is a decent paper and i would vote for acceptance
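for concreteness, the inner-product critic and contrastive objective discussed in these reviews can be sketched as a generic infonce loss with the batch diagonal as positives; encoder names and shapes are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def contrastive_critic_loss(sa_encoder, g_encoder, state_action, goal):
    # the critic is the inner product f(s, a, g) = phi(s, a) . psi(g),
    # which the theory relates to a goal-conditioned q-value
    phi = sa_encoder(state_action)          # (B, d)
    psi = g_encoder(goal)                   # (B, d)
    logits = phi @ psi.t()                  # (B, B) pairwise critic values
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)  # (i, i) pairs are positives
```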
### Summary:
|
how to design rl algorithms that directly acquire good representations this paper gives an answer that contrastive representation learning can be cast as a goalconditioned rl using the inner product of learned representations the technical novelty of this paper is sound with the thorough theoretic motivation of the proposed method and solid experiments the presentation of this paper is also satisfactory all the reviewers provided positive feedback on this paper i also enjoy reading this paper
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors show how model multiplicity ie different models which have similar test set performance can result in explanation multiplicity using shapley values for feature importance often models that are underspecified ie have the same hyper parameters setup but different initializations but have similar test performance can result in different feature importance rankings and even entirely contradictory ones the authors compare rashomon effect vastly different hyper parameters similar test performance and underspecification same hyper parameters different inits similar test performance the rashomon effect has a much greater impact on agreement and contradiction as expected but underspecification also results in fairly high levels of contradiction very strong experimental design tackling a very important issue results felt rushed things are pushed to the appendix that should have been kept in the main paper i found figure 2 to be confusing but the caption was very clear also the right hand side of figure 5 overall the paper is very well written and the work is very significant the authors show how the arbitrary selection between similarly performing models can result in drastically different explanations they couch the work in datasets related to credit and insurance an area in which providing good explanations would allow the individual to modify their behavior such that they can score higher in their work they show how the initial explanation would often not hold up if a different but similarly performing model had been selected docsepthis paper studies how model indeterminacy impacts explanation multiplicity two aspects of model indeterminacy are considered predictive multiplicity underspecification rashomon sets and epistemic uncertainty as models change explanations are bound to change as the authors show via experiments however authors also show that predictive multiplicity or epistemic uncertainty are not good indicators of the extent to which the explanations agree there has been prior work on disagreement between explanations by lakkaraju et al where different local explainability methods are compared with respect to a single ml model there is also follow up bayesian work on quantifying the uncertainty of local explanations by the same authors rashomon sets have been used by rudin et al as the authors point out the problem studied by the authors is a very useful one especially in practise the experiments are quite thorough and good however the results and their implications are perhaps less clear when multiple almost equivalent models have the same prediction for a user even then local explanations/gradients tend to be different this is a useful insight authors hypothesise that this may be due to the fact that explanations depend on the shape of the predictive function also seen visually for experiments on small problem sizes the almost equivalent models are characterised using rashomon sets and underspecification sets randomness in training the results are for one local explanation method shap explainer that approximates shapley values the related insight is a bit confusing ie the extent to which the explanations disagree is not determined by the extent to which the predictions of multiple almost equivalent models differ is this correct pls clarify the other insight is if we consider epistemic uncertainty this might not be a good predictor of explanation multiplicity for that user across the almost equivalent models is this correct pls clarify the main issues with the paper are a lack of clarity on the following 3
terms predictive multiplicity model indeterminacy and epistemic uncertainty b lack of clarity on results c experimental results are based on only 1 local explanation method shap d the analytical results are based on linear setting alone see above docsepthe paper introduces a notion of explanation multiplicity this is similar to previous works that look at the effects of model indeterminacy such as predictive multiplicity the authors first show how explanations can vary for models with similar performance on toy examples and theoretically in the linear case the authors demonstrate that model multiplicity can have noticeable effects on the explanations using shapley sampling explanations and this can lead to the shapley explanations contradicting in practice for models irl strengths this paper is very well written and easy to follow it was a pleasure to read and i had no issues understanding the work the intuition the paper builds with toy examples before providing real experiments is nice i enjoyed this aspect the idea is wellfounded previous works on things like predictive multiplicity are quite concerning and this work fits into that literature nicely the fig 1 is quite nice very clear descriptive this work also makes sense in the context of explanations where aspects like stability have been studied extensively in the past ie small perturbations leading to noticeable differences or simply rerunning explanations leading to large differences so this work is a nice extension in that regard to a related but different notion about how explanations that seemingly should be the same are in fact different the experiments are clear weaknesses my main critique of this work is that the effects of multiplicity in this case contradicting explanations is quite a practical concern and while the authors have shown this can appear in some cases the extent to which explanation multiplicity appears in practice is still somewhat left unanswered the authors empirically demonstrate their technique using a sampling shapbased approach with the mean as the baseline distribution choices around the baseline distribution vary pretty widely and often other variants of shap are used like treeshap deepshap practically because theyre more efficient its nice that this work provides evidence this can be a problem in the case presented but it would be really compelling to evaluate the degree to which explanation multiplicity occurs in other variants of shap that are practically used and other choices of baselines because i suspect readers will be looking for their irl use cases this paper is a weak accept for me because i think the work is quite solid but extending the evaluation to additional explanation use cases would make the impact much higher authors have done a sufficient job
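to illustrate the kind of experiment these reviews describe, here is a small self-contained sketch that trains two underspecified models (same hyperparameters, different seeds) and compares sampling-based shapley rankings; the monte-carlo shapley estimator with a feature-mean baseline is a generic stand-in that may differ from the paper's exact setup:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.neural_network import MLPClassifier

def sampled_shapley(predict, x, background, n_perm=200, seed=0):
    # monte-carlo shapley values for one input x: average marginal
    # contribution of each feature over random orderings, with absent
    # features imputed by the background (mean) values
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_perm):
        z = background.copy()
        prev = predict(z[None])[0]
        for j in rng.permutation(d):
            z[j] = x[j]
            cur = predict(z[None])[0]
            phi[j] += cur - prev
            prev = cur
    return phi / n_perm

# two "underspecified" models: same hyperparameters, different seeds
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
models = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=s).fit(X, y) for s in (0, 1)]
phis = [sampled_shapley(lambda z, m=m: m.predict_proba(z)[:, 1],
                        X[0], X.mean(axis=0)) for m in models]
print("rank agreement:", spearmanr(phis[0], phis[1]).correlation)
```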
### Summary:
|
this paper introduces the notion of explanation multiplicity the authors first show how explanations can vary for models with similar performance on toy examples and theoretically in the linear case they demonstrate that model multiplicity can have noticeable effects on the explanations using shapley explanations this paper puts forth a strong approach tackling a very important issue so all the reviewers concur that this paper should be accepted at this time
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the manuscript proposes a novel architecture with a slotbased decouplingaggregation framework for unsupervised sequence prediction the model is motivated by preventing spatiotemporal mode collapse which affects many existing methods the experiments clearly show that the model addresses this issue and performance comparisons across three commonly used datasets are presented strong points overall well motivated novel architecture for addressing the issue of mode collapse in unsupervised predictive training the proposed decouplingaggregation framework is demonstrated to learn a highly diverse set of modefactors which are combined to cover a diverse set of output modes with high accuracy the analytical experiments show that the performance of modernn consistently across datasets benefits from increased diversity of spatiotemporal modes whereas the performance of other methods often diminishes when trained on more diverse data the description seems sufficiently detailed for reproducing the experiments commentsquestions 1 inconsistencies between text equations and figure eq 3 is missing the shared ffn for dimensionality reduction text should explicitly state that the slot bus input gt is tanh activated unlike the gates id also name it something like slot bus input instead of input modulation gate since the input gate it is usually considered to do the modulation of the cell input 2 motivation for implementation details in adaptive slot fusion missing 3 for clarity is sigmait cdot wtextfuse0 it what you call the residual connection im not exactly sure about the terminology but i believe residual connections are usually linear 4 i think instead of slott1 you could just write qt since these queries do not correspond to the spatiotemporal slots of the previous timestep using the same notation is confusing 5 since the figure doesnt have a descriptive caption id consider placing it on the same page as the description of the architecture 6 a question about the modernn ie the stacked modecell do the higher layers receive only the cell output mathcalht as input or do they also have access to the slot bus mathcalbt 7 typo figure 5 ture should be true 8 did you also calculate the adistance using the cell outputs instead of the memory states 9 the tsne visualization of figures 3 and 5 for kth and the radar echo dataset would be interesting are the modes as clearly separated 10 theres a broken figure reference in appendix b3 11 i might have overlooked it but i didnt see a comparison of modernns with different numbers of modecells i can imagine you played a bit with this hyperparameter id be interested to know your observations 12 the method for selecting hyperparameters is not described 13 how do training times compare to convlstm rim and other architectures 14 no discussion of limitations 15 do you have initial results on datasets with a larger diversity of spatiotemporal modes eg human36m or kitti i have updated my score as my concerns are mostly addressed overall interesting work however for stronger recommendation of acceptance some of the points above should be addressed docsepthis paper defines a phenomenon spatiotemporal mode collapse in the training of unsupervised predictive models they propose an rnnbased approach to learning structural hidden representations in temporal data the proposed idea was experimented with and compared with respect to the convolutional lstm baseline and several other temporal modeling methods ie rim or convttlstm strengths the paper presents a novel temporal model to 
capture spatiotemporal structures in the data and perform better video prediction the quantitative experiments were performed on three datasets mixed moving mnist kth action and radar echo datasets weaknesses the cell of the proposed modernn is quite similar to an rnn cell with multihead attention one relevant work using a similar idea was not cited further it performs better or on par with the proposed approach in the kth action dataset lin et al selfattention convlstm for spatiotemporal prediction aaai 2020 https://doi.org/10.1609/aaai.v34i07.6819 in generative models mode collapse means the generated samples being identical or very similar to each other it may be caused by the imbalanced distribution of the training set or dependent structures and bias in the data the motivation of spatiotemporal mode collapse is valid however the examples and used datasets are not enough to represent it for instance moving mnist or other datasets do not contain any imbalanced or entangled factor of variation that will cause mode collapse even though it was argued on the contrary in the related work the proposed work is highly related to feature disentanglement phydnet guen thome 2020 is cited but was not included in the moving mnist or other dataset results how were the previous methods in table 2 and 4 compared did you retrain/implement or were they taken from the reference papers there are several confusing points for instance predrnns mse on the radar dataset was reported as 442 in the reference paper but it is 842 in table 4 what is mixed moving mnist in reference works mostly moving mnist either freshly rendered or the version in shrivastava et al 2015 is being used several typo issues in the entire text for instance all titles resulsresults i think the proposed problem of spatiotemporal mode collapse was not covered and described clearly this makes it difficult to understand where the contribution of the proposed method comes from and what it improves the spatiotemporal slots idea is reminiscent of attention modules on temporal data and leads to a weighted fusion of temporal weights it is very similar to selfattention convlstms mainly the introduced problem is not clear and the approach is also limited in novelty these are the points that affected my decision
method id see it as a good engineering effort of combining existing methods rather than proposing a novel one with new insights although this approach seems to be a good engineering effort to come up with an effective model that works reasonably well on very smallscale datasets with relatively low complexity i believe the novelty of this method is very limited moreover comparison to sota and evaluation on larger complicated datasets are missing
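as a rough illustration only, a slot-based decoupling/aggregation cell of the kind debated above might be structured as follows; this sketch uses flat vectors instead of convolutional features and all details (query projection, gating) are my assumptions, not the paper's exact modecell:

```python
import torch
import torch.nn as nn

class SlotFusion(nn.Module):
    def __init__(self, dim, n_slots):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(n_slots, dim))  # learnable mode slots
        self.query = nn.Linear(dim, dim)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h):                       # h: (B, dim) hidden state
        # attention of the hidden state over the slot bank
        attn = torch.softmax(self.query(h) @ self.slots.t()
                             / h.size(-1) ** 0.5, dim=-1)
        slot_feat = attn @ self.slots           # (B, dim) weighted slot readout
        # adaptive fusion with learnable, input-dependent importance weights
        w = torch.sigmoid(self.gate(torch.cat([h, slot_feat], dim=-1)))
        return w * slot_feat + (1.0 - w) * h
```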
### Summary:
|
the paper presents a novel architecture modernn for unsupervised video prediction by learning spatiotemporal attention in the latent subspace slots modernn effectively learns modular features using a set of mode slots and adaptively aggregates the slot features with learnable importance weights the paper has promising results on several benchmark video prediction datasets during the postrebuttal discussion reviewers wt6k and vmmf responded to the authors rebuttal but there was no discussion among them the consensus is that even though the paper is a very strong engineering effort it was not clear how the proposed architecture addresses the spatiotemporal mode collapse problem tsne in fig 3/10/13 is insufficient to show disentangled feature space in fact phydnet was designed to disentangle different factors physical vs unknown hence not a good baseline hsieh et al 2018 is a better fit in addition synthetic data examples would be helpful to explain the underlying mechanism of the model and provide more insights for the video prediction community for this reason i recommend rejecting this paper as it is now and encourage the authors to revise the draft and submit to future venues hsieh j t liu b huang d a li f f niebles j c 2018 january learning to decompose and disentangle representations for video prediction in neurips
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
strengths the results are presented clearly and intuitively the connection and explanations provided via ordinary linear sems is much appreciated similar to verma constraints for admgs the results on equality constraints here may be translated into improvements in structure learning algorithms that is more informative structures can be learned by algorithms that take advantage of such constraints weaknesses i think some of the theory could have been pushed a bit further many of the results stem from applications of existing criteria phrased for admgs open questions for the authors are there ways to think of these generalized equality constraints in linear hawkes processes as museparation in a conditional dmg just as we are able to some verma constraints as mseparation in a conditional admg that is can these be phrased as constraints that appear in postinterventional distributions at first glance it would seem so museparation between 1 and 4 should hold after cutting incoming edges intervening on 3 i wonder if that would lead to an interesting theory in the nonparametric form of these constraints is there always an equality constraint implied by a missing edge i think this probably isnt the case because its not true for regular admgs sorry for answering my own question there but hopefully the follow up is also interesting is there a notion of maximality that can be defined here similar to mags for ordinary conditional independence and margs for generalized equality constraints the latter might be quite hard but it may be worth thinking of a maximality condition for ordinary constraints if there isnt one already because maximal graphs form the natural units for structure learning in some sense additional related works the paper is very thorough in its literature review i only have a few suggestions that i think are relevant to the introductory discussion on structure learning algorithms that take advantage of generalized equality verma constraints many of the references listed there currently are technically papers that lay out theory for what these equality constraints may look like without proposing specific algorithms to take advantage of them structure learning with equality constraints in linear gaussian sems httpsarxivorgpdf201006978pdf structure learning with equality constraints in linear nongaussian sems httpsarxivorgpdf200711131pdfdocseporiginality the originality of this work is primarily in noting that graphical models of hawkes processes are closely related to linear structural equations models and in demonstrating properties of the parameters of the model this is an original insight and might motivate discovery of additional properties of such models significance the theory of causal inference in continuous time settings is not well developed this work makes some progress on developing this theory and will likely be of use to analysts working in these settings that said the main causal results of this paper in sections 4 and 5 have a somewhat sparse presentation the approach taken to presenting the results of these sections is that once the relationship between sems and hawkes process models is established results from linear sems can be used in the hawkes process models under consideration a simple example of the titular equality constraints is presented illustrating a setting in which the novel equality constraints can help adjudicate between candidate models figure 3 b c but it is not clear when these constraints arise in general it would be helpful to see some discussion of 
graphical conditions under which these constraints arise and when they are nontrivial even if that takes some recounting of material from chen et al 2014 such elaborations may be added as supplementary materials as they will not necessarily be original contributions technical quality and clarity the paper is clearly written and seems technically correct though i do not have the necessary background to validate all the proofs docsepidentification of a model is an important topic especially in causal inference the paper builds on prior work in the field of hawkes processes and testable constraints latent projection and algebraic equality constraints the authors study the new identification results for a wellknown hawkes process when there exist some unobserved variables though most of technics have been proposed their conclusions in hawkes processes appear both novel and interesting it would be helpful to describe some realworld implications of their conclusions i might have some misunderstandings about this paper so it is very likely that i will change my score if the other reviewers provide some new points
### Summary:
|
the authors study causal identifiability for linear hawkes processes by casting the processes into a linear sem under appropriate assumptions and provide the identification results for linear hawkes processes using the existing results for linear sem the reviewers appreciated the contribution as well as the quality of presentation
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposes a method for adversarial attacks on dnns strattack designed to exploit the underlying structure of the images specifically incorporating groupsparsity regularization into the generation of the adversarial samples and using an admm based implementation to generate the adversarial perturbations the paper is structured and written well with clear articulation of technical details the experiments and reported results are comprehensive and clearly showcase the efficacy of the proposed solution im not enough of an expert on the subject matter to comment about the novelty of this proposed approach however it would help to elaborate more on the related work section7 with clear contrasting of current method esp using structural information for adversarial samples theoretical implications underlying rationale and importantly calling out the benefit over the previous lp norm based approaches regarding group sparsity it is unclear as to the assumed structural constraints is the sliding mask expected to be only 2x2 13x13 for mnistcifar10 imagenet respectively impact of largersmaller or skewed sizes sensitivity to image types docsepthe paper proposes a novel approach to generate adversarial examples based on structured sparsity principles in particular the authors focus on the intuition that adversarial examples in computer vision might benefit from encoding information about the local structure of the data to this end lp group norms can be used in contrast to standard global lp norms when constraining or penalizing the optimization of the adversarial example the authors propose an optimization strategy to address this problem the authors evaluate the proposed approach on real data comparing it against stateoftheart competitors which do not leverage the structured sparsity idea the paper is well written and easy to follow the presentation of the algorithms for i the nonoverlapping and ii overlapping groups as well as iii the proposed refinement are clear the experimental evaluation is interesting and convincing the further experiments in the supplementary material add value to the overall discussion the main downside of the paper is that the proposed idea essentially consists in replacing the standard ellp norm penaltyconstraints with a groupellp one while this provides interesting technical questions from the algorithmic perspective from the point of view of the novelty the paper does not appear an extremely strong contribution docsepthe paper proposes a method to find adversarial examples in which the changes are localized to small regions of the image a groupsparsity objective is introduced for this purpose and it is combined with an lp objective that was used in prior work to define proximity to the original example admm is applied to maximize the defined objective it is shown that adversarial examples in which all changes are concentrated in just few regions can be found with the proposed method the paper is clearly written and results are convincing but what i am not sure i understand is what is the purpose of this research among the 4 contributions listed in the end of the intro only the last one interpretability seems to have a potential in terms on the impact yet am not quite sure how obtained groupsparse adversarial patterns better shed light on the mechanisms of adversarial perturbations i think the mechanisms of adversarial perturbations remain as unclear as they were before this paper i am not ready to recommend acceptance of this paper because i think the due effort to 
explain the motivation for research and its potential impacts has not been done in this case update: the discussion and the edits with the authors convinced me that i may have been a bit too strict i have changed my score from 5 to 6
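for concreteness, the group-sparsity ingredient can be illustrated through its proximal operator — block (group) soft-thresholding — which an admm solver would apply to the perturbation at each iteration; the non-overlapping sliding-mask layout below is a simplification of the paper's setting and all names are illustrative:

```python
import numpy as np

def group_soft_threshold(delta, lam, group_size=2):
    # proximal operator of the l2/l1 group-sparsity penalty: each
    # group_size x group_size patch of the perturbation is shrunk toward
    # zero as a block, so surviving changes stay spatially localized
    out = np.zeros_like(delta)
    h, w = delta.shape
    for i in range(0, h, group_size):
        for j in range(0, w, group_size):
            block = delta[i:i + group_size, j:j + group_size]
            norm = np.linalg.norm(block)
            if norm > lam:
                out[i:i + group_size, j:j + group_size] = (1.0 - lam / norm) * block
    return out
```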
### Summary:
|
this paper contributes a novel approach to evaluating the robustness of dnn based on structured sparsity to exploit the underlying structure of the image and introduces a method to solve it the proposed approach is well evaluated and the authors answered the main concerns of the reviewers
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper extends the lastiterate analysis to dpsgd on strongly convex and smooth loss functions the main contribution compared the similar analysis gd is to take two kinds of amplification into account 1 postprocessing by adding gaussian noise 2 subsampling the empirical results show that the proposed methods outperforms compositionbased algorithms this paper makes a solid step towards lastiterate analysis for dpsgd although there is some space for improvement i believe the paper surely passes the bar for acceptance strength 1 lastiterate analysis is a very important direction to further improve privacyutility tradeoff for dpsgd the authors manage to augment the previous analysis for gd with amplification by postprocessing and subsampling which is a solid contribution weakness 1 the paper is not well written enough there is huge space for improvement on the organization of the sections the section names and the way to present the contributions for example even with some background knowledge the titles for section 4 and 5 are somewhat confusing at the first glance for me i suggest merging 4 and 5 into one section because they are actually two steps in one proof there are also some sentences that do not read well eg line 109 uppercase and lowercase letters are not used in a universal way in section titles some citations are outdated for example tramer and boneh 2022 are accepted to iclr 2022 but the cited one is still the arxiv version 2 table 2 does not report what kind of clipping is used also the number of iterations used in the experiments are not reported 3 although hyperparameter tuning is typically not taken into consideration when counting privacy budget the authors should still expose the details of their hyperparameter tuning process 1 the writing is not good enough 2 many details of the evaluation is missing docsepthe paper focuses on improving the privacy analysis of dpsgd the paper studies the privacy guarantees when only the last iterate is released the papers main contribution in this particular setting is to provide tighter privacy bound for strongly convex loss functions finally the paper provides an example of a classification task where a tighter privacy analysis leads to improved accuracy compared to when compositionbased privacy analysis is used overall i think this is an extremely important result and strongly recommend the paper be accepted 1 the main result has significant practical implications on differentially private machine learning training 2 there are many similarities with 14 perhaps a minor weakness of the paper is that it isnt entirely clear what are the technical challenges the authors had to overcome to complete the analysis which were not already addressed in 14 yes docsepthis work improves the analysis of the lastiterate privacy bound which assumes the internal state of the algorithm is not revealed the new analysis derives a privacy loss that grows slower than the one derived from the composition theorem if the network converges slowly or is trained for multiple epochs i have a basic understanding of the theoretical part of this work but didnt go deeper into the proofs for me the strength and weakness of this work are strength so far dp learning still suffers from dramatic performance loss compared with nonprivate learning tailored and tight privacy bound can help reduce the required noise under certain privacy budgets and improve the performance weaknesses 1 this work extends the study of lastiterate privacy bound by considering minibatch gradient 
and privacy amplification by subsampling etc to apply the new analysis in practice many assumptions have been made which may not be satisfied when training a real network so i expect the resulting network to obviously outperform others so far it does not perhaps a tighter bound is needed 2 i am a little confused about the evaluation of the proposed privacy analysis i see the baseline performance given in this work is lower than in its original work for example in the experiment on cifar10 scatternet can actually obtain an accuracy of 69.3 resnext29 can achieve 80.0 simclr can achieve 92.7 all are on par with using the method proposed in this work na docsepin this paper authors extend recent work in convergent privacy cost of convex optimization to the subsampled gradient descent gd setting authors show that for strongly convex lipschitz loss functions the privacy cost of releasing the last iterate of dpsgd does not grow to infinity as the number of epochs grows but that there is a finite upper bound for this loss authors present a novel privacy analysis that builds on top of recent work by chourasia et al 2021 and is able to capture the privacy amplification by subsampling finally authors demonstrate empirically how the reduced privacy cost improves the prediction accuracy of a logistic regression model learned under dp in many applications the intermediate steps of dpsgd are not needed but are released which causes the privacy cost of these types of release schemes to grow as the number of iterations grows in this paper authors continue the research in the direction of releasing only the last iterate and quantifying the privacy amplification due to keeping the internal states secret more specifically this paper extends the work by chourasia et al 2021 who proposed a bound for the privacy cost of the dp gradient descent algorithm as the gd algorithm is often computationally expensive authors take the natural next step and quantify the privacy cost of stochastic gd authors show several novel theoretical results mainly thms 42 43 52 and 53 that are based on a novel privacy analysis capturing the privacy amplifying effect of subsampling the new privacy bound is significantly tighter than the privacy bound of chourasia et al that does not capture the subsampling amplification i also think that the paper is generally well written and after a couple of reads the story and the contributions become very clear there are some parts that i think could be further clarified and i will add those to the questions section the most significant limitation of this work must be the assumptions on the loss function the strong convexity and the lipschitzity rule out quite a few applications of dpsgd however i still believe that the paper makes significant contributions that will be beneficial in the future works on convergent bounds of dpsgd i think the limited use cases due to assumptions on the loss function could be further discussed negative societal impact is not discussed in the paper but i believe the paper is purely theoretical and does not have direct negative impacts
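for readers less familiar with the algorithm under analysis, a single dpsgd step (per-example clipping plus gaussian noise) can be sketched as follows; this is the generic recipe with illustrative names, and the paper's contribution is the privacy accounting for releasing only the final iterate of this loop, not the step itself:

```python
import numpy as np

def dpsgd_step(w, per_example_grads, lr, clip_norm, noise_multiplier, rng):
    # clip each per-example gradient to norm clip_norm
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    # add gaussian noise calibrated to the clipping norm, then average
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    batch_size = len(per_example_grads)
    return w - lr * (sum(clipped) + noise) / batch_size
```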
### Summary:
|
this work shows that for strongly convex and smooth loss functions running dpsgd has a privacy cost that stops growing at some point this answers a question that has been open and is of significant theoretical and practical interest they show empirically that this new result can allow one to get better privacy-accuracy tradeoffs in some cases this work is a big step ahead in the analysis of dpsgd and i recommend acceptance
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors propose zorro a posthoc explanation method for node classification with graph neural network architectures zorro leverages ratedistortion theory to generate masks that select nodes in the target node neighbourhood and their most important features the problem is very relevant to the gnn community and i am glad to see more works coming in on this topic the idea of relying on ratedistortion theory is interesting and original and to the best of my knowledge this is the first time i see it used to tackle this research problem the paper is well structured and organised the original contribution is sufficient related work is sufficiently well covered nevertheless the paper suffers from shortcomings a the paper claims that zorro can generate multiple disjoint explanations apparently all highly faithful this seems to be at odds with the authors claim to explain the behaviour of the model in other words if i were on the receiving side and i was given multiple disjoint explanations which one should i trust more how can explanations shed a light on the behaviour of models if they are disjoint i see zorro is able to generate overlapping explanations as well a property compatible for example with examplebased technique in xai literature such as counterfactual explanations but i have mixed feelings on the effectiveness of disjoint explanations in practice b a drawback of this work is the complete absence of humanbased evaluation i acknowledge explainable ai literature is ripe with examples of accepted papers but the authors seem to deliberately disregard this aspect we are not interested if an explanation is congruent to human understanding sec1 if humans are not important then what is the reason you explain your predictions if the goal is limiting to debugging a model perhaps the narrative should be revisited all in all i believe users should be central in an xai piece and papers in this area should help the reader understand if the generated explanations meet users expectations even in a ml conference such iclr even for a 8page paper c the paper does not include any examples of the generated explanations it is hard to figure out if the claimed fidelity brings meaningful results in practice and the reader is left with this doubt aside from a fullfledged evaluation campaign see a above some examples would really help make the case d experiments do not include any evidence of whether zorro explanations work in practice besides as i mentioned above the author should probably clarify which audience they are targeting ie engineers debugging a model end users trying to understand the reasons for a specific model outcome etc e i was expecting experiments to assess the impact of tau the userdefined fidelity threshold the authors experiment with 98 and 85 and the choice of tau has limited influence but i would have expected empirical evidence for such statements ie experiments on a wider range of tau to assess size and fidelity if choosing a desired fidelity is not crucial the paper should show so f there are not experiments on runtime complexity the reader is left without evidence of how long it takes to generate an explanation for a target prediction g it is unclear if zorro achieves more faithful and also smaller explanations than gnnexplainer sec 42 suffers from clarity issues as long as figure 5 h some sections could be better clarified to help the reader understand important aspects of the work example in sec1 the notations paragraph would benefit from proofreading and rewording sec 42 could 
also be refined minor size matters for explanations but smaller does not always mean better for example in medical decision support systems some clinicians may prefer longer and more thorough explanations your mileage may vary figure 5 is poorly legible some typos along the way eg denotes the binary column vector of selected nodes and fs denote the binary row vector of selected nodes sec 1 effectively the complete input is presented as an input in 42 questions for the authors q1 how does zorro decide the size of each computational graph to work with ie the size of the neighbourhood q2 it is not entirely clear to me how zorro generates multiple explanations i could not find how this is done in algorithms 13 could you please clarify
providing posthoc explanations for a pretrained model i still believe the connection between the proposed method and data compression is not convincing 2 i still believe the novelty is limited hence i am keeping my score unchanged 1 gnnexplainer generating explanations for graph neural networks nips 2019 2 real time image saliency for black box classifiers nips 2017 3 learning to explain an informationtheoretic perspective on model interpretation icml 2018 4 rationalizing neural predictions emnlp 2016 docsepthe authors address the problem of explaining the behaviour of graph neural networks which operate on a computation graph based on their khop neighbourhood such as a graph convolutional network gcn the core idea is to identify for each node v in the graph the nodes and features of the graph most relevant to the behaviour of the gnn for node v that is the goal is to find a subgraph of the computation graph associated with a node v in the graph importantly the authors propose to test whether the chosen subgraph is relevant and the complement wrt the computation graph irrelevant by adding random noise on the parts deemed irrelevant by their method the method is evaluated through a metric called fidelity which is the agreement in label output between the behaviour of the original and masked gnn in expectation over the noise distribution while overall a wellwritten paper a source of confusion is the authors tendency to conflate the computation graph and the graph to which the gnn is applied the most important notion here is s defined as a subset of the computational graph when defining this it is important to also define precisely this computation graph what does it look like abstractly independent of the gnn instance used for instance there is a nice compact way to unify most messagepassing neural networks see eg https://pytorch-geometric.readthedocs.io/en/latest/notes/create_gnn.html when you look at this definition you see that there are several learnable functions phi etc the aggregation function and finally the classification layer now in your definition of a computation graph what are the nodes are the applications of the learnable functions each a node what about the aggregation one node again i think for the reader to fully understand what your explanations s look like this needs to be rigorously defined my assumption here was that the computation graph groups computations such that nodes in the computation graph and nodes in the graph to which the gnn is applied coincide generally i think you should spend more effort on section 3 the notation in the argmax statements in section 31 is also strange for instance s is defined as a pair so it should be written as argmax fvp f also what is the p here another worry i have is the efficiency of the approach if your average number of features and nodes that exceed the fidelity threshold is k the average size of s and the graph has n nodes and f features you need to evaluate the gnn kn + kf times to obtain an explanation for one node
faithful to the model ie the explanation achieves the fidelity value of 1 but achieving a fidelity of 1 does not mean that your explanation is faithful we could only know this if you removed the nodes and features and retrained the model with the same seeds/initialization there is an intricate interplay between the nodes and features during training of a gnn what you evaluate is how close to the original behaviour the gnn is when you remove certain nodes and features but i would question whether this is a proper definition of faithfulness my suggestion would be to also run experiments where you retrain gnns and check whether the behaviour is indeed such that removing the nodes and features your method deems unimportant leads to a minor change in behaviour the synthetic experiments of the gnnexplainer paper are not included but it makes sense to me to define synthetic graph classes where the presence of certain features/nodes is known to cause the node label by construction this way one can check whether those features are the ones identified by the xai method i would encourage the authors to also run these experiments and compare to the results from the gnnexplainer finally it is not entirely fair to compare other methods to yours through the notion of fidelity alone your method is defined to optimize for it as i mentioned above fidelity is one way to measure the quality of a reduced graph but by no means the only way to measure faithfulness as i have outlined above for instance it would also make sense to compare based on the measures introduced in the gnnexplainer paper
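to make the fidelity metric under discussion concrete the following is a minimal sketch of how such a score could be estimated for a hard node/feature mask the model interface the gaussian noise placed on the masked complement and the sample count are illustrative assumptions not the authors exact implementation

```python
import torch

def fidelity(model, x, edge_index, node_mask, feat_mask, target_node,
             n_samples=100, sigma=1.0):
    """Estimate fidelity of a hard mask: fraction of noisy masked inputs on
    which the GNN keeps its original prediction for `target_node`.
    node_mask: bool [num_nodes]; feat_mask: bool [num_feats] (shared by all
    nodes, as in the paper under review)."""
    with torch.no_grad():
        base_pred = model(x, edge_index)[target_node].argmax()
        keep = node_mask.float().unsqueeze(1) * feat_mask.float().unsqueeze(0)
        agree = 0
        for _ in range(n_samples):
            noise = sigma * torch.randn_like(x)
            x_masked = keep * x + (1.0 - keep) * noise  # noise on the complement
            pred = model(x_masked, edge_index)[target_node].argmax()
            agree += int(pred == base_pred)
    return agree / n_samples
```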
### Summary:
|
the paper provides a simple approach to explaining gnn predictions for each node by greedily selecting nodes or features in each computation graph so as to increase the fidelity score the fidelity score is based on comparing the original gnn output to what is obtained with noisy versions of the masked nodes/features while simple the approach seems somewhat inefficient so its efficiency should be assessed/characterized also several improvements to the evaluation expressed in the reviews/discussion eg human evaluation practical utility comparison to gradientbased methods would make the submission somewhat stronger
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors describe the construction of a 16tb multilingual text dataset purposely built for training large language models the work has been valuedriven and is inclusive of diverse language families and mindful of potential legal issues documentation detailed documentation regarding data sources processing and filtering applied code to reproduce publicly available ethics valuefirst approach to data collection can inspire the broader research community to make this standard practice significance datasets this size have traditionally not been published the bigscience corpus and the bloom model trained using it are a significant step towards democratizing access to large language models accessibility dataset currently not publicly available and authors stated intention to only publish subset of data would like for the authors to clarify the release strategy and governance docsepi assume that this is the first paper in which bigscience as a large multilingual set is introduced see relation to prior work where i express some uncertainties the paper describes bigscience corpus a very large 16tb multilingual dataset it is a very interesting initiative due to 1 its multilinguality and 2 care taken as far as quality is concerned large multilingual data set created in a principled manner and can be used for training better large language models and not just for english procedures for processing the data are clearly described and have interesting technical ideas and the scripts were made publicly available good comparison to other data sets using the tokenizer 43 it is not clear to which extent bigscience corpus is about science judging by the name id expect a resource much more focused on science interestingly this type of focus is not clearly expressed in the abstract or the first paragraph of introduction the paper is lacking clear evidence for bigscience corpus being superior to stateoftheart data sets pile c4 etc especially as far as the science aspect is concerned the analysis based on tokenization is an exception as i mentioned in strengths in particular is it really significantly more sciencey than pile c4 etc and not just in terms of raw statistics but as a more indepth assessment eg try to estimate whether you could squeeze more knowledge from this data set docsepthis paper presents the bigscience corpus which can be used to train large language model its contributions are two folds with the help of international collaboration between multidisciplinary researchers it provides a massive multilingual corpus including text and code which can be used to train bloom the complex processing tools of developing this corpus are released which may be helpful for future researchers to createadd more corpus data although numerous tools and texts are presented in this paper preprocessing examples and model performance trained on this corpus are missing since this paper provides a large corpus to train large language model its reasonable to provide performance of trained smallbig model for better insights overall the corpus presented in this paper is valuable and could be useful for other researchers this paper presents bigscience corpus which is used to train bloom this paper releases preprocessing tools to produce bigscience this paper gives a simple and first analysis at the bigscience corpus details of how to produce this big corpus are given this paper also gives a good example of collaboration between researchers around the world the bigscience corpus contains both document texts and code texts 
the bigscience corpus may have a high influence on the nlp research community since it has large scale clean data released preprocessing tools and a trained bloom model this paper lacks details of how to preprocess text data from various resources more demo examples can help readers understand how to deal with diverse text preprocessing problems the personally identifiable information section is too rough which might cause privacy problems in the future since this corpus contains so much text data and code data the analysis of this corpus is simple since this paper wants to show the power and help of a big corpus it would be clearer if the authors showed some trained model performance with different data sizes one more concern is that it contains many existing nlp datasets section 21 which might cause label leakage in subsequent zeroshot or fewshot testing of pretrained language models docsepthis paper describes an international effort of collecting a largescale highquality open dataset for language processing i believe this dataset could be of interest to a lot of people working on this topic and could be a solid ground for future research the effort of sourcing and filtering the corpus is detailed many tools are used and developed i think this could also help future works the issues of license of code and open access documents are considered which i think is the most critical factor in collecting such a huge dataset and training a model on it overall i believe this paper could be of interest to a lot of people in fact there are already a lot of people who are interested in this work and it could be the cornerstone of future research i would recommend strong accept for this work 1 the development of this dataset is highly collaborative and open 2 many interactive tools are developed during the dataset construction and could be helpful for future research 3 the data source selection considers many aspects and helps in reducing biases in the proposed dataset i spot no major weakness in this work minor 1 the submitted main text does not have line numbers 2 i cant help but notice that the number of data sources for china is only 5 and for japan it is 3 https://huggingface.co/spaces/bigscience/sourcingcatalog why is this imbalanced i think there are a lot of internet users in china and japan hence more possible data sources maybe there is still a bias towards the eastern world docsepthe authors created a new multilingual corpus for training large language models they crowdsourced language corpora all around the globe and used them to create a new big one they will also share as far as im concerned part of it for further research this paper describes the process of creation of the dataset which contains crowdsourcing of corpora pseudocrawling deduplication filtering and cleaning an analysis of the data is also provided 1 a big open collection of diverse texts should be useful for the nlp community 1 datasets were chosen by native speakers 1 filtering of each language was done by a person proficient in this language 1 i liked especially the part where the authors investigate the subset in terms of fertility i think the tokenization part is often overlooked in language modeling and this kind of analysis shows that the authors are aware of that 1 i think the part on programming languages should be another paper they need a different approach for filtering and deduplication i think they tend to have a different set of biases eg towards student assignments in my opinion the topic of creating a corpus of code should be examined and described deeper than it is possible in this work 1 id like to see a size comparison to existing datasets 1 if i understand correctly the authors are not sure at the moment if data may be published without publishing the corpus this paper would be just an interesting analysis of internet content in terms of corpus creation docsepthe paper presents the 16tb multilingual dataset built for training the 176billionparameter bigscience large openscience openaccess multilingual language model bloom the dataset contains both natural languages and programming languages collected from two sources crowdsourced datasets and oscar a common crawlbased online data repository details of collecting cleaning merging and deduplicating the two data sources are presented in addition to the data processing pipeline a preliminary analysis of the distribution over languages document sizes filter values and a tokenizer analysis are conducted on the presented dataset the major contribution of this paper is to open source a large multilingual dataset with processing tools and details on data collection and processing to the community 1 the paper presents a 16tb massive multilingual corpus as a result of a 1year international collaboration containing curated data for 59 languages 46 natural languages and 13 programming languages which could be used to support the training of large multilingual language models 2 the paper releases the tools and code of the processing pipeline used in the curation of the dataset which could be beneficial to the community 3 the paper presents good documentation on the data sources and processing pipeline 4 native speakers are involved in the dataset curation which could help improve the quality of the dataset 1 the dataset covers only 46 natural languages and the motivation for choosing such languages is not well presented 2 regarding pii although the paper presents muliwai as a filtering tool in the appendix a rulebased approach was actually used additionally the paper does not empirically evaluate the effectiveness of the rulebased approach in comparison to the more sophisticated muliwai 3 it is also disappointing that the paper does not remove toxic texts with eg social bias discrimination etc 4 the paper conducts deduplication on the two data sources crowdsourced data and oscar separately however it is not clear if it is necessary to deduplicate between the two data sources 5 it would be very interesting to see how the data processing pipeline with different settings affects the finally trained llm
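the tokenizer fertility analysis praised in these reviews is easy to reproduce in spirit below is a hedged sketch that measures fertility as subword tokens emitted per whitespace separated word the checkpoint name the sample sentences and the whitespace word heuristic are assumptions for illustration not the papers exact setup

```python
from transformers import AutoTokenizer

def fertility(tokenizer, texts):
    """Average number of subword tokens per whitespace-separated word."""
    n_tokens = sum(len(tokenizer.tokenize(t)) for t in texts)
    n_words = sum(len(t.split()) for t in texts)
    return n_tokens / max(n_words, 1)

tok = AutoTokenizer.from_pretrained("bigscience/bloom")  # assumed checkpoint
sample = {"en": ["the cat sat on the mat"],
          "fr": ["le chat est assis sur le tapis"]}
for lang, texts in sample.items():
    print(lang, round(fertility(tok, texts), 2))
```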
### Summary:
|
this paper describes the creation of the bigscience corpus used to train the bloom model the paper describes steps taken to curate and construct the corpus as well as analysis of what it contains this is an impressive effort and blazes a trail among such data collection and governance efforts reviewers appreciate the democratization of a dataset of this size and generally appreciated the care taken in its construction such as the filtering of data by native speakers of each language the authors convincingly rebut a few points about pii criticisms which could be leveled against nearly any model at this scale and which require whole lines of research to address in complete detail and the aims of the corpus the most major point brought up is about the release process particularly the decision to release a subset of the dataset however i am satisfied with the authors responses particularly to reviewer ntin i also agree with the ethics reviewer that significant care has been taken in this project and although the data governance aspects of it are not discussed as much here an accompanying publication describes them in detail taken together these two papers are a model for how other efforts should proceed down the road finally in a similar vein there are a few comments about different choices that couldve been made eg inclusion of source code toxic content etc however unlike some neurips papers i dont think this project can be reasonably expected to jump through hoops for reviewers the main question is is the effort itself worthy of publication yes and is the documentation of the different parts of the effort clear and useful to the community in its present state also yes the decisionmaking process is laid out clearly and elements from the rebuttal could be integrated to strengthen the paper further and it seems clear that this effort is bestinclass in terms of transparency and other factors here
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors combine numerous automatic music transcription datasets to devise a training framework they train an offtheshelf t5 architecture on the combined data and outperform the current stateoftheart reported on individual datasets they report numerous experiments to demonstrate the robustness of the model on low resource datasets outofdataset transcription against taskinformed variations of f1scores including a novel multiinstrumentbased f1score introduced in the paper strong points contributions combining amt datasets to provide a unified training framework outperforming the relevant stateoftheart in amt widening the scope into multiinstrument transcription systematic analysis of model training under diverse setups comparison of numerous architectures against numerous evaluation metrics a novel musically relevant evaluation metric taking instruments into consideration outofdataset transcription experiments below i suggest minor improvements and future work an ideal amt system should be capable of transcribing multiple instruments at once i think the notion of ideal is illdefined any music transcription let it be human or machine annotated and carried for an engineering or musicological task in mind should be tailored for the analytic purpose the purpose does not necessarily encompass robust handling of multiple instruments timepitch precision or else for example the most crucial goal may be obtaining a music score like explained in carvalho r g c smaragdis p 2017 towards endtoend polyphonic music transcription transforming music audio directly to a score in 2017 ieee workshop on applications of signal processing to audio and acoustics waspaa new paltz ny usa pp 15155 and if the methods output notation is not satisfactory for human users the results will be far from desired the authors clearly demonstrate the case of amt having relatively low resources however they omit to discuss the even lower resourced studies applied to music out of 12 toneequaltemperament see suggestions below the authors should include at least the recent work such as holzapfel a benetos e 2019 automatic music transcription and ethnomusicology a user study in proceedings of the 20th international society for music information retrieval conference ismir delft netherlands pp 67884 v s viraraghavan a pal h murthy and r aravind statebased transcription of components of carnatic music icassp 2020 2020 ieee international conference on acoustics speech and signal processing icassp 2020 pp 811815 doi 101109icassp4077620209054435 the authors should contrast their metrics of choice and the novel metric in the paper with the mv2h metric proposed in mcleod a steedman m 2018 evaluating automatic polyphonic music transcription in proceedings of the 19th international society for music information retrieval conference ismir paris france pp 4249 figure 2 the color selection is not print or more importantly colorblind friendly i would suggest the authors rerender the figures see httpswwwnaturecomarticlesnmeth1618 for a reference section 32 it would have been better to introduce the explanation in the order of the tokens in figure 2 for readability onoff events are interpreted as noteon or noteoff it might be better to explain what onoff means eg a note is playedreleased for an audience unfamiliar with midi specifications audio is split into smaller nonoverlapping segments apart from the turnoff handled by the end of tie token does the model performance degrade around the edges of the segment 4 experiments labeling issues with 
certain datasets a few other dimensions could be different analytic purposes difference between the granularity of the transcriptions across the datasets transcriber consistencyreliability it could be helpful to exemplify such issues at least qualitatively in the appendix section c section d i think they should be appendix c appendix d ethical considerations i think the text in this section does not describe ethical considerations but technical constraints nit see httpglobalnotationorguk as another alternate representation for encoding microfinergrained music suggestions for future work entirely outofdomain a nitpick to rather suggest future work im afraid i have to disagree that leaving each dataset out is an entirely outofdomain setting the datasets share common instruments genres rhythmic structure temperament etc having said that it would be interesting to observe how mt3 behaves as the test data gradually becomes outofdomain eg instrumental vs voice inserting compound rhythms altering the timbreinstruments zeroshot transcription on a lakhneslike dataset or traditional music datasets meertens tune collection greek folk tunes in benetos holzapfels recent publications compmusic corpora in the future as the authors argue the characteristics and the annotation alignment varies between the datasets the work may open an exciting path towards studying data valuation for automatic music transcription eg which parts of the data are more informative inconsistent or erroneous ds3labss httpsds3labinfethzcheasemlhtml work may be of inspiration the paper is well written the literature review is extensive even though there can be several additions see the specific comments in the main review the work extends the scope of existing work in amt significantly the experiments are thorough and the authors declare that they will present the necessary code experimental setup and results in the cameraready version so that the work may be reproducible given the works strong points as described above i would like to recommend the paper for publication in iclr 2022 docsepthis paper proposes a multitask multitrack music transcription framework music transcription task has mainly been tackled individually for each instrument type however in this work the authors jointly trained the model using several datasets with different instrument types as a result the proposed model provides better transcriptional results compared to models trained on individual data sets the contribution of the paper follows three parts 1 multitask transcription model this is a good direction of tackling lowresourced transcription task 2 midilike representation for learning multiinstrument piano rolls similar ideas are explored in midibased music generation work recently and it is good to see this direction is also proposed in multiinstrument transcription task 3 multiinstrument f1 score metric since multiinstrument transcription task is somewhat new having this kind of metric will be a good for following researches the contribution of the paper is mainly written in the summary of the paper part i will write some questions in this section i think multiinstrument music transcription task can be regarded as two parts which are transcription addressing note and classification instrument as the performance of the transcription part can be enhanced apart from the instrument classification part when a large transcription data is used for training i wonder the performance of a model trained without instrument supervision without instrument parts 
in token this may give some insights about the effect of the multiinstrument token since the main target was on datasetwise experiment to measure zeroshot performance the main results are all reported in datasetwise however i wonder the performance difference between instrumentwise split for example piano bass guitar split no matter each instrument contains mixed dataset i think this will give some insights on instrumentlevel analysis overall the paper is wellwritten and i vote for accepting the paper the authors proposed dataset split model and some evaluation metric for somewhat new task docsepthis paper presents a universal automatic music transcription amt model agnostic to the number of instrument tracks the model is trained across various amt datasets each of which was used as a different amt task and less biased to the volume of individual datasets this is enabled by the novel midilike output data representation designed to include instrument tokens and also supplementary tokens such as tie or eos tokens the authors conducted comprehensive experiments with 6 datasets calculated not only regular accuracy metrics frame f1 onset f1 onsetoffset f1 but also multiinstrument f1 which is a new metric for the universal amt task they significantly improved the accuracy over all datasets and also showed that the model trained with mixed datasets are generally more effective than those trained with a single dataset this paper seems to be a great milestone in the amt research it is probably the first unified amt model that can take music audio with an arbitrary number of instruments most of all leveraging the ability of being trained with all types of amt datasets it achieves impressive improvement in the accuracy good parts drum is always a special track in midi because the same contains different drum samples over different midi notes it is great to have drum tokens explicitly added the introduction of end of tie section token is a nice idea considering the nature of music data which requires input audio to be segmented with a fixed length in the practical experiment setting the evaluation of instrument labeling is conducted as three different granularity of instrument grouping in particularly midi class based on the instrument family make a great sense the generalization test to outofdomain data with a series of leaveonedatasetout experiments is also valuable it emphasizes the strength of the proposed model and wellaligned with the universality it is interesting to see the performance of the commercial software melodyne which is known to be based on dsp in comparison to the previous models weak parts the main contribution is the midilike data representation for the output this representation was handdesigned rather than learned i understand that it is natural to design the output form manually and can be seen as a case of knowledge integration to learning models however the output representation was simply applied to an existing transformer model t5 small with minor change in other words technical novelty is weak the proposed model was not evaluated with datasets with vocal which is the most essential sound source in popular music this might be due to the lack of datasets which include vocal tracks in mixtures and their labels however the lack of vocal track evaluation limits the capacity of the model since vocal is one of the most expressive instruments it would be interesting to see how the model works for vocal with background instrumental sounds questions minor parts the demo examples 
show that the input audio and output midi are temporally aligned well i wonder if this is because of the absolute positional encoding what happens if relative positional encoding is used instead page 7 "if it is has the same pitch" should be "if it has the same pitch" this paper achieved a great milestone in amt the demo examples are very impressive and show great potential as a musical tool or a data generator for symbolic music modeling the main weakness is that the main contribution is designed by hand in other words the technical novelty is low also vocal tracks are not included in the experiment but this seems to be a matter of dataset availability docsepthe paper introduces a system called mt3 that performs music transcription with various types of datasets in the experiment the system showed stateoftheart performance in all the relevant metrics and on the selected datasets which are comprehensive should we really call it multitask im not sold that the difference in target instruments is significant enough to call it multitask experiment frame f1 is there any reason for the choice of 625 frames per second i might have missed this from the text how exactly were the singleinstrument metrics computed considering every note a single instrument if its not specified in the paper please add it its great that even melodyne was included in the experiment re table 3 please add the grouping strategies in the appendix or at least at the demo page i expect this work will be impactful in the field and as a consequence the grouping strategies would need to be shared somewhere appendix a seems to be a bit too simple to me t5 small model does it really define everything about the model very solid work a few more details about the experiment and the model would be great
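for readers unfamiliar with the midilike output these reviews discuss below is a hedged toy sketch of an mt3 style token stream for one audio segment with time program velocity and pitch events plus a tie section for notes sustained across segment boundaries the event names and value ranges are illustrative assumptions not the papers exact vocabulary

```python
def encode_segment(notes, tied_pitches, ticks_per_segment=512):
    """Toy MT3-style event encoding for one segment.
    notes: list of (tick, program, pitch, velocity), velocity 0 = note-off;
    tied_pitches: pitches still sounding from the previous segment."""
    events = [("tie", p) for p in sorted(tied_pitches)] + [("end_tie",)]
    for tick, program, pitch, velocity in sorted(notes):
        assert 0 <= tick < ticks_per_segment
        events += [("time", tick), ("program", program),
                   ("velocity", velocity), ("pitch", pitch)]
    events.append(("eos",))
    return events

# a C4 piano note struck then released, with an E4 tied over from the previous segment
print(encode_segment(notes=[(0, 0, 60, 100), (128, 0, 60, 0)],
                     tied_pitches=[64]))
```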
### Summary:
|
this work concerns automatic music transcription amt transcribing notes given the audio of the music the paper demonstrates that a single generalpurpose transformer model can perform amt for many instruments across several different transcription datasets the method represents the first unified amt model that can transcribe music audio with an arbitrary number of instruments all reviewers rated this paper highly and are excited about seeing it at the conference one reviewer noted that this paper seems to be a great milestone in the amt research it is probably the first unified amt model that can take music audio with an arbitrary number of instruments the reviewers had some suggestions and comments which appear to be addressed by the authors
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposes to train faultaware neural code rankers i would prefer to call them classifiers that can predict the correctness of sample programs without executing them using unit tests the rankers are trained to predict different execution information such as compileruntime error types eg an indexerror or a typeerror the paper shows that the trained rankers can significantly increase the passk accuracy of various code generation models eg codex gptneo gptj on popular benchmarks eg humaneval apps and mbpp significance the paper pointed out that current literature assumes that the unit tests are generally available to evaluate code generation models via execution and it is safe to execute the generated programs the paper argues that such assumptions are impractical in realworld software development making the work significant to investigate alternatives originality while i am unaware of any prior works that exactly do the same training rankers to predict errors the proposed idea is intuitive anyone can come up with the idea with careful thinking though i see this works technical contribution as thin i appreciate the authors effort to study the effectiveness of the proposed approach clarity the paper is written fairly clearly except for one crucial piece of information i did not feel anything important was missed also the paper should rather use classifiers in place of rankers quality overall the quality of the work is good however i felt the authors didnt go deeper in investigating stronger code representation models they chose codebert which is a weaker model in literature i would love to see a few more models being investigated as rankers this was important since there is a significant difference between pass100 and ranked pass1 on the benchmarks the major limitation of this work is the conflict between the motivation of the work and what is being proposed while the paper motivates that in the real world we cannot assume that there will be unit tests readily available to evaluate models via execution however the paper used the apps dataset composed of unit tests the authors didnt discuss how their work could be leveraged in the real world for example can we use the authors proposed idea for generic programming not competitive programming only i would prefer to see some discussion docsepthis paper complements a deep learning based program synthesis module with a separate fault aware ranker framework that learns execution pattern of the generated program without actually executing it the work is inline with recent advancements in large language models that show the inherent understanding of the problem by the model but its inability to search for the correct user intent based on the first greedily sampled decoder output hence it is necessary to use complex search strategy which in the case of this paper happen to be another neural network the results shown in the paper show consistent improvements in potential user experience the paper has some noteworthy strengths they are the first to effectively use neural techniques to classify intent errors for general purpose programs and do not restrict themselves to simpler domains this is an important finding as it can lead the community into building better models for detecting code errors which can be used to judge model generated code as well as human written code they set a strong ground work for detecting code errors using deep learning models and show the potential of such approach by experimenting over a diverse set of domains and 
diverse set of sota models finally they show the effectiveness of complementing such synthesis models with ranker systems thereby improving their applicability in real world scenarios the paper show the effectiveness of the approach in detecting bugs in model generated code thereby improving those systems the presented results are to some extent expected given that the model is augmenting the strong generative models with additional data of detecting intent errors the system is also developed with the motivation of detecting errors early when they are generated by a generative model such to avoid unnecessary scenarios resulting from executing buggy codes it is not exactly clear if running the system in a sandbox environment will completely remove any necessity of using this approach finally the experiments were run solely with the purpose of improving the performance of code generation models it will be interesting to find the effectiveness of the ranker in detecting bugs on unseen real code the authors have adequately addressed the limitations some preliminary analysis on the limitations discussed can benefit the reader in understanding the robustness and current state of the error detection module developed by the authors docsepthis work trains a model to predict errors in generated code and rerank the generations current models are often evaluated using the passk metric which assumes that the unit tests can be run on k generations to rank them they motivate their method with these arguments 1 not test dependent it could fail on unseen tests 2 no burden to create tests 3 no need to install dependencies 4 no security risk linked to execution their model is trained on a dataset generated from apps where unit tests are available but the model learns to find errors from the code only they evaluate it for reranking the outputs of language models on apps validation and test datasets humaneval and mbpp they substantially improve the pass1 scores of codex gptj and gptneo 13b and 125m params on several benchmarks overall i found the method in this paper simple but their motivation is compelling it is wellwritten and it is well executed strengths clear and wellwritten the motivation for this work is clear and compelling executing many generated snippets is often not doable in practice even though their reranking is still far from perfect as shown by the difference with the pass5 they substantially improve the pass1 for every model and dataset showing the robustness of their method they give good insight into their model the exec1 improvement shows that execution errors are understandably easier to find weaknesses limited technical novelty the datasets they tested their method contain mostly functions from programming contests it is fine since it would be more difficult to test it on real data when no unit tests are available however they mention real usecases in the introduction such as use in vscode and i believe that their method may not transfer as well to real data with expected inputoutputs that are more difficult to infer i would have liked to see a discussion about that
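as a concrete illustration of the reranking step described in these reviews here is a minimal sketch score each of the k sampled programs with a learned classifier over execution outcome classes and return the candidate with the highest predicted probability of no error the ranker interface and the index of the no error class are assumptions not the papers released code

```python
def rerank(ranker, candidates, no_error_idx=0):
    """candidates: list of program strings; ranker(code) returns a probability
    distribution over execution-outcome classes (no error, type error, ...)."""
    scored = [(ranker(code)[no_error_idx], code) for code in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[0][1]  # ranked pass@1 is then evaluated on this one program
```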
### Summary:
|
the paper was wellreceived the main idea is fairly simple but the problem is important and the writing and empirical evaluation are solid based on the reviewers advice i am recommending acceptance please make sure to incorporate the reviewer feedback into the final version
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper studies the use of channel suppression in improving robustness to adversarial examples the authors make a convincing illustration in section 3 on how adversarial examples tend to activate more channels compared to natural examples and adversarial training is not effective in reducing them this provides a convincing motivation to their design of the channelwise activation suppression cas module their cas module is also effective in improving adversarial robustness when used in conjunction with different adversarial defense methods including adversarial training trades and mart i think this paper is of high quality but i do have several questions on the details 1 in section 41 there is a difference in how the mask m is produced in training and test phase how important is it to have the correct y available for the mask as oppose to haty from the channel predictions for example it might be difficult to predict the target class from the lowlevel features eg block 2 channel features leading to inaccurate haty for channel suppression could this be a reason for lower performance of inserting cas into block 2 in table 3 2 just to confirm are both losses ce and cas in eq 5 taken into account in the generation of adversarial perturbations with fgsm and pgd 3 in table 2 what does it mean to have cas without channel suppression is it effectively just a cnn with predictions made from features in different layers 4 do the authors have any intuitions on why having cas module alone on block 4 is better than having it on both block 3 and 4 in general i am leaning towards acceptance of this paper if the authors can address the above questions sufficiently after author response the authors have sufficiently addressed my questions and also the other reviewers questions i am keeping my score of acceptance docsepthe authors studied the behavior of adversarial examples from the channel view of activations which is very novel they focused on the magnitude and frequency of activations and found that stateoftheart adversarial defense adversarial training only addressed the magnitude issue but the frequency distribution issue remains this provided a novel perspective for us to understand why stateoftheart adversarial training method works to a certain extent but not so good then the authors proposed a channelwise activation suppressing cas to address the frequency distribution to further improve the adversarial robustness cas is generic effective and can be easily incorporated into many existing defense methods pros 1 the authors studied adversarial examples from a new perspective of channels in activations previous works focusing on activations usually assumed that each channel is of equal importance while the authors focused on the relationship between channels from two aspects of activation magnitude and frequency the authors found two novel characteristics of adversarial examples adversarial examples have higher activation magnitude and more uniformly activated channels compared to natural examples the findings were convincingly evaluated on different neural network architectures and different training methods this hints at a very interesting phenomenon 2 the proposed method is generic the authors found that the activated channels are still uniform under adversarial training that is some redundant and low contributing channels are still activated to suppress the redundantly activated channels the authors proposed channelwise activation suppressing cas training strategy it dynamically learns and 
incorporates the channel importance to the class prediction into the training process the motivation is very clear and the method is easy to follow more importantly cas can be widely applied to strengthen existing adversarial training approaches since it suppresses those less important channels 3 lots of experiments are provided to understand and evaluate the proposed methods the experiments covered lots of aspects including the channel suppressing effect of cas representation learning ablation studies and extensive robustness evaluation on whitebox and blackbox attacks the authors also tested adaptive attacks the strongest autoattack and the optimizationbased blackbox attack which definitely convinced me of the effectiveness of the proposed method overall the paper hints at an interesting phenomenon and inspires an indepth understanding of adversarial training the proposed method is elegant and generic the empirical evidence is solid and extensive cons 1 how does the activation threshold affect figure 2 2 in the testing phase the predicted class of the auxiliary classifier is used for the channel importance is it vulnerable to attacks if the predicted label is incorrect how will it affect the final performance 3 how well does the auxiliary classifier work with the limited information from the output of gap it is likely that the classifier performs poorly and thus results in bad channel importance weighting 4 cas could improve both natural accuracy and adversarial robustness why can cas achieve both and what is the overhead of cas docsepthis paper investigates adversarial robustness from the activation perspective specifically the authors analyzed the difference in the magnitude and distribution of activation between adversarial examples and clean examples the activation magnitudes of adversarial examples are higher and the channels are more uniformly activated by adversarial examples based on the above interesting findings the authors claim that different channels of intermediate layers contribute differently to the class prediction and propose a channelwise activation suppressing cas method to suppress redundant activations which can improve dnn robustness some highlights in this paper the cas strategy is simple and can be easily applied to existing models combining cas with the existing adversarial training methods leads to better dnn robustness the experiments are wellconducted and convincing the authors not only provided ablation experiments to verify the effectiveness of cas but also provided both the performance of the last epoch and the performance with early stopping which confirmed that cas can improve dnn robustness the paper is wellwritten and the idea is easy to follow however there are some downsides id like more details about how adversarial training inhibits the magnitude of activation what is the connection between this and network robustness is it that the closer the activation distribution of the adversarial example is to that of the clean example the better the robustness of the network it would be good to provide more discussions and explanations here overall the paper is easy to understand and interesting docsep summary this paper uncovers interesting phenomena of adversarial training ie more uniformly distributed adversarial data activations than those of natural data to force the channel activations of adversarial data to be similar to those of natural data the authors explicitly suppress the redundant channels by reweighting the channel activations reason for score overall i vote for accepting i like the uncovered phenomena of larger and more uniformly distributed activations of adversarial data than those of natural data technically this paper proposed effective training strategies ie the channelwise activation suppressing cas training strategy to enhance adversarial training pros 1 this paper provides an understanding of adversarial training from the channel activation perspective showing that adversarial training can reduce the magnitude of the activation of the adversarial data but fails to break the uniform activations by the adversarial data 2 figure 2 shows the efficacy of the proposed cas method for breaking the adversarial datas uniform activations compared with standard adversarial training cas can further suppress the redundant channel activations 3 the experimental evaluations are comprehensive showing the efficacy of the cas strategy across various adversarial training methods network structures and attack methods cons 1 what is the side effect of redundant channel activations specifically what is the side effect of uniform activations of the adversarial data would you mind explaining more 2 although cas successfully suppresses the redundant channels of the adversarial data cas also seems to suppress the activations of natural data is this the reason for the improvement on natural accuracy
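to make the mechanism discussed in these reviews concrete below is a hedged sketch of a cas style block an auxiliary classifier on globally pooled features yields per class channel importance and the weight row of the ground truth class during training or of the predicted class at test time rescales the channels the sigmoid gating and the module layout are assumptions for illustration not the authors exact code

```python
import torch
import torch.nn as nn

class CASBlock(nn.Module):
    def __init__(self, channels, num_classes):
        super().__init__()
        # auxiliary classifier; its logits also provide the extra CAS loss term
        self.aux = nn.Linear(channels, num_classes)

    def forward(self, feat, y=None):
        pooled = feat.mean(dim=(2, 3))           # global average pooling
        logits = self.aux(pooled)
        cls = y if y is not None else logits.argmax(dim=1)
        w = torch.sigmoid(self.aux.weight[cls])  # per-channel importance in (0, 1)
        return feat * w.unsqueeze(-1).unsqueeze(-1), logits
```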
### Summary:
|
this paper focuses on two new characteristics of adversarial examples from the channelwise activation perspective namely the activation magnitudes and the activated channels the philosophy behind it sounds quite interesting to me namely suppressing redundant channels from being activated by adversarial perturbations this philosophy leads to a novel algorithm design i have never seen before ie the channelwise activation suppressing cas training strategy the clarity and novelty are clearly above the bar of iclr while the reviewers had some concerns on the significance the authors did a particularly good job in their rebuttal thus all of us have agreed to accept this paper for publication please carefully address all comments in the final version
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors investigate whether a pretrained model can be enhanced when pretraining data are used during finetuning they develop an approach that performs data selection to choose an appropriate portion of pretraining data to enhance the phase of finetuning specifically the authors propose a selection algorithm to obtain a subset from pretraining data closest to the target data by solving an unbalanced optimal transport uot problem which chooses data classes from the pretraining set whose distributional distance to the target data set is small strength the paper is well written the author provides both theoretical and empirical investigations the proposed selection method considers to handle pretraining data with and without labels the paper has an extensive comparison with other strategies weakness the reason why source data can help for target tasks is not well understood the theoretical study does not provide clear insights on how similar the source datasets affect the finetuning performance it is not clear whether the target task can always benefit from the techniques the author discussed the impact of the domain gap however there is no clear criteria whether the method should be used for example will a medical dataset benefit from the imagenet pretraining data there are also methods that directly select the data that can benefit the target task which are not discussed the limitation is not discussed docseppretrained models are widely adopted for various downstream applications and have been demonstrating superior performance a defacto technique to utilize such a model is simply finetuning the pretrained model to a small downstream tasks dataset this paper proposes a way to improve this process by selecting a subset of relevant pretraining data and training together with the downstream training data strength employing uot unbalanced optimal transport to select the subset of the pretraining data is novel and was very effective from their empirical evaluation it is quite surprising that utilizing the pretraining data can improve the finetuning performance this much weaknesses selecting the subset of the pretraining data for the downstream task is not novel as described in line 92 authors do not have many baseline methods in their empirical evaluation other than very simple ones i encourage they compare their work to comparable methods such as 1 or methods that actively prevents catastrophic forgetting 23 3 was not evaluated on the vision but in a similar fashion minor as noted in the limitation sometimes we do not have the access to the pretraining data in the contrary 3 does work without the pretraining data 1 chakraborty shuvam et al efficient conditional pretraining for transfer learning proceedings of the ieeecvf conference on computer vision and pattern recognition 2022 2 hayes tyler l et al remind your neural network to prevent catastrophic forgetting european conference on computer vision springer cham 2020 3 chen sanyuan et al recall and learn finetuning deep pretrained language models with less forgetting arxiv preprint arxiv200412651 2020 commonly we do not have access to the pretraining data because of various reasons cf access storage space time etc this limits the use of this method in such scenarios docsepthe paper shows how leveraging some pretraining data during finetuning can boost a models pretrain finetuned performance the authors suggest a few methods to this end for both the scenario when pretraining data is unlabeled ie selfsupervised learning as well as labeled 
selecting pretraining data by label randomly and via similarity with finetuning data by way of optimal transport the latter is the most innovative and performs the best with several key assumptions the most important of which is the bound on the distance between the finetuning and pretraining loss gradients the paper provides bounds for the excess risk under normal finetuning as well as under their proposed modified finetuning loss they compare against a single recent baseline cotuning and study image classification on 8 datasets using resnets pretrained on imagenet and do several ablation studies strengths experiments have decent coverage selfsupervised supervised pretraining limited data 8 datasets pretty solid ablation studies proposed method with uot selection generally outperforms the other methods excess risk bound shown seems semi interesting weaknesses novelty is limited while i cannot comment extensively of how noteworthy the theoretical analysis is the suggested approaches for selecting pretraining data seem straightforward only cotuning is compared against though this method is claimed to be the most competitive recently and only in the supervised setting the proposed method with uot selection loses to cotuning on 28 datasets performance is measured as the best among 3 trials running this for more trials and reporting error bars would make the trends more convincing more experiments would strengthen the paper the authors do mention limitations in the conclusion section which i felt was adequate however more discussion around the assumptions made for the theory would be nice how strong are the assumptions do they hold in practice docsepthis work studies the problem of using data of the pretraining task pretraining data for the finetuning stage they first analyze the excess risk of the target task when the pretrained model and the pretraining data are used and show that the proper use of pretraining data can tighten the bound of excess risks for selecting the proper subset of pretraining data they propose a similaritybased selection method based on the distributional distance between pretraining classes and target data their empirical results on several imageclassification tasks validates the findings of the previous analysis and the several improvements of their method over the conventional finetuning without pretraining data problem setting the use of selected source data to improve target task has been touched before in the transfer learning literatures this paper studies a seemly new setting both pretrained model and pretraining data are used in the finetuning task though its new my question are how important is this setting and under which scenarios is this setting useful to be specific i see the overlap of pretrained model and the pretraining data if the selected pretrained data is more valuable than the pretrained model why dont we train the model from pretraining data and the finetuning data otherwise as the purpose of the pretrained model is to provide an initially good and general representation if the pretrained model is welltrained it already well utilized the pretraining data also as training from scratch and pretraining are usually expensive and intensive does the use of large pretraining data contradict to the purpose of finetuning in the experiment the paper use1 up to 100 classes of imagenet which is approximately 100k samples while the target data eg stanford cars has much less data analysis the lemma and theorem are simple but useful to illustrate the idea of 
this paper from my understanding the main contribution of these analyses is to show the usefulness of proper pretraining data selection rather than to study the defined problem in general similaritybased method the method is intuitive one concern is how kmeans on highdimensional data is useful to provide distinguishable clusters especially with euclidean distance if these clusters are uniformly sampled from the data the method is likely to behave like random selection another question i have is the justification of the real effect of pretraining data since the paper simply incorporates the pretraining data via an extra regularization term loss in the finetuning its intuitive that this regularization can at least achieve better performance than the original loss finetuning with proper regularization parameters and the experimental results also show only marginal improvements of the modified loss over the standard finetuning sometimes even lower performance table 1b clarity the paper is pretty clear to follow and easy to understand quality writing can be improved in the paper 2 the statement from the perspective in lines 60-63 is mostly duplicated with lines 64-66 some statements need to be provided with evidence and more precise context for example lines 32-33 training from scratch never matches the performance of finetuning minors there are some typos eg datausing figure 3 caption images line 172 should be samples experiments some results seem to be not representative to me for example fig 3a shows that the 100class setting provides the best overall performance of the method while other settings result in lower performance than the labelbased selection itll be better to provide more comparison and robust metrics for table 1 while table 1 shows the improvement of using the proposed method the improvement is marginal and it is unclear whether the gain is from the pretraining data or from proper parameter tuning as mentioned above all considered target tasks have a smaller number of samples compared to the pretraining data i think its better to show more settings where the pretraining data is much smaller baselines i suggest the authors try simple but related baselines eg training on both pretraining data and target data without pretrained models some results in table 1 are far below the best transferlearning results on the target datasets for example on stanford cars the latest transfer learning may achieve up to 96 accuracy what happens if we use stronger pretrained models in figure 3a 2 why dont we have the labelbased line in figure 3b what is the accuracy gap i do not see the limitation and negative societal impact sections please see my mentioned limitations above for the societal impact the authors may consider the impact of the work on utilizing more pretraining data than standard finetuning
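for concreteness here is a hedged sketch of the similaritybased selection idea using the pot library rank pretraining classes by the transport mass they receive against target features under an unbalanced sinkhorn plan and keep the top k the feature spaces the regularization values and the top k rule are assumptions for illustration

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def select_classes(class_means, target_feats, top_k=100, reg=0.05, reg_m=1.0):
    """class_means: [n_classes, d] source-class centroids; target_feats: [m, d].
    Returns indices of the top_k pretraining classes closest to the target."""
    a = np.full(len(class_means), 1.0 / len(class_means))
    b = np.full(len(target_feats), 1.0 / len(target_feats))
    M = ot.dist(class_means, target_feats)           # squared Euclidean costs
    G = ot.sinkhorn_unbalanced(a, b, M, reg, reg_m)  # [n_classes, m] plan
    mass = G.sum(axis=1)                             # mass shipped per class
    return np.argsort(-mass)[:top_k]
```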
### Summary:
|
the paper studies reuse of source data originally used for pretraining in the finetuning phase due to the difference between source and target data use of the entire source data for finetuning can degrade generalization for the target task however the paper shows that by carefully choosing a subset of the source data the generalization performance can exceed what finetuning on target data alone can achieve the scheme used for subset selection is based on unbalanced optimal transport and is theoretically justified via a theorem in the paper empirical results on different datasets show that the proposed scheme indeed adds some gain in generalization the authors and reviewers were engaged in active discussion reviewers raised interesting questions including when the source data can really benefit learning the target task choice of neural architectures relations to catastrophic forgetting sensitivity to hyperparameters usefulness of euclidean distance for clustering in high dimensions and practicality of the assumption that both the pretrained model and its data are available at finetuning time the authors provided a thorough answer to these questions reviewer 51vh who was the most skeptical raised their score after the rebuttal while the papers final score ends up being borderline all the scores are on the accept side i think the contributions of the paper are interesting enough to be published and i recommend accept i encourage the authors to incorporate the feedback they received from the reviewers in the final version of the paper
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper describes a new large scale dataset of aligned midi and audio from real piano performances and presents experiments using several existing stateoftheart models for transcription synthesis and generation as a result of the new dataset being nearly an order of magnitude larger than existing resources each component model with some additional tuning to increase capacity yields impressive results outperforming the current stateoftheart on each component task overall while the modeling advances here are small if any i think this paper represents a solid case study in collecting valuble supervised data to push a set of tasks forward the engineering is carefully done wellmotivated and clearly described the results are impressive on all three tasks finally if the modeling ideas here do not the dataset itself will go on to influence and support this subfield for years to come comments questions is maps actually all produced via sequencer having worked with this data i can almost swear that at least a portion of it in particular the data used here for test sounds like live piano performance captured on disklavier possibly im mistaken but this is worth a double check refering to the triple of models as an autoencoder makes me slightly uncomfortable given that they are all trained independently directly from supervised data the maestrot results are less interesting than they might appear at first glance given that the transcriptions are from train the authors do clearly acknowledge this pointing out that val and test transcription accuracies were near train accuracy but maybe that same argument could be used to support that the pure maestro results are themselves generalizable allowing the authors to simplify slightly by removing maestrot altogether in short im not sure maestrot results offer much over maestro results and could therefore could be omitted docsepthe paper addresses the challenge of using neural networks to generate original and expressive piano music the available techniques today for audio or music generation are not able to sufficient handle the many levels at which music needs to modeled the result is that while individual music sounds or notes can be generated at one level using tools like wavenet they dont come together to create a coherent work of music at the higher level the paper proposes to address this problem by imposing a midi representation piano roll in the neural modeling of music audio that serves as an intermediate and interpretable representation between the analysis music audio midi and synthesis midi music audio in the pipeline of piano music generation in order to develop and validate the proposed learning architecture the authors have created a large data set of aligned piano music raw audio along with midi representation using this data set for training validation and test the paper reports on listening tests that showed slightly less favorable results for the generated music a few questions and comments are as follows midi itself is a rich language with ability to drive the generation of music using rich sets of customizable sound fonts given this it is not clear that it is necessary to reproduce this function using neural network generation of sounds the further limitation of the proposed approach seems to be the challenge of decoding raw music audio with chords multiple overlayed notes or multiple tracks midi as a representation can support multiple tracks so it is not necessarily the bottleneck how much does the data augmentation audio augmentation 
help

this paper combines state of the art models for piano transcription symbolic music synthesis and waveform generation all using a shared pianoroll representation it also introduces a new dataset of 172 hours of aligned midi and audio from real performances recorded on yamaha disklavier pianos in the context of the piano-e-competition by using this shared representation and this dataset it is able to expand the amount of time that it can coherently model music from a few seconds to a minute necessary for truly modeling entire musical pieces training an existing state of the art transcription model on this data improves performance on a standard benchmark by several percentage points depending on the specific metric used listening test results show that people still prefer the real recordings a plurality of the time but that the syntheses are selected over them a fair amount one thing that is clear from the audio examples is that the different systems produce output with different equalization levels which may lead to some of the listening results if some sort of automatic mastering were done to the outputs this might be avoided while the novelty of the individual algorithms is relatively meager their combination is very synergistic and makes a significant contribution to the field piano music modeling is a longstanding problem that the current paper has made significant progress towards solving the paper is very well written but there are a few minor issues eq 1 this is really the joint distribution between audio and notes not the marginal of audio table 4 what do precision recall and f1 score mean for notes with velocity how close does the system have to be to the velocity to get it right table 6 nll presumably stands for negative log likelihood but this should be made explicit figure 2 are the error bars the standard deviation of the mean or the standard error of the mean
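on the eq 1 point above the intended factorization is presumably the standard one below — a hedged reconstruction since the review only paraphrases the equation and the paper's exact notation is not shown here:

```latex
% presumably the paper factorizes the joint over audio and notes as
p(\mathrm{audio}, \mathrm{notes}) \;=\; p(\mathrm{audio} \mid \mathrm{notes})\, p(\mathrm{notes}),
% whereas the marginal of audio alone would require summing the notes out:
p(\mathrm{audio}) \;=\; \sum_{\mathrm{notes}} p(\mathrm{audio} \mid \mathrm{notes})\, p(\mathrm{notes}).
```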
### Summary:
|
all reviewers agree that the presented audio data augmentation is very interesting well presented and clearly advancing the state of the art in the field the authors rebuttal clarified the remaining questions by the reviewers all reviewers recommend strong acceptance oral presentation at iclr i would like to recommend this paper for oral presentation due to a number of reasons including the importance of the problem addressed data augmentation is the only way forward in cases where we do not have enough training data the novelty and innovativeness of the model and the clarity of the paper the work will be of interest to the widest audience beyond iclr
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper extends the method and theories from 1 below to the case of multiple sources it is nice to have theories for the generalized case of multiple sources experiments are also good validating their methods practical performance 1 nicolas courty rémi flamary amaury habrard and alain rakotomamonjy joint distribution optimal transportation for domain adaptation in advances in neural information processing systems 30 pages 37303739 2017 this work is too incremental from 1 they do compensate it with careful experiments however more theoretical or methodological contributions would strengthen the paper i think settings like federated learning would benefit most from the multiple source domain adaptation considered in this paper the authors can consider developing an algorithm that solves their problem msdawjdot in the fl setting this would also better motivate their problem

1 authors clearly point out the differences between their proposed method and existing methods 2 proposed solution has a clear motivation and authors back their claims using both theoretical and empirical results 1 missing literature some of the related problems like domain generalization should briefly be touched and some methods that use distribution similarity or optimal transport in the domain generalization literature should be included 2 some missing details like tuning and time complexity discussions 1 missing literature some of the related problems like domain generalization should briefly be touched and some methods that use distribution similarity or optimal transport in the domain generalization literature should be included in the related work for example blanchard et al 2021 use a similarity between distributions to address the issue one may use the optimal transport type of metric to get these distances/similarities between distributions 2 how to perform the validation of the parameters including early stopping we propose here to use the sum of squared errors sse between the target points in the embedding and their cluster centroids is this well accepted in the literature if yes would it be possible to cite it how were baselines tuned 3 in table 1 why is target performing so much worse in the dslr method 4 one thing that was missing from the discussion was the time complexity of the proposed method and other baselines can authors give some information about how long was the training/optimization for experimental results in the paper 5 interestingly msdawjdot can even outperform target due to its access to a larger number of samples and since we have access to labels for target and baselinetarget we validate the model by using the classification accuracy on the target validation set making those two approaches clear upper bounds on the attainable performance for each dataset these two statements are contradictory can authors clarify these 1 blanchard gilles aniket anand deshmukh urun dogan gyemin lee and clayton scott domain generalization by marginal transfer learning journal of machine learning research 22 2021 155

the method looks like a clear improvement over other methods for multi source domain adaptation the motivation makes sense the experiments are done correctly and described in enough detail the only comparison is with other jdotstyle methods what about comparison with other da methods such as dann ganin et al 2016 rtn long et al 2016 mdd zhang et al 2019 etc these are all single source algorithms but they can be applied in the multi source setting in the same way as cjdot table 2 the results from
montesuma and mboula 2021 use the wrong table from that paper they included both a music genre recognition mgr and a musicspeech discrimination msd task in their table 2 this work is doing msd and should therefore compare to their msd results lemma 1 seems trivial and is not specific to convex combinations of source distributions at all minor issue two different meanings of the same symbol are used as the weight for zz in the equation for d between 2 and 3 and as dual variables at the end of section 32 the appendix c2 seems to introduce new variants of the algorithm surprisingly montesuma and mboula 2021 also include results from an earlier version of this paper under the name wjdot i found a reference to an arxiv version of this uai2022 paper in their work this looks like an opportunity for creating a cycle in the citation graph why was the name changed to msdawjdot or is that a different algorithm the name is a bit of a mouthful

the main strengths of the paper are the following 1 the tackled problem is a relevant one and it is tackled in a new and fresh way when compared to existing approaches from the specialized literature 2 the theoretical analysis which provides a bound on generalization over the target source is extremely important for practically exploiting and applying to real world cases what is presented 3 the numerical experiments which have been performed are rich and well designed while achieved results are commented on in a clear and effective manner i see few weaknesses in this paper in particular the results of the performed numerical experiments do not clearly establish the proposed approach as superior when compared to existing methods from the specialized literature however it is clear that the proposed approach achieves results which are comparable to those achieved by competing methods and no unique method appears to be the winner under all numerical experiments settings with specific reference to what is reported in table 1 and in table 2 it is also clear that this could be normally expected for all machine learning algorithms i also would like to read more criticisms about your approach i mean i would have appreciated to read which are the weak points of the proposed approach from you i enjoyed reading your paper which tackles a relevant problem both theoretically and practically and it is well structured and clearly written i have no major criticisms but i would like to know more on what your thoughts are concerning the results achieved by the proposed method as reported in table 1 and in table 2 i also would like to know whether you think that testing robustness of your method with respect to how many sources are truly to be included into the linear combination is needed i mean what happens when the number of sources increases from a small number to a big one it may be that the target is in the convex hull of only some of the available sources what happens to your method under this setting
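to make the core idea discussed above concrete — reweighting several source distributions with convex weights and aligning the resulting mixture to the target with optimal transport — here is a minimal sketch using the POT library; it is an illustration not the authors' msdawjdot implementation and the function names and simplex projection used to update the weights are assumptions:

```python
import numpy as np
import ot  # POT: python optimal transport


def weighted_source_ot_cost(sources, target, alpha):
    """OT cost between a convex combination of source samples and the target.

    sources: list of (n_j, d) arrays, one per source domain
    target:  (m, d) array of target samples
    alpha:   (len(sources),) convex weights, alpha >= 0, sum(alpha) == 1
    """
    X = np.vstack(sources)
    # each sample of source j carries mass alpha_j / n_j, so the pooled
    # measure is the alpha-weighted mixture of the empirical sources
    a = np.concatenate([np.full(len(S), w / len(S)) for S, w in zip(sources, alpha)])
    b = ot.unif(len(target))
    M = ot.dist(X, target)      # squared euclidean cost matrix
    return ot.emd2(a, b, M)     # exact OT cost


def project_simplex(v):
    """euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)
```

the weights alpha could then be tuned by numerically estimating the gradient of `weighted_source_ot_cost` and projecting each update back onto the simplex with `project_simplex`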
### Summary:
|
meta review this paper proposes a method for multisource domain adaptation the basic idea is to find simultaneously an optimal transportbased alignment between the source and target distributions and a reweighting of the source distributions the method is an elegant extension of previous work with interesting theoretical advances and convincing empirical results
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this work builds on the previous work 1 and extends it for a more general scenario where per sample intermediate distribution shift is hard to quantify they propose gradual feature interpolation for the case where samples from the intermediate distribution are missing iterative selftraining fails in such cases the work presents results on synthetic and natural distributions to evaluate their claim 1 ananya kumar tengyu ma and percy liang understanding selftraining for gradual domain adaptation arxiv preprint arxiv200211361 2020 the problem is well motivated and illustrated through different experiments and toy examples paper is well written and easy to follow clarification needed in algorithm 2 matching scores are 1 for the same class so if there are false positives which happens a lot in real datasets then the alignment will be biased towards them leading to poorer student network training the choice of target examples to be mixed may create difficult examples early on it is not clear how random assignments can overcome this situation also the align function along with the lambda value will together decide the hardness of the intermediate examples created lambdascheduler tuning is not discussed properly will it depend on the gap between the two domains if yes then how should it be handled mismatch in lambdascheduler step size notation delta in algorithm 1 and sigma on page 3 figure 1 has gradual selftraining as a caption in one of the subplots but this method is not mentioned or cited the work puts forward an interesting way to gradually align two domains though different experiments are shown to support their claim there are a few clarifications needed as mentioned in the main review

this work proposes an iterative selftraining approach for unsupervised domain adaptation in particular the authors aim at gradually adapting a model trained on a source domain to the target domain based on the claim that previous work on this setting assumed that samples from distributions that represent gradual changes from the source to the target domain are available when adapting the model the authors proposed a strategy to generate such intermediate samples for cases where they are not available the introduced approach named gift consists in performing manifold mixup between representations of examples from the source and target domains considering an increasing value of the hyperparameter that accounts for the weight of the representation of the target domain example by doing so the authors claim an automatic curriculum is introduced in the training moreover since labels from the target domain examples are not available the authors also introduced a heuristic to pair the examples that are mixed gift was empirically validated on experiments with synthetic domain shifts on the cifar10 dataset and was shown to outperform iterative selftraining in terms of target accuracy experiments on two datasets presenting natural domain shifts were also performed this work considers the problem of gradual domain adaptation where a model should be gradually adapted from the source to the target domain i found the idea of generating data at a lowerdimensional space from intermediate distributions interesting and appealing from a practical perspective in the following i present my major concerns questions and suggestions it is not clear from the manuscript why gradual domain adaptation is a setting that should be considered in cases where no samples from intermediate distributions are available as far as i understand gradual
domain adaptation is a setting to be considered when the distribution yielding the data is evolving over time kumar et al 2020 and the goal is to incorporate this knowledge about the problem structure in the model adaptation process given that it is not clear to me why considering this setting in a case where the data distribution is not gradually shifting would make sense if this is the case why not directly use any other domain adaptation approach a major claim of this contribution is that by mixing up representations of both source and target domains examples from an intermediate distribution would be generated although i can get a rough intuition of what an intermediate distribution could be eg a mixture of source and target domains there is no definition or discussion regarding this in the manuscript i suggest the authors include in the manuscript a clear definition of what such distributions are as well as include evidence that manifold mixup is capable of generating samples from them experiments the authors mentioned that the decrease in confidence/accuracy observed on examples with higher levels of perturbation observed in figure 2 confirms that the reason behind the success of iterative selftraining is the implicit curriculum strategy although i understand that this observation indicates the existence of an implicit curriculum it is not clear to me why it indicates it is the reason behind the success of iterative selftraining it is not clear to me whether the results presented in tables 1 and 2 are good even though i understand that the goal of this work isnt to achieve stateoftheart results in benchmarks it is hard to assess the merit of the improvements reported in these tables without knowing how established baselines would perform in such test cases i strongly encourage the authors to include for comparison at least dann and cdan in table 3 the authors presented results obtained on two datasets they mentioned that in the case of camelyon17 hospitals 0 1 and 2 were considered as source domains while hospital 3 was the target however in table 3 it seems that only results with domain 0 as the source were reported why is it the case please clarify this and modify the text accordingly it is hard to tell whether the results presented in table 3 indicate a relevant improvement of selftraining approaches and the proposed gift in comparison to the considered baselines the authors did not report if more than one run was performed in case only one run was performed i dont think it is possible to draw conclusions from these experiments since models adapted via dann for example are known to be quite sensitive to the initialization in addition to the aforementioned concerns this manuscript presents several presentation and clarity issues that make it difficult to understand in the following i outline the major issues in section 2 the authors introduced the considered setting but there are several missing points regarding the introduced notation please properly define ns d k and nt moreover the authors did not specify the underlying assumptions of the proposed approach ie is the covariate shift assumption required is label shift allowed furthermore in section 2 the authors mentioned that the goal is to bridge the domain difference and learn a good classifier for the target domain what exactly does a good classifier mean in this sentence i believe the authors are considering the risk minimization setting from bendavid et al 2010 but please make this clear in the text it is quite difficult to parse the
information contained in algorithms 1 and 2 several variables were not introduced and there is no comment to explain what each line is doing summing this up with the fact that there is barely any explanation about the gift training procedure and the labelbased random alignment throughout the text it is hard to properly understand what the main contributions of this work are really doing moreover the clarity of the algorithms and the aforementioned contributions should be improved to facilitate the reproduction of the reported results minor it is hard to compare the results reported in tables 1 and 2 because they are placed too far apart in the manuscript i think it is possible to merge both into a single table the symbols ps and pt are used to denote different objects in the text in section 2 they are referred to as the source and target domains respectively while in algorithm 1 they denote source and target datasets respectively this work proposes an iterative selftraining approach for unsupervised domain adaptation in particular the authors aim at gradually adapting a model trained on a source domain to the target domain the main contribution is a strategy to generate examples from the socalled intermediate distribution between the source and target domains despite the reported empirical improvement over the considered baselines i found the motivation of the proposed approach unclear and i found that the main claim is not wellsupported ie is gift indeed capable of generating examples from intermediate distributions also what are such intermediate distributions moreover the manuscript lacks clarity in several aspects and i have concerns regarding the significance of the reported results since it seems only a single run was considered in each test case all in all my initial assessment is that this manuscript is not ready for publication yet

this work addresses domain adaptation da by a gift method gift consists of a manifold mixup technique which generates virtual samples by mixing up the features of source and target domain samples the mixup coefficient is annealed over time to bias towards the target domain another ingredient of gift is a coteaching strategy that lets two networks teach each other on a few small synthetic as well as natural image datasets gift outperforms a few baseline methods 1 this work seems to have certain novelty i havent seen papers that anneal the mixup weights however its unclear to me how much this strategy helps da i didnt find an ablation study that uses a fixed mixup weight 2 the technical novelty is a bit limited as its a simple extension of manifold mixup to convince readers of its practical value the authors should do more extensive experiments as well as ablation studies on natural images the two used datasets fmow and camelyon17 are not typically used to evaluate da the authors also should compare with many more existing methods one important related method is tent 1 1 tent fully testtime adaptation by entropy minimization iclr 2021 the technical novelty of gift is limited hence the authors should focus on showing the empirical benefits of gift however the experiments are highly insufficient with missing ablations and baselines moreover the evaluation datasets are not so popularly used therefore its hard to judge how much empirical value this work brings

the paper proposes to address the gradual domain adaptation problem by creating virtual samples from intermediate distributions by interpolating representations of examples from source and
target domains strengths 1 this paper is wellmotivated since the gradual domain adaptation is still an underexplored problem 2 the motivation and the overall idea of the paper is easy to follow weaknesses 1 novelty concerns a the main concern for this paper is the novelty the main idea to address gradual domain adaptation in this paper is to create synthetic data from source and target domains there are already several previous papers working on this idea such as dlow 1 gvb 2 fixbi 3 although the above methods are not proposed to directly solve the gradual domain adaptation problem they can be adapted to this problem to prove that the proposed gift is better the above methods should be compared after applying to the same problem references 1 rui gong et al dlow domain flow for adaptation and generalization cvpr 2019 2 shuhao cui et al gradually vanishing bridge for adversarial domain adaptation cvpr 2020 3 jaemin na et al fixbi bridging domain spaces for unsupervised domain adaptation cvpr 2021 2 technical detail concerns a lambda scheduler it should be critical to find the best way to transfer the intermediate data smoothly from source to target however the proposed lambda scheduler is a linear function with a fixed step size the step size could be different when facing different datasets making the current scheduler heuristic and not generalized b alignment method it is still unclear which method is adopted for the final model according to the context pseudorandom alignment or costbased alignment this part is confusing 3 experiment concerns a table 1 the number of selftraining iterations is different for iterative selftraining and gift 5 and 20 this would make the comparison unfair why not keep the same training setup b separating tables 1 and 2 is confusing it is unclear what the difference is between these two tables is the main difference in the total number of training steps 1000 vs 500 moreover it is not clear why we can claim gift is more robust than iterative selftraining for translated 50100 cifar10 iterative selftraining raises the accuracy from 0477 to 0658 when increasing the training step number from 500 to 1000 however gift drops the accuracy from 0832 to 0729 in this case it is weird to claim gift is more robust c figure 3 it is weird to compare with different numbers of teacher updates it would be great to provide an explanation about this d table 3 with the same baseline ie best a b c d gift does not have a clear improvement over iterative selftraining 4 other concerns a the publications in the reference section should be updated 1 dan hendrycks steven basart norman mu saurav kadavath frank wang evan dorundo rahul desai tyler zhu samyak parajuli mike guo dawn song jacob steinhardt and justin gilmer the many faces of robustness a critical analysis of outofdistribution generalization iccv 2021 2 ananya kumar tengyu ma and percy liang understanding selftraining for gradual domain adaptation icml 2020 3 behnam neyshabur hanie sedghi and chiyuan zhang what is being transferred in transfer learning neurips 2020 4 pang wei koh shiori sagawa henrik marklund sang michael xie marvin zhang akshay balsubramani weihua hu michihiro yasunaga richard lanas phillips irena gao tony lee etienne david ian stavness wei guo berton a earnshaw imran s haque sara beery jure leskovec anshul kundaje emma pierson sergey levine chelsea finn and percy liang wilds a benchmark of inthewild distribution shifts icml 2021 5 qizhe xie eduard hovy minhthang luong and quoc v le selftraining with noisy student 
improves imagenet classification cvpr 2020 6 barret zoph golnaz ghiasi tsungyi lin yin cui hanxiao liu ekin d cubuk and quoc v le rethinking pretraining and selftraining neurips 2020 the proposed method in this paper is not novel although creating virtual examples is a reasonable direction for addressing gradual domain adaptation i suggest the authors can still work on this direction but propose a more novel way to create virtual examples that contain better intermediate information the experimental results do not support what the paper claims and do not show significant improvement over previous methods therefore i recommend rejecting this paper

this paper introduces how to deal with the situation when intermediate distributions are unavailable the basic idea is to create virtual samples by interpolating source and target representations the effectiveness of the proposed method is evaluated and results in several interesting conclusions strength this paper is well written and easy to understand the proposed method is simple but effective the analyses of how why and when iterative selftraining is helpful can benefit the community which is important weakness 1 generating intermediate data in domain adaptation is not a new thing it is recommended to discuss with related works such as 12 2 as shown in table 3 the proposed method does not bring significant improvement over iterative selftraining it is recommended to perform evaluations on more datasets with natural distribution shifts 1 dlow domain flow for adaptation and generalization 2 unsupervised adaptation across domain shifts by generating intermediate data representations my major concern is the effectiveness of the proposed method on datasets with natural distribution shifts although this method works well on synthetic data its applicability to real datasets should be further justified and evaluated
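to make the mechanism these reviews debate concrete — mixing source and target *representations* with a coefficient that is annealed toward the target and self-training on pseudo-labels for the mixed features — here is a minimal sketch; it illustrates the idea only and is not the authors' gift code, and the encoder/classifier split, the label-based pairing heuristic, and the linear schedule are all assumptions:

```python
import torch
import torch.nn.functional as F

def gift_style_step(encoder, classifier, teacher, x_src, y_src, x_tgt, lam, opt):
    z_src = encoder(x_src)
    z_tgt = encoder(x_tgt)

    # pseudo-labels for the target batch from the frozen teacher
    with torch.no_grad():
        y_tgt = teacher(x_tgt).argmax(dim=1)

    # label-based alignment: pair each source example with a target example
    # sharing its (pseudo-)label when possible, a random one otherwise
    idx = torch.empty(len(x_src), dtype=torch.long)
    for i in range(len(x_src)):
        match = (y_tgt == y_src[i]).nonzero(as_tuple=True)[0]
        if len(match) > 0:
            idx[i] = match[torch.randint(len(match), (1,))].item()
        else:
            idx[i] = torch.randint(len(x_tgt), (1,)).item()

    # virtual examples meant to mimic an "intermediate" distribution
    z_mix = (1.0 - lam) * z_src + lam * z_tgt[idx]
    loss = F.cross_entropy(classifier(z_mix), y_src)

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

lam would be stepped from 0 toward 1 over outer self-training stages, with the teacher refreshed from the student between stages, which is where the reviewers' questions about the lambda scheduler and the alignment bias from false pseudo-labels apply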
### Summary:
|
most reviewers agree that the paper addresses a relevant problem however they also believe that the paper lacks in several points not well supported claims sometimes clarity and incremental novelty
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this work provides a clear explanation for the relationship between normalizations and adversarial transferability which further inspires the authors to propose a random normalization aggregation method where the adversarial robustness can be significantly improved strengths 1 using adversarial transferability to boost adversarial robustness is interesting and inspiring which is technically and methodologically sound 2 establishing the relationship between normalizations and adversarial transferability is technically sound and in my opinion it will make some impact on our community and give some insights to more researchers weaknesses 1 the connection between the mentioned novel viewpoint of using adversarial transferability and improved adversarial robustness should be highlighted ie why transferability can contribute to boosting robustness similarly why rna can contribute to boosting robustness moreover do these two reasons share the same inspiration 2 for the introduction to motivation the authors claim that line 47 to line 50 the performance gap between bn and other normalizations results from the adversarial transferability among normalizations which is a bit confusing and needs further justification specifically the diagonal results are evaluated using whitebox attacks and the robust accuracy is about 50 so that the gap may come from the difference between white and blackbox attacks as this is an essential part of inspiring the authors to investigate the relationship between transferability and normalizations line 50 i suggest the authors provide a detailed justification 3 one concern is that the robustness may come from the designed random selection operation so it is necessary to employ expectation over transformation eot 1 for the robustness evaluation to defend against the random operation typos $\tilde{x}$ makes eq 2 confusing 1 synthesizing robust adversarial examples please see strengths and weaknesses

the paper discusses the relation between normalization layers used in dnns and adversarial transferability the authors show that the choice of normalization layer highly influences the success rate of transferred adversarial examples in detail it is shown that adversarial examples transfer worse ie have a lower success rate of fooling the network if a different normalization layer is used in the attacked network this fact is used to motivate a novel technique to robustify existing neural network architectures the key idea is to randomly select the used normalization layer during inference experiments on cifar10 cifar100 and imagenet show that this approach is effective for the commonly used resnet18 and wideresnet32 strengths the paper proposes an effective clear and easytoimplement method that is wellmotivated by experimental and theoretical results the extensive experimental evaluation on resnet18 and wideresnet32 architectures using a plethora of attacks fgsm pgd20 cw mifgsm deepfool autoattack is convincing an ablation study is conducted to show the effectiveness of the different components and the importance of different combinations of normalization layers the paper presentation is well done and contains only a few errors see minor remarks used computational resources are mentioned weaknesses the used notation can be improved at some places 1 equation 1 $x \sim \mathcal{X}, y \sim \mathcal{Y}$ suggests that x and y are drawn independently from the dataset here $(x, y) \sim (\mathcal{X}, \mathcal{Y})$ would be correct also in algorithm 1 $(\mathcal{X}, \mathcal{Y})$ has to be a tuple instead of a set 2 in equation 3 $\hat{y}_{\mathrm{bgn}_i}^{k}$
suggests that $\hat{y}$ is a function in terms of bgn which is not the case i would suggest to use $\left[\hat{y}_{\text{bgn}}^{k}\right]_i$ instead 3 different symbols are used to denote multiplication equation 3 vs equation 8 the stated theorem 21 is hard to understand although i see some value in the theorem i think the authors should spend some time to reformulate it 1 it is stated that the gradient norm and $\beta$ in equation 7 can be bounded as however there is no $\beta$ in equation 7 only $\beta_a$ and $\beta_b$ 2 the variable t is introduced multiple times in a single sentence the level of adversarial transferability and attack success rate minor remarks typo adversarial transfersability in abstract hessian is sometimes not capitalized subscript and superscript gn and bgn should not be typeset in math mode line 96 extra period in front of references 5 20 limitations and potential negative societal impact have not been addressed

inspired by the limited adversarial transferability across different normalizations the authors proposed to involve randomness into the types of normalization layers and introduce an rna module that reduces the adversarial transferability in a predefined random space which improves the defense against adversarial examples to evaluate the effectiveness of their algorithm the authors provided experimental results on cifar10/100 and imagenet the authors studied a simple yet effective randomized mechanism with normalization layers in general the paper reads well and the presentation is clear the paper is original in that it studies the connections between normalization layers and adversarial defense the theoretical analysis well explains the principle of the proposed algorithm the paper includes a clear experimental setup and a meticulous comparison with other variant baselines as well as stateoftheart algorithms on popular benchmarks the results seem convincing despite its contributions i have several concerns 1 the authors mentioned that they adopt a random sampling strategy to utilize the adversarial transferability in random space however there exists a chance that similar paths are sampled during both attack and inference stage and there is no discussion of this scenario as well as the probability of it 2 besides normalization layers there exist wide components in the networks which can involve randomness such as weight parameters and architectures although the transferability evaluation in fig 1 shows that there exists a poor transferability of adversarial examples among different types of normalization layers it seems to me that this transferability also holds true for weight parameters and architectures there is no discussion of these variant baselines 3 the authors claimed that the smoothness of different normalization layers directly controls the adversarial transferability however it is hard for me to find corresponding evidence in the experiment section the limitations have been addressed and there is no potential negative societal impact of this work

researchers explore how different normalization layers affect adversarial transferability they provide a theoretical upper bound on the adversarial transferability between normalization layers then they propose a module named random normalization aggregation rna rna replaces the normalization layers in the network and samples normalization layers randomly at each forward pass this can generate an exponential number of possible paths which makes it harder for an attacker to exploit strengths 1 the paper is clear and easy to follow 2
theoretical part is sound 3 the idea to generate random normalization paths is new to the literature weaknesses 1 experiment section presents some suspicious results the results with resnet vs wrn are counterintuitive and results on autoattack vs pgd also do not make sense it can be a sign of obfuscated gradients additionally some of the reported results are not in line with known results see the questions section 2 no code/models are provided for evaluation of the results can the authors provide the code/models/github link for additional verification 3 no related work section specifically the authors did not address some related literature that explores the same field see questions section no authors should discuss the limitation of adversarial training in general and their method in particular
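the random-path mechanism the reviews describe is simple enough to sketch: a drop-in module that holds several normalization layers and samples one of them independently at every forward pass, so with L such layers and 3 choices each there are 3**L possible normalization paths; this is illustrative only, not the authors' rna code, and the choice of the three layer types and the group count are assumptions:

```python
import random
import torch.nn as nn

class RandomNorm2d(nn.Module):
    def __init__(self, num_channels, groups=8):
        # assumes num_channels is divisible by groups
        super().__init__()
        self.norms = nn.ModuleList([
            nn.BatchNorm2d(num_channels),
            nn.GroupNorm(groups, num_channels),
            nn.InstanceNorm2d(num_channels, affine=True),
        ])

    def forward(self, x):
        # a fresh normalization layer is drawn on every forward pass, so an
        # adversarial example crafted on one sampled path may transfer poorly
        # to the path sampled at inference time
        return random.choice(self.norms)(x)
```

note that this randomness is exactly why the reviewers ask for expectation-over-transformation attacks and for the probability that the attack and inference paths coincide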
### Summary:
|
this paper introduces the relation between normalizations and adversarial transferability and proposes a method using random normalization aggregation for enhancing adversarial robustness three reviewers agreed with the interesting idea thorough experiments theoretical analysis and the effectiveness so they gave acceptance scores however one reviewer iobb raised a concern on the results of autoattack aa and lack of indepth discussion on the results unfortunately ac and the reviewers failed to make a consensus on the decision during the discussion period ac carefully read the paper the rebuttal and the reviewers discussion the main remaining issue raised by reviewer iobb is that it is not clear why the results of aa are higher than pgd the authors provided more extensive experimental results focusing on comparing the results of aa and the breakdown of the four aa attacks and pgd also they conjecture these results come from handling adversarial transferability under a randomnessbased method via normalization aggregation ac also agrees with the authors and reviewer q4wp that the indepth analysis on the reason for the aa/pgd results is out of scope of this paper and these results are consistent with recent works on adversarial transferability so this issue might be left as future work because it seems that the contribution of this paper is enough for the machine learning community except the discussion on the aa/pgd reason ac recommends accepting this paper
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this is an interesting paper that seeks to propose novel machine learning methods to extract information within continuous and infinite fidelities to bolster the prediction accuracy the key novelty here is the means to develop an infinite fidelity coregionalization method by introducing a lowdimensional latent output and multiplying it with a basis matrix for solution output prediction the experimental data is comprehensive overall the work is well done and provides some new methodology to this field strength a novel infinite fidelity coregionalization method is proposed which improves upon existing models with finite and discrete fidelities the paper is wellwritten empirical results on different types of pde problems are provided weakness one of the main advantages of multifidelity models is the improved learning and sampling efficiencies therefore besides the comparison on accuracy it would be helpful to see the training time comparison for at least 1 of the problems it would also help the readers to assess the method if the authors can present a comparison when using different amounts of training data the discussions on limitations are adequate

the authors propose two novel methods for coregionalization ie projecting multiple grid resolutions onto a common grid similar to superresolution while previous works assume a discrete number of resolutions the proposed work contributes the first coregionalization with a continuous change in resolution the methods combine neural odes and gps to project lowresolution inputs parameters and bcs onto a higherresolution grid the main idea is that by training a neural ode to interpolate the meshes the node can learn from other meshes in the latent state and outperform discrete methods that learn one model per mesh the authors provide background methodology and support the claims with extensive empirical results the empirical results confirm that the proposed methods ifcode and ifcgpode both outperform discrete methods for coregionalization 1 strength 11 the broader research topic of mlbased surrogate modeling of pdes is significant to computational fluid dynamics climate modeling chemistry biology etc the narrow topic of coregionalization ie projecting data from various resolutions to a common grid or learning from data of various resolutions is relevant to practical settings the more narrow topic of infinite resolutions would allow for higher flexibility in using the method in practice 12 the authors provide extensive empirical results on five problem settings and compare to five relevant methods while the comparative methods could have been selected more broadly eg including gan flow or diffusionbased superresolution the selection seems sufficient the empirical results support the claims of the paper 13 the authors choose advanced methodology to handle gps in highdimensional settings 2 weaknesses 21 it is still a bit unclear what exactly the contributions are if zhe et al 19 and li et al 21 have already provided scalable gps for regression of latent outputs and bases the authors acknowledge this in l218 i am assuming that the paper is the first paper to do learningbased coregionalization with continuous fidelities section 4 needs to clarify that the matrix gaussian distribution is taken from a different paper and only applied to coregionalization here as far as i understood the clarity of the paper could be improved by adding a list of contributions at the end of the intro 22 the related works section is very detailed with respect to the most similar works the
related works section could be improved by mentioning a practical usecase of coregionalization and deterministic and stochastic superresolution methods with gans flows or diffusionbased methods 23 the method is quite complicated as it mixes matrix gps and neural odes i am not sure if introducing active learning to the mix would make for a very interesting paper as it might become really complicated to use this method in practice 41 the authors do not address limitations or potential negative societal impacts 42 it is not mentioned on which kind of grids or finite element topologies the proposed model would work i am assuming that it would only work for equispaced grids 43 it would be helpful to explain the implications of gaussian assumptions during modeling 44 the authors claim that the surrogate model is trained only using lowfidelity examples but we can still expect to obtain highfidelity predictions ie more accurate than the training data l389391 it seems to me that there is not sufficient evidence to support this statement and i would alter or explain it for a cameraready version i do not fully understand how the proposed model could create for example higherorder frequencies that have not been seen in the training phase

this paper introduced ifc an infinitefidelity coregionalization method for physical simulation they designed odebased modeling to capture information for continuous fidelities and combined it with gaussian processes or another ode to estimate the fidelityvarying bases the result shows their method outperforms the baselines among several benchmark prediction tasks at the highest fidelity level strengths 1 wellwritten paper in general 2 the experiment is extensive five benchmark studies included 3 the performance of the proposed model seems good weaknesses 1 i disagree with the statement in practice the fidelity choice is often continuous and infinite for all five experiments in the paper the data has finite and discrete fidelities in practice people pregenerate the simulation data the fidelity level is also preselected which is also finite and discrete although the performance of the proposed method is good 2 the definition of fidelity m seems confusing and not consistent in the background m looks like a discrete value 123 but starting from the model section m becomes a continuous value theres no explanation of how to map a fidelity level to an m value 3 only one evaluation metric nrmse is included no potential negative societal impact i can see
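to make the ode-over-fidelity idea the reviews discuss concrete here is a minimal sketch: a latent state evolves with a learned vector field over the *continuous* fidelity variable m and the prediction at fidelity m is a basis expansion of that state; this is an illustration with a plain euler integrator, not the authors' ifc code, and the network shapes and the treatment of m as a scalar in [0, 1] are assumptions:

```python
import torch
import torch.nn as nn

class FidelityODE(nn.Module):
    def __init__(self, in_dim, latent_dim, out_dim, hidden=64):
        super().__init__()
        self.encode = nn.Linear(in_dim, latent_dim)
        # learned vector field dh/dm = f(h, m) over the fidelity variable m
        self.field = nn.Sequential(
            nn.Linear(latent_dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, latent_dim),
        )
        # basis matrix mapping the low-dimensional latent output to the solution
        self.basis = nn.Linear(latent_dim, out_dim, bias=False)

    def forward(self, x, m, steps=20):
        h = self.encode(x)
        dm = m / steps
        for k in range(steps):  # forward euler integration from 0 to m
            mk = torch.full_like(h[..., :1], k * dm)
            h = h + dm * self.field(torch.cat([h, mk], dim=-1))
        return self.basis(h)
```

this also makes the second reviewer's complaint concrete: calling the model with any m in [0, 1] is well defined, but how a discrete fidelity level 1, 2, 3 in the data is mapped to that continuous m has to be specified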
### Summary:
|
the paper tackles the multifidelity simulation problem by modeling the grid variation with node coupled with a gp experiments on multiple physical simulators show better performance compared to baselines please also report computational efficiency and sample complexity in the final version
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper targets developing a new dfa method to replace bp for neural network model optimization in order to speed up the training process the paper is generally written clearly and relatively easy to follow my main concern is about the significance of the contribution of this paper 1 the novelty is limited this paper only simply combines two wellknown approaches bp and dfa together 2 the performance contribution seems not significant from the proposed approach in the implementation the authors only apply their approach to optimize a few top layers a majority of the layers in the nn model are still optimized via bp 3 the authors should provide more evaluations on different nn backbones and datasets to make the experiments stronger and more convincing

the paper proposes to use a combination of direct feedback alignment dfa and backpropagation bp to improve upon standard backpropagation to understand what is done consider the following feedback alignment is equivalent to backpropagation except that random but fixed weights are used in the backwards pass direct feedback alignment uses random backprojections directly to the layer of interest the advantage of dfa is that it bypasses the normal computational graph the advantage of this is that if compute is infinite all of these updates can be computed in parallel instead of pipelining them as is done in standard bp in the current paper the use of dfa for dense layers and bp for conv layers which is named cdfa is proposed in addition the paper also proposes a binarized version bdfa to limit memory consumption and communication it is claimed that the proposed techniques improve upon standard backpropagation overall the paper is easy to understand but i lean towards rejecting this paper because i am not convinced by the experimental evidence as outlined below the key issue is that the baseline appears to be weak additionally the main limitation of the proposed approach is that it can only benefit a very limited set of architectures positive points the authors did an excellent job of introducing bp fa and dfa in the paper this makes the core concepts and ideas accessible without having to delve through prior work the own contributions and the key idea are easy to understand limitations and possible improvements a core limitation is that recent networks do not have a combination of dense layers and convolutional layers in many cases the networks are fully convolutional this limits the applicability of the proposed combination of dfa and bp the use of additional networks would benefit the paper currently only vgg16 on cifar10 is used also the data augmentation strategy is not discussed of course it would be nice if additional datasets could be included as well but this of course depends on the computational resources the authors have available the key issue to me is that performance improvements for cifar are reported but i fear that the baseline accuracy for vgg16 might be a bit low if memory serves me well it should be able to achieve around 90 at least on cifar10 using vgg style networks i did a quick search and found http://torch.ch/blog/2015/07/30/cifar.html corroborating this but i did not verify this directly related to the previous point since this is an empirical paper describing the hyperparameter optimizations and final settings in detail can convince the reader that the study is executed correctly much of the information is missing now similarly i have trouble understanding section 41 and section 42 since i do not know the exact details of the experiments this can
be fixed easily however provide complexity estimates of the potential speedup or provide actual timing information although this might not be that meaningful without much additional work given that gpu kernels are often heavily optimized last year there was a submission to iclr about fixing the final output layer and only learning the convolutional layers if we consider that random projections work remarkably well and can be considered approximations of kernels it could be interesting to add a baseline where the fully connected layers are fixed and only the convolutional layers are trained the error signal can be propagated using standard bp fa or dfa methods but it would shed light on whether learning in the higher layers is actually needed or bp in the conv layers is sufficient minor possible improvements finally i would strongly suggest that the authors perform some additional proofreading there are quite a few strange formulations and spelling mistakes that being said it did not prevent me from understanding the manuscript so this remark did not factor into my judgement in addition to the remark above i would suggest removing the second paragraph from the introduction it feels out of place to me and the vanishing gradient effects are not discussed in the remainder of the manuscript the list of possible optimizers before the selection of sgd with momentum is not needed simply stating that sgd with momentum is used should be sufficient training from scratch instead of training from the scratch

this manuscript extends the direct feedback alignment dfa approach to convolutional neural networks cnn by 1 only applying dfa to fc layers with backpropagation bp in place for convolutional layers 2 using binary numbers for the feedback matrix originality wise i think 1 is a very straightforward extension of the original dfa approach by just applying dfa to places where it works it still does not solve the ineffectiveness of dfa on convolutional layers and there is not much insight obtained 2 is interesting in that a binary matrix is sufficient to get good performance empirically this would indeed save memory bandwidth and storage this falls into the category of quantization or binarization which is not super novel in the area of model compression the experimental results show that the proposed approach is better than bp based on accuracy however these results might be called into question because the shown accuracies on cifar10 and cifar100 are not stateoftheart results for example the top 1 accuracy of cifar10 in this paper is 8111 but with proper tuning a cnn should be able to get more than 90 accuracy see this page for more details http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html therefore though the claimed accuracy of the proposed method is 89 it is still not the stateoftheart result and it seems to be a lack of tuning for the bp approach to perform a similar level of accuracy the same conclusion applies to cifar100 in fact from figure 4 the training accuracy gets to 100 while the testing accuracy is around 40 for bp which seems to be overfitting with these results it is hard to judge the significance of the manuscript minor typos in equation 1 the letter i is overloaded
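for readers unfamiliar with the mechanism these reviews describe, here is a minimal sketch of a direct feedback alignment update for a dense layer — the output error is projected straight to the hidden layer through a fixed random (here binary ±1) feedback matrix instead of being backpropagated through the transposed weights; this is illustrative numpy, not the authors' implementation, and the dimensions, learning rate, and linear output layer are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out, lr = 128, 256, 10, 0.05

W1 = rng.normal(0.0, 0.1, (d_hid, d_in))
W2 = rng.normal(0.0, 0.1, (d_out, d_hid))
B1 = rng.choice([-1.0, 1.0], size=(d_hid, d_out))  # fixed binary feedback matrix

def dfa_step(x, y_onehot):
    global W1, W2
    a1 = W1 @ x
    h1 = np.maximum(a1, 0.0)     # relu
    y_hat = W2 @ h1              # linear output for simplicity
    e = y_hat - y_onehot         # output error

    # dfa: the hidden "error" is a random projection of e, not W2.T @ e,
    # so it can be computed without waiting for the backward pass of W2
    d1 = (B1 @ e) * (a1 > 0.0)

    W2 -= lr * np.outer(e, h1)
    W1 -= lr * np.outer(d1, x)
```

in the hybrid scheme under review only the fully connected layers would be updated this way while the convolutional layers below them receive ordinary backpropagated gradients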
### Summary:
|
this paper proposes a training algorithm for convnet architectures in which the final few layers are fully connected the main idea is to use direct feedback alignment with carefully chosen binarized ±1 weights to train the fully connected layers and backpropagation to train the convolutional layers the binarization reduces the memory footprint and computational cost of direct feedback alignment while the careful selection of feedback weights improves convergence experiments on cifar10 cifar100 and an object tracking task are provided to show that the proposed algorithm outperforms backpropagation especially when the amount of training data is small the reviewers felt that the paper does a terrific job of introducing the various training algorithms backpropagation feedback alignment and direct feedback alignment and that the paper clearly explained what the novel contributions were however the reviewers felt the paper had limited novelty because it combines ideas that were already known that it has limited applicability because it will not work with fully convolutional architectures that the baselines in the experiments were somewhat weak and that the paper provided no insights on why the proposed algorithm might be better than backpropagation in some cases regrettably only one reviewer r2 participated in the discussion though this was the reviewer who provided the most constructive review the ac read the revised paper and agrees with r2s concerns about the limited applicability of the proposed algorithm and lack of insight or analysis explaining why the proposed training algorithm would improve over backpropagation
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
1 for theorem 1 as the reviewer understands it for an optimization problem whose only critical point is a strict maximum it only has four outcomes which are listed in the theorem the result seems quite intuitive and provides very limited understanding of the problem please list other possible outcomes for the general problem and state it in such a way that the paper finds some impossible outcomes which can be excluded for consideration 2 theorem 2 states that a twoplayer game with a specific loss cannot converge to the only strict minimum as the major result of the paper this finding however has very limited value in practice the specifically designed loss is not justified to be used in reality and the result is built upon this loss if the loss has nowhere to be found in use investigating such a loss inspires the community very little 3 similar to theorem 2 it states in theorem 3 that a twoplayer game with a specific loss cannot converge to the only strict maximum even more unclear to the reviewer is whether such a result can provide useful value to the community or not a game with only a strict maximum cannot converge to the maximum is not quite interesting to the community as the reviewer believes some minor comments the paper uses the term simultaneous quite a lot in the first part of the paper this is quite confusing for the readers as this term is not quite acknowledged

thank you for the explanation and the update of the writeup i still find the main message interesting but not critical for any real application hence my evaluation remains as weak accept thank you the paper provides an impossibility type of result for convergence of reasonable dynamics in general multiloss settings specifically the paper constructs a game between two agents whose only simultaneous critical point is a strict maximum thus under the definition of a reasonable algorithm as an algorithm that avoids strict maxima the dynamics cannot converge to equilibria in this case and have to therefore cycle the paper builds on a series of recent papers that show that many reasonable dynamics fail to converge in reasonable multiloss settings eg gd in zerosum games on the positive side the result accounts for a large class of optimization/gradient driven dynamics also the example of the multiagent setting is relatively small two agents/two degrees of freedom on the negative side the example is rather artificial and it does not seem to capture any specific setting of independent interest moreover there is little intuition in the current writeup behind the construction of the example the appendix just shows that this example satisfies the target properties the main message of the paper is that multiloss settings are very different from single loss settings and we cannot quickly and easily apply tools from optimization and hope to succeed at least not in all cases the predominance of multiloss architectures in ai settings makes this a reasonable message for iclr on the other hand it is clear that we are not moving away from these architectures any time soon so such a message is less effective than explanations/tools of how to understand multiloss environments or make them work better

summary this paper deals with the impossibility of global convergence to stationary points in multiloss optimization the authors introduce some problems for which any reasonable method has an undesired behavior the notion of reasonable method is quite general and makes the result of this paper interesting pros the framework of reasonable methods is
quite general the paper answers an important question of the game optimization community the paper is easy to read and well written cons there are some imprecisions see my section about questions questions the questions are asked by decreasing order of importance proof of theorem 2 in the proof of theorem 2 page 19 you claim that the function $h(\theta) = \inf_k f_0^k(\theta)$ is continuous i do not know if it is true or false it is likely to be true but this claim is not obvious since it is an infimum of continuous functions also the statement in your theorems only considers the reasonable algorithms with the hyperparameters ensuring that r2 is valid an easy way to get rid of this additional assumption may be to remove the mention of hyperparameter in the sense that an algorithm with hyperparameters that make it locally converge to local maxima is not a reasonable algorithm definition of $F(\theta)$ at the end of page 3 you have $F$ that can depend on all the previous iterates $F(\theta_k, \ldots, \theta_0)$ but then in your definition r1 $F$ only depends on a single iterate $\theta$ how can you reconcile that a method may depend on all the previous iterates and assumption r1 one simple way is maybe to consider a timedependent operator $F_k$ such that $F_k(\theta_0) = \theta_{k+1}$ about proposition 2 it seems to me that most of the results of this proposition have been already covered by daskalakis and panageas 2018 adolphs et al 2018 mazumdar et al 2020 and berard et al 2020 what is your relative contribution also note that hd and s are not defined global convergence in balduzzi et al 2018 i am not sure what global convergence result you are mentioning also in your related work section you should differentiate hsieh et al 2020 from the other related work while hsieh et al 2020 is trying to tackle the general nonconvexnonconcave setting which is very related to your work the other related work mentioned in the first paragraph of page 2 considers restrictive assumptions that allow them to get a lyapunov function related to convex optimization while your work implies that such a function does not exist in general minor comment the sentence lipschitz continuity of $\nabla f$ is not assumed which would fail even for cubic polynomials is a bit confusing since you mention the lipschitz constant of $\nabla f$ in proposition 1 i suggest emphasizing that the global outside of $U_0$ lipschitz continuity is not assumed in the contribution section $\mathcal{A}$ is not defined i suggest to point to the appendix or specify that $\mathcal{A}$ is the restricted class of functions you mentioned at the beginning of your sentence note that it would be interesting to discuss the lyapunov function used to show the global convergence in the nonconvex minimization setting the suboptimality $f(\theta) - f^\star$ the current illustration you have page 8 also works in the multiobjective setting by considering that h has a positive definite symmetric part thus it is not very insightful of the main difference between single and multiobjective minimization

this paper shows that the class of twoplayer markets has no satisfactory outcome in the usual sense players should neither escape to infinite losses nor converge to strict maxima or noncritical points some concrete examples are analyzed with negative results this paper is a reminder to researchers that we should carefully model the objective functions of multiple interacting intelligent agents and the interactions between them weakness it is better to describe $\alpha$ and $\gamma$ in sec 33
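the kind of non-convergence discussed in these reviews is easy to see numerically on the classic bilinear game rather than the paper's market-game construction — a hedged substitute example: player one minimizes f(x, y) = x*y over x, player two minimizes -x*y over y, and simultaneous gradient descent spirals away from the unique critical point at the origin instead of converging:

```python
import numpy as np

x, y, eta = 1.0, 1.0, 0.1
for t in range(201):
    gx, gy = y, -x                      # each player's gradient of its own loss
    x, y = x - eta * gx, y - eta * gy   # simultaneous update
    if t % 50 == 0:
        # the distance to the origin grows by a factor (1 + eta**2)**0.5 per step
        print(t, np.hypot(x, y))
```

the paper's result is of course stronger than this toy run suggests since it rules out convergence for a whole class of reasonable algorithms rather than for plain simultaneous gradient descent only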
### Summary:
|
This paper presents a series of negative results regarding the convergence of deterministic "reasonable" algorithms in min-max games. The defining characteristics of such algorithms are that (a) the algorithm's fixed points are critical points of the game, and (b) they avoid strict maxima from almost any initialization. The authors then construct a range of simple 2-dimensional market games in which every reasonable algorithm fails to converge from almost any initialization. The paper received three positive recommendations and one negative, with all reviewers indicating high confidence. After my own reading of the paper, I concur with the majority view that the paper's message is an interesting one for the community and will likely attract interest at ICLR. In more detail, I view the authors' result as a cautionary tale, not unlike the NeurIPS 2019 spotlight paper of Vlatakis-Gkaragkounis et al. and a concurrent arXiv preprint by Hsieh et al. (2020). In contrast to the type of cycling/recurrence phenomena that are well documented in bilinear games, and which can be resolved through the use of extra-gradient methods, the non-convergence phenomena described by the authors appear to be considerably more resilient, as they apply to all reasonable algorithms. Determining whether GANs or other practical applications of min-max optimization can exhibit such phenomena is an important open question, and one which needs to be informed by a deeper understanding of the theory. I find this paper successful in this regard, and I am happy to recommend acceptance.
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
This paper proposes a single-image-based manipulation method, DeepSIM, using a conditional generative model. The authors address this problem by learning the mapping between a set of primitive representations, consisting of edges and segmentation masks, and an image. They also adopt a thin-plate-spline (TPS) transformation as augmentation, which enables the model to robustly manipulate an image by editing primitives.

Pros: the paper is clearly written and easy to follow. The authors propose a novel conditional manipulation method based on a single image, which is new in this area. DeepSIM is capable of generating plausible results by manipulating image contents in both a low- and a high-level manner while maintaining realism and fidelity.

Cons: the authors need to clarify why the VGGNet-based perceptual loss encourages the model to maintain fidelity, as Johnson et al. [1] argued that this loss captures humans' perceptual difference between images. It would be helpful to further explain how the model with this loss between G(x) and y better reflects the primitive representations in the generated output than a model without it; additional explanation and experiments, such as an ablation study, would make the paper more convincing. Compared to the segmentation, the edge map seems to contribute less to the image manipulation: most of the results are mainly attributed to segmentation changes, with only slight modification caused by changing the edge primitive, as shown in Appendix A (removing the teeth of the top llama). Additional qualitative results for drastic low-level manipulation driven by the edge primitive would be necessary. I think the technical novelty of the TPS-based augmentation is not significant, in that the TPS transformation has been widely used in existing literature [2, 3] for learning correspondences between two images. As the authors mention in the conclusion, training a network for single-image manipulation would be a critical bottleneck in practical use; in this respect, the training time on a single image should be reported. Additionally, a detailed description of how to obtain the primitives (edge and segmentation) for the input image is required.

[1] Johnson et al., "Perceptual Losses for Real-Time Style Transfer and Super-Resolution," ECCV 2016.
[2] Han et al., "VITON: An Image-Based Virtual Try-On Network," CVPR 2018.
[3] Lee et al., "Reference-Based Sketch Image Colorization Using Augmented-Self Reference and Dense Semantic Correspondence," CVPR 2020.

After rebuttal: thank you for the dedicated consideration of my comments, but a few remaining concerns are not clear. (1) The editing effects of edge maps are not distinct from those of segmentation maps. More specifically, except for the background in Fig. 3 and the second face in Appendix D, most of the examples demonstrate the same kinds of manipulation shown in the segmentation-map samples, including moving, stretching, and erasing objects. I think the qualitative results of edge modification are not sufficient to prove its effectiveness compared to segmentation maps. (2) As shown in the image-segmentation video the authors provide, a user needs to segment every single object selected for manipulation; moreover, segmenting small and fine objects requires further elaborate and laborious annotation from the user, resulting in a critical bottleneck for practical use. Due to these concerns, I keep my previous rating of 6 (marginally above acceptance threshold).

This paper provides an augmentation method to enable single-image training. The network learns to map between a primitive representation of the image (e.g., edges and segmentation) and the image itself; during manipulation, the generator allows general image changes by modifying the primitive input representation and mapping it through the network. On the positive side, the paper proposes an interesting mechanism to train conditional generators from a single image; the proposed "super primitive" works well for single-image manipulation tasks, and some good image-editing results are shown in the experiments. On the negative side: the method requires professional editing ability for editing the edges of a super primitive; the generation of primitives also depends highly on the accuracy of the semantic segmentation, and if the segmentation is done manually the editing process may be time-consuming. The technical contribution is limited; it seems that the kernel of the framework is a direct use of a cGAN without introducing many new ideas. The training speed is not reported, and the reviewer thinks that a fast training process is important for single-image manipulation. The qualitative evaluation is not very convincing; it would be better to conduct a user study.

Paper summary: this work proposes a method to design conditional generative models based on a single image. In particular, while some recent models have enabled one to sample unconditionally from a generative model learned from a single image (like SinGAN), this work explores a way of conditioning the generation on a primitive, which can be user-specified; as a result, one can produce realistic modifications to a given image by modifying or sketching some primitive. Review summary: I like the simplicity and the ingenuity of the approach. This reviewer is not aware of any method that can produce similar results, and as such it represents the state of the art for deep-learning-based single-image manipulation. At the same time, clarity of both writing and technical aspects could be significantly improved. Details:

Strengths: a novel formulation to train single-image generative models; a flexible framework; compelling experimental results.

Weaknesses: while it is true that this model trains a generative model from a single image, it does so by deploying traditional large-scale learning on many modified versions of a single input image; this is in contrast to other methods (DIP, SinGAN) that train completely on a single image, and this distinction could be made clearer in the text. Many comments and expressions are too vague, making it difficult to fully understand the approach. In GAN models, the discriminator D is typically a deep-network model parametrizing a function from the space of images to the reals, say $\mathbb{R}^n \rightarrow [0,1]$, representing the probability that the input comes from the distribution of real images (as opposed to from a synthetic generator G). In Eq. 2, however, the discriminator seems to receive two inputs, $(x, G(x))$, which makes little sense to me: did the authors perhaps mean to write simply $D(y)$ and $D(G(x))$ in Eq. 2? More broadly, this confusion comes from a lack of clear definitions of the employed functions. Also in Eq. 2: GANs typically have a large collection of training samples, so writing the expectation over the distribution of images $p_{\text{data}}$ makes sense; in this case, however, one has only one sample. It is true that the authors are artificially creating a distribution around the given sample, so perhaps they could make this more precise and clarify what they refer to as $p_{\text{data}}$. It would significantly help the presentation to define the different quantities and spaces used by the authors; for example, consider defining the domain and codomain of the warp $F$ employed to generate the augmentations. It is also not totally clear how these transformations (the $T_{ij}$'s) are sampled; later, in Sec. 3.4, the authors mention that they randomly sample a new TPS warp, but sampled how, and from what distribution? There is no comment on how computationally intensive the method is: how many augmented samples are generated for every image, how long does it take to generate them, and how long does the subsequent training take? Finally, "the long tail of images" does not mean anything to this reviewer. It is clear that the authors intend to refer to images that occur very infrequently in the distribution of real images, but they should make this precise and avoid comments of the sort "primitive from the long tail," which make no sense.
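Since the first review asks why the VGG-based perceptual loss between G(x) and y should preserve fidelity, a generic sketch of such a loss may help ground the discussion. This follows the general recipe of Johnson et al. [1] and is not the authors' exact configuration; the layer choice and the use of an L1 feature distance are assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision

class PerceptualLoss(torch.nn.Module):
    """Distance between VGG-19 feature maps of a generated and a target image.

    Matching deep features rather than raw pixels penalizes perceptually
    salient differences (texture, structure), which is one intuition for why
    this loss helps the output stay faithful to the primitive layout.
    """
    def __init__(self, layers=(3, 8, 17)):  # relu1_2, relu2_2, relu3_4 (assumed choice)
        super().__init__()
        vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg, self.layers = vgg, set(layers)

    def forward(self, generated, target):
        loss, h_g, h_t = 0.0, generated, target
        for i, block in enumerate(self.vgg):
            h_g, h_t = block(h_g), block(h_t)
            if i in self.layers:
                loss = loss + F.l1_loss(h_g, h_t)
        return loss
```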
### Summary:
|
The reviews are a bit mixed. While all the reviewers feel that the paper proposes an interesting mechanism to train conditional generators from a single image and demonstrates good image-editing results in the experiments, there are also common concerns about the practicality of the proposed method for interactive image editing: all the reviewers asked for the computation time, and some expressed concerns about the technical contributions. While these concerns were somewhat addressed in the rebuttal, the AC feels that it is a hard sell to bet on a dramatic increase in computational capacity to bring the computing time from an hour down to real-time; concerns about novelty also remained. Given the drawbacks, the final decision was to not accept. However, this work is promising and can be made stronger for publication in a later venue.
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
This work attempts to extend the sliced Wasserstein distance to a convolution sliced Wasserstein distance by replacing vectors with tensors and Radon transforms with convolution operators, respectively. The work compares the proposed methods with different convolutions (the usual convolution, convolution with stride, convolution with dilation) against the original sliced Wasserstein on image-generation tasks over various datasets (CIFAR-10, CelebA, STL-10, CelebA-HQ), claiming favorable results. Its theorems show that the new convolution sliced Wasserstein distance is a pseudo-metric, that it is always less than or equal to the max-sliced Wasserstein distance, and that its expectation is upper bounded by a specific value. Even though the manuscript claims that "we derive convolution sliced Wasserstein (CSW) and its variants" in the abstract, CSW is not derived but defined in this work (see Definition 5). While this work seems to assume that distances between images are compared with the Wasserstein distance, there are a number of cases where vectorized feature tensors of two images are compared with the sliced Wasserstein distance (e.g., see [3, 5]), and in that case this work can be seen as a simplified version of those works; in that sense, there is little novelty in the definitions of this manuscript. While the defined CSW is shown to be a pseudo-metric in Theorem 1, this may be a serious weakness for a loss used in optimization: would a different image at distance 0 be good to represent? Unfortunately, there is no discussion of this issue. A nonlinear extension of CSW is mentioned in lines 328-332; it does not seem meaningful as a metric, considering that the linear version of it is a pseudo-metric. While the experiments show results on image-generation tasks for various datasets, the method is only compared with the conventional SW method, not with recent max-SW or generalized sliced Wasserstein [2, 3]. Moreover, many different convolutions are defined in the main text and compared in the experiments, and the results seem to suggest no clear winner among them; thus the experiments in their current form seem weak. Comparing with recent Wasserstein variants and investigating proper or optimal convolution operators could improve this manuscript. Even though the authors indicated that they address the limitations and potential negative societal impact of their work, I was not able to find them.

This paper proposes the convolutional sliced Wasserstein distance and compares it with the conventional sliced Wasserstein distance. In addition, the authors introduce a convolution base slicer, a convolution stride slicer, and a variant with dilation. The paper provides details on how the convolutional sliced Wasserstein is calculated and analyzes several properties. The experimental results on multiple datasets demonstrate that, for generative tasks, the proposed CSW achieves better performance than SW while using similar computational resources. The paper is well written and provides a lot of detail about SW, which is very helpful for readers like me with little SW background. The idea is well motivated, and the math seems convincing to me. I must apologize that I do not have enough expertise to judge the full merit of this paper; below are some of my questions and concerns. (1) In L201, the authors note that convolution is a linear operation, so consecutive convolution operations can be replaced with a single convolution (except for the nonlinear convolution slicer). What is the purpose of using multiple linear convolution operations? Moreover, convolution is often implemented as matrix multiplication, which also flattens the image and converts the kernel into a Toeplitz matrix, so the proposed convolutional SW is similar to SW with a special $R$ in Eq. 1. (2) I wonder how the kernel is determined. From L228-230, it seems the kernel is uniformly sampled from the set $K_l$: do we sample once for all the images (that is, use the same set of kernels for all images), or repetitively sample kernels for each image? (3) Some training details are missing: what is the loss function? Based on the main script, it seems the network does not need a discriminator, as CSW directly computes the distance between the generated and real images and the network is trained to minimize this distance; however, in the supplementary, discriminators are still used (L732-739). If discriminators are used, I am a little confused, as SW-GAN [13] used discriminators to map images into 1D vectors and compute SW, which is similar to the nonlinear CSW introduced in this submission, except that the kernels in [13] are learnable while the kernels in this submission are uniformly sampled. (4) The authors might have missed one reference about introducing convolution into Wasserstein distances to reduce computational cost: Justin Solomon, Fernando de Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas, "Convolutional Wasserstein Distances: Efficient Optimal Transportation on Geometric Domains," Proc. SIGGRAPH 2015. Minor: in L126, $\mathbb{R}^{c\times \times d}$ should be $\mathbb{R}^{c\times d\times d}$. The authors did not provide discussion or analysis of the limitations and potential negative societal impact; if I did not miss anything, I think this is overall fine, as the paper focuses more on the fundamental SW distance computation. See the weaknesses above for potential technical limitations, for example the choice of kernels.

This paper presents a new methodology for comparing two probability measures over images. The key idea of the paper is to apply convolution operators to probability measures over images. The proposed method, named convolution sliced Wasserstein (CSW), makes use of the spatial structure of images and needs less slicing memory. The authors provide the metricity of CSW as well as its sample complexity, its computational complexity, and its connection to the sliced Wasserstein distance. Beyond this theoretical contribution, the authors discuss numerical considerations and perform a thorough real-data analysis showing the favorable performance of their method. The idea of applying convolution operators to probability measures over images is very natural; the proposed method is novel and simple to describe, and the authors demonstrate its efficiency numerically.

While the sliced Wasserstein distance is widely used to compare distributions of images, which can be seen as tensors, it requires a vectorization step using a reshape operation, which is not suited to images: it loses spatial structure and is memory-inefficient. In this work, the authors propose to define a convolutional sliced Wasserstein distance on the set of probabilities over tensors, using convolution operations that are better suited to images. Some properties are derived, such as the sample complexity and the pseudo-distance property, and many experiments are performed on classical image datasets. While the convolution idea itself is not novel, since it has been widely used in neural networks for some time, the idea of using it in the context of optimal transport is new and very interesting; moreover, the results seem quite convincing. Strengths: well written and very clear; uses convolution to define a new SW distance; applications on different datasets give good results; theoretical results. Weaknesses: only a pseudo-distance. Yes.
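For orientation, a minimal numpy sketch of the vanilla sliced Wasserstein distance that all four reviews take as the baseline is given below; it is the textbook Monte-Carlo estimator, not the paper's CSW. The paper's contribution, as the reviews describe it, is to replace the flattened-vector projections `X @ theta` below with projections computed by (compositions of) random convolution kernels, which is not reproduced here.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=128, p=2, seed=0):
    """Monte-Carlo sliced p-Wasserstein distance between two equal-size samples.

    X, Y: (n, d) arrays of flattened images (the vectorization step the paper
    argues destroys spatial structure). Each random unit direction theta maps
    both samples to 1D, where W_p has a closed form via sorting.
    """
    rng = np.random.default_rng(seed)
    thetas = rng.normal(size=(n_proj, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)  # uniform on the sphere
    total = 0.0
    for theta in thetas:
        x1d, y1d = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.mean(np.abs(x1d - y1d) ** p)
    return (total / n_proj) ** (1.0 / p)

# Example: two Gaussian point clouds in R^64 standing in for flattened images
rng = np.random.default_rng(1)
print(sliced_wasserstein(rng.normal(size=(500, 64)),
                         rng.normal(loc=0.5, size=(500, 64))))
```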
### Summary:
|
The paper presents a new slicing method for the Wasserstein distance between probability measures over images, based on convolution operators. This way, memory requirements can be reduced and locality can be better preserved. Experiments are conducted on generative-modeling problems. Reviewers noted that the idea of convolution operators on probability measures over images is natural and simple yet novel, and acknowledged the theoretical and practical results; the rebuttals were in-depth and provided additional clarifications. On the other hand, reviewers note that CSW only defines a pseudo-metric. Overall, this paper is an interesting contribution to the NeurIPS community and should be accepted.
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
In this paper, the authors introduce an overfitting phenomenon that has not been addressed in previous work on policy optimization and importance sampling. They then propose a new constraint on the policy and a new algorithm that avoids the overfitting. They provide theoretical justification for why the proposed method works and show experimental results demonstrating the effectiveness of their approach. One major issue with the proposed approach is the expressivity of the policy class: although the authors show in Theorem 2 that, asymptotically, the expressivity stays the same, one question is what happens in the non-asymptotic setting and how much expressivity is lost with the introduced constraint. In other words, the reviewer believes a missing part of this paper is showing how much the local constraints on actions (Eq. 5) hurt performance in the non-asymptotic setting, and how this loss changes as the sample size grows; showing empirical results, even on synthetic datasets, would be helpful. One issue with Table 1 is that the authors say their approach does not make any Markovian assumption, and hence its performance is worse than that of the competitors in the MDP setting while it outperforms other methods in the non-MDP setting. Is this comparison fair? In other words, do other methods make a Markovian assumption while we simultaneously expect them to perform well in a non-MDP setting? The writing of the paper needs a major revision; examples include (but are not limited to): page 1, Introduction: "different than"; page 2, line 2: "learning an policy"; page 2, Section 2, line 6: "over next context"; page 2, Section 2: "also sometimes refered"; page 3, Section 3: "but a similar results"; page 3, Section 3: "for unnormalized estimator"; page 5, Section 4: "function class is closed respect"; page 7, Section 6: "euclidean distance in over"; page 7, Section 7: "details are provided are in the appendix"; page 7, Section 7: "the reward consist of"; page 9, Section 8: "in a policy that underweigh certain lower performing initial contexts". This paper introduces a legitimate overfitting issue and proposes a solution for it; however, the reviewer is not convinced of the effectiveness of the proposed approach, due to the concerns raised above plus the fact that the paper needs a major writing revision.

When optimizing a policy from batch data under importance-weighting adjustment of the batch data, this paper observes that methods can learn to ignore states with highly negative rewards by assigning them low support relative to the behavior policy. The paper proposes to solve this problem by constraining the policy to actions that have been observed in nearby states in the batch data. Although various similar ideas have been proposed, such as regularizing toward the behavior policy, this paper improves on importance-sampling methods that do not assume an MDP, which separates it from recent baselines like PQL. The authors also show some analysis of this problem, and they show that, compared to recent baselines, they improve in a tumor-growth simulation environment when the MDP assumption is violated and on a real-world sepsis dataset. I find the proposed method quite intuitive, and the baselines compared seem appropriate. Example 1 is good for understanding, with more motivating examples in the appendix; I also appreciate the clinically inspired evaluations. My concerns are as follows. (1) Regarding the method: (a) this phenomenon seems to apply only when highly negative rewards are present; one alternative is to shift and scale the rewards to be positive and cap the importance weights to avoid over-maximizing the reward. I am curious whether the authors can compare with this method, or at least illustrate why this way of shifting rewards will not work. (b) The distance considered in the paper uses the Euclidean distance in the raw input space, which may not work when a lot of missingness is present in the data, or when some input features are not relevant to the reward; I think learning a meaningful, reward-dependent latent space, as in [1], could further improve this method. (c) The novelty of this work is not high, as similar ideas have been proposed, though not particularly in the non-MDP setting, as acknowledged by the authors. (2) Regarding the experimental results: (a) although it is intuitive that this overfitting phenomenon could happen in theory, only the qualitative figures in Fig. 1b support this claim. I believe it is crucial to support the statement experimentally; the authors could quantify it in both the simulations and MIMIC-III by showing that the importance weights are close to 0 for the baselines in low-reward states. (b) Since WIS is used as the validation metric, it might favor methods also optimized under IS-based objectives and disadvantage non-IS-based methods like PQL (though I understand a metric has to be chosen). (c) In Fig. 1a, why is there no blue point around x in [200, 400]? I want to see whether blue is actually better in those regions. (3) Regarding presentation (this has little effect on my evaluation): (a) bolding in Tables 1 and 2 would help readers quickly see which method is better; (b) in Sec. 3, please list the source of Equation 2. [1] Zhang, Amy, et al., "Learning Invariant Representations for Reinforcement Learning Without Reconstruction," International Conference on Learning Representations, 2020. Pros: the method is easy and simple and shows improvement in both experiments, which include non-MDP simulations and a real-world dataset; the writing is mostly clear; the improvement shown in a real-world clinical environment is encouraging. Cons: no quantitative analysis of whether the overfitting actually happens; simple heuristics, such as shifting rewards to be positive and avoiding over-maximizing the reward, are not compared; the novelty seems limited since similar ideas have been proposed; the distance considered may not handle missingness or reward-irrelevant input features. Although I feel the proposed overfitting phenomenon can be an interesting contribution, the authors should quantify whether it happens in the experiments and justify their method by comparing with heuristics like shifting the reward; I think addressing these points could push the paper over the acceptance bar.

The paper considers offline policy optimization. The authors discuss the issue of overfitting of the importance weights in existing offline algorithms and develop an algorithm to alleviate it; the results are supported by theory and by real datasets from healthcare applications. Major comments: In Section 3, the authors decompose the importance-sampling estimator into three parts and argue that the second term can be problematic. I suggest the authors design a toy simulation example to evaluate the order of magnitude of these three error terms, the first being $E_{\hat{p}}[V^{\pi}(x)] - E_{p}[V^{\pi}(x)]$, where $p$ corresponds to the oracle context distribution. In Equations 3 and 4, suppose we plug in the empirical behavior distribution for $\mu$; then $\hat{p}(x)$ equals $w(x)$, which would solve the overfitting problem in practice. According to semiparametric theory, even if the true importance-sampling ratios are known to us, the resulting estimator with an estimated ratio would be more efficient (see, e.g., Tsiatis, 2006, "Semiparametric Theory and Missing Data"). Would you please clarify why you did not use such a simple method with an estimated importance-sampling ratio? Would you please elaborate on the constraint in Equation 6: why would such a constraint help solve the offline learning problem? The stated condition on $\pi$ is not sufficient to guarantee the validity of Theorem 2: as shown in the proof, $\pi(\delta_n)$ must decay to zero. The authors need to provide an upper bound for $\delta_n$ in the proof, as a function of $m$ and $n$, and present the corresponding condition for $\pi$. In addition, naively applying Bonferroni's inequality gives a loose bound; concentration inequalities would be preferred, to relax the resulting condition for $\pi$. The proposed algorithm relies on several tuning parameters, such as $\delta$, $m$, $\lambda$, and $\alpha$. Is the algorithm sensitive to the choice of these tuning parameters? How would you recommend practitioners select these hyperparameters: could cross-validation possibly be employed? In addition, what policy class is considered in the numerical experiments? Just curious: how does the proposed method compare with existing baselines in standard OpenAI Gym environments, and have you conducted related experiments? Since the numerical experiments use policy-evaluation algorithms to compare different policies, I wonder whether the estimated values are sensitive to the choice of the policy-evaluation algorithm or the tuning parameters used in that algorithm; would you please discuss? Minor comments: page 3, line 12: should $\hat{p}x$ be $\hat{p}(x)$? Page 4, lines 17 and 18: missing parentheses in $E[\cdot]$. Page 6, Theorem 2: $m$ as a function of $n$. Page 13, line 14: "to be be". My detailed comments are given in the main review. I have some concerns about the motivation of the method (2), the theoretical results (4), the choice of the tuning parameters (5), and the sensitivity of the estimated values in the empirical studies to the choice of the policy-evaluation algorithms (6), so I give a score of 5; however, I would be willing to increase my score should my comments be addressed.

The paper studies the overfitting issue in counterfactual policy learning. It first identifies an important overfitting phenomenon in optimizing the importance-weighted return and proposes an algorithm to avoid the issue. A notable limitation of some previous approaches is that they apply penalization at the policy level and do not directly address the problem of avoiding contexts with low reward; the proposed POELA method addresses deficient support in the observed finite data rather than in the expected behaviors. Experiments show the benefit of the proposed approach. Strengths: the paper tackles the important and relevant issue of overfitting in counterfactual risk minimization; the motivation is nicely supported with toy examples; the paper identifies a source of overfitting not addressed in previous work; it proposes a novel algorithm to address the overfitting issue, with some guarantees; related work is covered rigorously; code is provided as supplementary material (I encourage the authors to release it upon publication to ensure reproducibility). Weaknesses: I skimmed the proofs of Theorem 1 and Corollary 1; they look correct, but the bounds seem very loose and might be meaningless. It would be more interesting to see how the empirical sums of weights differ among methods using synthetic data. All experiments are based on real-world data, meaning that the policy performances are measured with OPE on a test set, which might not be accurate; synthetic experiments as a complement would strengthen the empirical contribution. Do we still observe the overfitting issue when we use k-fold cross-validation? The paper identifies an unexplored aspect of the overfitting issue in off-policy learning, and the motivation was easy to follow. I would recommend weak accept at this moment; some additional effort in the experiments, as described above, would strengthen the contribution further.
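Because these reviews reason about the behavior of the importance-sampling value estimator and its self-normalized (WIS) variant, a minimal sketch of both may help fix ideas. This is the textbook contextual-bandit estimator, not the paper's POELA objective; the toy numbers are invented purely to illustrate the failure mode the reviews describe.

```python
import numpy as np

def is_and_wis_value(pi_probs, mu_probs, rewards):
    """Ordinary and self-normalized (WIS) importance-sampling estimates of E_pi[R].

    pi_probs: pi(a_i | x_i) under the policy being evaluated
    mu_probs: mu(a_i | x_i) under the logging/behavior policy
    The overfitting highlighted in the paper arises when a learned pi drives
    w_i -> 0 exactly on low-reward contexts, deleting them from the average.
    """
    w = pi_probs / mu_probs
    is_est = np.mean(w * rewards)
    wis_est = np.sum(w * rewards) / np.sum(w)  # self-normalized
    return is_est, wis_est

# Toy example: a policy that zeroes out the weight of the one bad context
rewards = np.array([-10.0, 1.0, 1.0, 1.0])
mu_probs = np.full(4, 0.5)
pi_probs = np.array([1e-6, 0.5, 0.5, 0.5])   # "ignores" the bad context
print(is_and_wis_value(pi_probs, mu_probs, rewards))  # looks deceptively good
```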
### Summary:
|
In this paper, the authors propose an offline policy optimization algorithm motivated by an analysis of the upper-bound error of the importance-sampling policy-value estimator. Specifically, by decomposing the error in a particular way, the authors identify an error term that does not converge; the authors then introduce constraints over feasible actions to avoid the overfitting induced by such errors. Finally, the authors test the proposed algorithm empirically. The paper is well motivated, and the authors addressed some of the questions in their rebuttals; however, several issues still need to be addressed. The alternative practical estimator with a plugged-in behavior distribution would perfectly avoid the overfitting, yet it is ignored; this is an important and easy-to-implement competitor. The pessimism principle in the face of uncertainty (PFU) has been exploited extensively in offline policy optimization; how the proposed algorithm connects to PFU, especially in terms of non-asymptotic sample complexity, has not been discussed carefully, which leaves the paper not well positioned. While the motivation is derived from the unbiased importance-sampling estimator, the counterfactual risk minimization in Equation 7 is introduced suddenly, without clear justification. In my opinion, for better clarity, the expressiveness of the policy family should not be discussed in this way: I understand the authors would like to rule out any possible degeneration and explain the asymptotic losslessness in terms of policy flexibility, but the whole point of the paper is to introduce a mechanism that avoids possible overfitting by regularizing the policy family; in other words, the restriction is on purpose and beneficial. The argument about policy-family expressiveness should be reconsidered and re-discussed. Minor: the Markovian vs. non-Markovian baseline comparison is not fair, and more comparisons on well-known benchmarks (e.g., OpenAI Gym) should be conducted; the $\sigma$ upper bound should be explicitly provided and verified in practice. In sum, the paper is well motivated but needs further improvement to be published.
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
The paper proposes MUSIC, an RL algorithm for learning controllers in an unsupervised way. The key idea of the proposed algorithm is to separate the state of the robot (e.g., joint angles) from the state of the environment (e.g., the location of an external object) and to optimize the mutual information between the two sets of states. By maximizing this mutual information, the resulting policy learns to better control the environment and can be used to train downstream tasks. The paper experiments with different ways of training the downstream tasks and demonstrates favorable results compared to prior methods. I think the paper introduces an interesting idea for training unsupervised skills for manipulation tasks, and the results seem very promising; however, I have the following concerns that I hope the authors can address. (1) The exposition can be improved: for example, Eq. 4 is referred to and discussed multiple times in the text before it is defined. In the first sentence of 3.2, what does "adjacent state" mean; does it mean $s^s$ and $s^a$? What is the difference between $\tau$ and $\tau'$ in Eq. 4? In general, although the proposed algorithm seems reasonable, the derivation is a bit confusing. (2) The agent-surrounding separation seems to apply to the full state space, but I am not sure how to apply it to observation spaces that contain only partial information about the full state. For example, if the robot is equipped with a camera and is tasked with training policies from vision input, how does the proposed algorithm handle such cases? (3) I feel it would be more interesting if the agent-surrounding separation idea could be generalized to separating the observation space into two subsets, where one is optimized to control the other. For example, for a legged robot trained to move forward, the current framework would not apply because there is no environment state; but if we separate the observation into the space of joint angles and velocities and the space of base positions and orientations, could the current framework obtain a policy that can take the robot to a large variety of positions and orientations? If so, a related example would greatly strengthen the paper, in my opinion.

Summary: this paper introduces MUSIC, a reinforcement-learning approach that separates the agent state from its surrounding state and trains the agent to maximize the mutual information between the two; this implies the agent has control over the surrounding state. The approach is evaluated in four environments and compared to multiple baselines. The paper is well motivated and the approach is interesting. The paper is mostly well written, though I found parts somewhat confusing. Code and hyperparameters are provided, so the approach seems reproducible. The experiments are strong: the approach is evaluated in multiple environments with extensive comparisons to relevant baselines. MUSIC is shown to achieve very good performance on simulated robotic tasks and improves performance when combined with other intrinsic-reward and RL methods. I think this is an interesting direction, and it does make sense to separate the agent's state from the environment state for these tasks; for these reasons, I do think the paper should be accepted. However, I found the description of the methodology in Section 3.2 very confusing: the equations are referred to before they are introduced, which was unexpected, so this section would be greatly improved by some rearranging. I also did not understand what exactly $T$ is: what does this function output, and how is it trained? Comments: some other related works are [1], which uses an intrinsic reward to maximize the controllable entities in the environment, and [2], which learns an intrinsic reward that maximizes controllable features. Questions: Question 3 in the paper does not refer to any figure; does this correspond to Figure 5? Where are the MUSIC+DIAYN results? Is the reward in Question 8 the negative L2 norm? How does MUSIC alone perform in Table 1? This should be included as well. [1] Song et al., "Mega-Reward: Achieving Human-Level Play Without Extrinsic Rewards." [2] Dilokthanakul et al., "Feature Control as Intrinsic Motivation for Hierarchical Reinforcement Learning."

Summary: this paper proposes the use of state control as intrinsic motivation. It does so by separating the overall state into an agent state and a surrounding state; the idea is then to maximize the mutual information between the agent's internal state and the environment state. This mutual information is given as a reward that the agent attempts to maximize using policy-gradient algorithms. Experiments attempt to validate the usefulness of such an intrinsic reward signal for pre-training an agent or for augmenting a given task reward. Positives: the proposed idea of tying an agent's internal state to the surrounding environment is an interesting new metric; a practical use case for embodiment is extremely interesting, and this paper champions such a use case fairly well; adapting MINE to compute the MUSIC reward is a useful contribution; the experiments sufficiently showcase the viability of the idea in various manipulation tasks; the video showcasing the behaviors learned using MUSIC rewards in different environments, as well as the different skills learned when combined with DIAYN, gives a clear understanding of the effect of this intrinsic reward; the comparison with various exploration bonuses, skill-learning priors, and prioritization schemes gives a clear idea of how MUSIC compares with other similar techniques. Drawbacks: while I find the idea and the experiments quite interesting, some improvement is needed in presenting the idea and comparing it with the idea most related to MUSIC, which is empowerment. My main gripe with the paper is that the background on mutual information, estimating mutual information using MINE, and the adaptation in this paper for computing mutual information over an agent's trajectory is neither sufficient nor clear. While an interested reader can go to the referenced work and glean more information, the idea itself should be sufficiently understandable by reading this paper; I would argue this is not true of the current draft. While there is an experiment comparing with empowerment, more explanation in the related work is necessary for a reader unfamiliar with that body of work. While the paper deals with the idea that an agent uses its knowledge of itself and the environment to affect the environment as much as possible, it does not sufficiently discuss how such an agent-environment divide might or might not be possible in various scenarios where RL would be a useful solution mechanism; for example, would it be possible to use MUSIC in Atari or for optimizing a recommendation system? From my understanding it would not be straightforward, but this is a question that seems to deserve more discussion in the paper. Questions: it is unclear why computing the mutual information between the agent and surrounding state needs information from the trajectory; is it just a practical issue, as referenced in Section 3.2? Related to the above, and quite possibly answered by it: why are all the states in a trajectory used for computing the MI, and is correlation in the samples not an issue? Why not? In the experiments with the Fetch robot, the off-policy algorithm used by HER is labeled SAC (Figure 4); however, the experimental details in the appendix as well as the code provided with the supplementary material seem specific to DDPG. Is there a reason for this discrepancy? Where can we expect MUSIC to not provide informative rewards; will it work in scenarios where agent actions change the surrounding state in a temporally offset manner? What happens when the surrounding state changes due to some change in agent state, but delayed? For example, in the FetchSlide task the agent must slide the block across the table, but after the initial push, any movement the robot makes does not affect the block's trajectory. How does this dynamic affect MUSIC rewards and subsequent performance on the task? (I understand if this particular question is out of the scope of the paper, but I am curious about the authors' views.) Conclusion: MUSIC rewards are an interesting idea and would be a good addition to the intrinsic-motivation literature; a more well-rounded explanation of the techniques used in this paper and of the capabilities and use cases for MUSIC would elevate the paper.

This work introduces MUSIC, a framework for intrinsically motivated RL where the intrinsic reward comes from maximizing the mutual information between the agent's state and the surrounding environment's state. The authors motivate and describe this approach, explain its incorporation into various training modes, exhaustively characterize its properties, and compare to numerous related past approaches. Although it is somewhat specific to particular environment domains, MUSIC offers a compelling addition to the family of intrinsically motivated RL algorithms based on concepts of mutual information. Clarity: this paper is, for the most part, a model of clarity. The proposed algorithm follows from a clear intuition, and the results present a detailed and organized characterization and validation. The authors could further improve clarity by adding a bit more exposition around the training of the mutual-information estimator; this is only a minor issue, since the mutual-information framework itself is well established by now, but it would help to describe the implementation details covered in Section 3.2 more concretely, perhaps in another small section of the appendix. Quality: this paper is very high quality. The experiments are thorough and well organized, addressing an impressive number of literally enumerated questions. This also demonstrates an impressive versatility of MUSIC, the proposed technique, while simultaneously enabling comparison to a wide range of past methods and showcasing a variety of potential uses. However, there is a lack of guidance around practical challenges facing this technique and its potential pitfalls: for example, Campos et al. (ICML 2020) describe some failure modes associated with simultaneously learning networks for estimating mutual information and using them to train policies; it would be valuable to know whether MUSIC has similar or otherwise noteworthy failure modes. Originality and significance: as the paper describes, there is a large body of work on intrinsically motivated RL, as well as on intrinsic rewards derived from mutual-information-based objectives. In many cases, the types of behaviors those alternative approaches would hope to encourage are the same as those learned by MUSIC. From what the paper demonstrates, it seems that MUSIC is a more successful iteration of these attempts; that is not meant to diminish its significance, as it is a very hard problem. Ostensibly, MUSIC incorporates the right inductive bias through its decomposition of the state. I expect the field will find both the technique and the general insights brought about through this work to be useful. My only concern regarding significance is whether MUSIC is applicable only to a relatively narrow set of domains: even if we assume that the state can be cleanly decomposed into agent and surrounding constituents, how might MUSIC handle things like partial observability or other types of uncertainty that may affect estimation of the mutual information? The paper may benefit from a brief discussion of generality and, if appropriate, how future work may address any issues therein. Pros: a simple extension of an existing framework leading to clear and versatile improvements within a challenging problem; the paper is exceptionally clear and well organized; impressively thorough experimental characterization and validation; code is provided to help improve reproducibility and external adoption. Cons: some gaps in practical guidance and discussion of potential pitfalls; the generality of the approach with respect to environment/task setting is a bit unclear.
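Since two of these reviews ask what the statistics network T is and how the MINE-based mutual-information reward is obtained, a generic sketch of a Donsker-Varadhan (MINE-style) lower bound between an agent state and a surrounding state is given below. This follows the usual MINE recipe in spirit only; the architecture, batching, and use of shuffled pairs for the marginal are assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    """T(s_agent, s_surround) -> scalar, trained to tighten the DV bound
    I(S_a; S_s) >= E_joint[T] - log E_marginal[exp(T)].
    The value of the bound then serves as a MUSIC-style intrinsic reward."""
    def __init__(self, dim_a, dim_s, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_a + dim_s, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def dv_lower_bound(self, s_a, s_s):
        n = s_s.shape[0]
        joint = self.net(torch.cat([s_a, s_s], dim=-1))              # pairs from p(a, s)
        s_s_shuffled = s_s[torch.randperm(n)]                        # break the pairing
        marginal = self.net(torch.cat([s_a, s_s_shuffled], dim=-1))  # approx p(a)p(s)
        log_mean_exp = torch.logsumexp(marginal, dim=0) \
            - torch.log(torch.tensor(float(n)))
        return joint.mean() - log_mean_exp.squeeze()

# Training sketch: ascend dv_lower_bound w.r.t. T's parameters on minibatches of
# (agent state, surrounding state) pairs from trajectories, then hand the bound
# (or per-transition contributions to it) to the policy as an intrinsic reward.
```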
### Summary:
|
The paper introduces MUSIC, a method for unsupervised learning of control policies which partitions state variables into exogenous and endogenous collections and maximizes the mutual information between them. Reviewers were uniformly positive, agreeing that the approach was interesting and well motivated and the experiments convincing. Some concerns were raised as to clarity, which were addressed through several revisions of the manuscript. I am happy to recommend acceptance.
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
The paper focuses on predictive multiplicity: the situation in which the models in the Rashomon set (the set of models with similar performance statistics on the test set) produce different predictions for the same input. The goal is to define a new metric for predictive multiplicity, Rashomon capacity, that considers the soft probabilistic outputs of the classifiers rather than their one-hot class predictions. This helps distinguish Rashomon sets where different predictions result from highly different predicted probabilities from sets where the predictions differ but come from similar soft outputs. The general idea is to find a prior distribution over members of the Rashomon set that maximizes the divergence from the center of gravity of the predictions. The paper then computes and approximates the Rashomon-capacity metric on various real-world datasets using several algorithms. The target problem of the paper is important and well motivated, the paper is well organized, and the writing is smooth. The paper would be easier to understand with a step-by-step algorithm box in the main text rather than referring to the algorithm's name and the provided code. The experimental section covers useful datasets, but no case study is discussed to give a better understanding of how Rashomon capacity can be crucial in real-world scenarios; it would be very useful to have such an example. The paper seems to be an extension of existing work; to prove its contribution, an empirical or theoretical comparison with existing work is needed. In the experimental section, no comparative analysis is performed to empirically showcase the advantage of using Rashomon capacity versus ambiguity and discrepancy. It would also be more comprehensive if the greedy algorithm's performance were compared on the other two datasets (CIFAR-10 and Adult Income). All in all, the paper does a decent job of motivating and explaining Rashomon capacity but does not provide enough empirical evidence of its advantage and utility to satisfy the paper's motivation. The main algorithm for finding the largest Rashomon capacity (AWP) is computationally expensive. After rebuttal: I thank the authors for providing a comprehensive rebuttal; I will take their response to my review and the other reviews into consideration throughout the decision process.

The paper proposes ways to quantify predictive multiplicity, the extent to which equally accurate models disagree on individual predictions, for probabilistic classifiers. Specifically, it introduces a quantity called Rashomon capacity and demonstrates its usage for reporting predictive multiplicity on different datasets and models. Strengths: the problem of predictive multiplicity is important and timely; the paper is written quite clearly; the notion of Rashomon capacity is natural and well motivated. Weaknesses: in terms of significance, I found the main argument for extending existing notions from binary classifiers to probabilistic classifiers not completely clear or satisfying. For example, it is mentioned that two scores can be very close (e.g., one example has score 0.49 for class 1, the other 0.51) and the existing methods would report multiplicity when there isn't really any multiplicity; but doesn't this depend on how the probabilistic classifier ends up being used? E.g., if the probabilistic classifier consists of the probabilities extracted from a logistic-regression model that will then be used to make binary decisions by thresholding the scores at 0.5, then I should treat 0.49 differently from 0.51, no? I think this approach would make sense if binary decisions were obtained by randomizing according to the scores (e.g., classifying someone as 1 with probability 0.49), but I do not think anyone uses ML models this way. In terms of clarity: (1) the paper makes some bold claims which I felt were unjustified, since the claims themselves were never made formal. For example, it is mentioned in several places that the proposed methodology "resolves" predictive multiplicity, but what does "resolve" even mean in this context? (2) In many places, I felt important details were omitted from the main text. For example, it is mentioned somewhat in passing that "Rashomon capacity can be computed by standard procedures such as the Blahut-Arimoto algorithm"; the details of the algorithm are fine to omit, but a description of the assumptions is important. For example, it is not clear to me whether this assumes the Rashomon set itself is already known, or what the computational complexity of the procedure is. Similarly for the following result: Proposition 2 claims the existence of at most $c$ models, but can they also be found efficiently? Finally, in terms of quality, the connection between the definitions and theoretical results and the experimental section is often unclear or lacking. For example, if I understand correctly, the proposed methods (e.g., sampling and AWP) are heuristics that do not compute the Rashomon capacity but only some approximation of it. I could not really tell whether sampling is a baseline and AWP the proposed approach, and whether the claim is that AWP is better because the capacities it finds are larger. I would also have liked to see semi-synthetic experiments where the true Rashomon capacity is known or can be computed efficiently, so that the performance of the heuristics could be evaluated (e.g., for all we know, the Rashomon capacity is large for every input, and we only observe small values on some inputs because of optimization issues). Overall, I think the paper could have high impact, but in its current form I am left with more questions than answers regarding many of the concepts used and the claims made, hence my current score.

This paper proposes a new metric to measure predictive multiplicity, called Rashomon capacity (RC). Unlike previous metrics, RC depends on the probabilistic predictions instead of the discrete predictions. To compute RC, the authors propose to perform AWP, which is more efficient, and random initializations. Strengths: the problem of predictive multiplicity seems important; Proposition 2 makes reporting RC possible in practice; the paper is well written and easy to follow; the introduction lays a good background for this area. Weaknesses: (1) RC seems very expensive: computing RC for one data point requires performing $c$ optimizations. While random initializations can be shared across different data samples to save time, Fig. 3 shows that this clearly and greatly underestimates RC. (2) The motivation seems weak given the cost of computation and the unclear message RC conveys without a specific $\epsilon$. It could probably be used to measure model robustness, but I think that is just Section 3.1. Can the authors briefly explain how RC would be used in a decision-making process? (My question 4 is also related to motivation.) (3) As an evaluation metric, RC via AWP requires running multiple optimizations; unlike metrics such as accuracy, it seems unclear how to compare RC across different $\mathcal{H}$, or even within the same $\mathcal{H}$ with different optimization methods, if the optimization is not finding global optima (see weaknesses above). In general, I think the motivation is the major limitation. Minor issues: (1) typo in L145: "given given".
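Since the second review quotes the paper as computing Rashomon capacity "by standard procedures such as the Blahut-Arimoto algorithm," a generic Blahut-Arimoto sketch may clarify what quantity is at stake: the capacity of the channel whose m-th row is model m's predictive distribution over the c classes for one fixed input. This is the textbook algorithm, not the authors' code; the tolerance, initialization, and strictly-positive-probability assumption are mine.

```python
import numpy as np

def blahut_arimoto(P, tol=1e-10, max_iter=1000):
    """Channel capacity max_q I(M; Y) for a row-stochastic P of shape (m, c).

    For Rashomon capacity, row i of P holds model i's softmax output on a
    fixed input x (entries assumed strictly positive). Capacity ~0 bits means
    the models agree on their soft predictions; log2(c) is maximal disagreement.
    """
    m, _ = P.shape
    q = np.full(m, 1.0 / m)                            # prior over the m models
    for _ in range(max_iter):
        r = q @ P                                      # "center of gravity" of the rows
        d = np.exp(np.sum(P * np.log(P / r), axis=1))  # exp KL(P_i || r), in nats
        q_next = q * d / np.sum(q * d)
        if np.max(np.abs(q_next - q)) < tol:
            q = q_next
            break
        q = q_next
    r = q @ P
    return np.sum(q * np.sum(P * np.log2(P / r), axis=1))  # capacity in bits

# Two models that flip their confident prediction on the same input:
print(blahut_arimoto(np.array([[0.9, 0.1],
                               [0.1, 0.9]])))  # approx 0.53 bits of multiplicity
```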
### Summary:
|
The paper presents a new metric for predictive multiplicity, which is the tendency of different models from a hypothesis class with similar overall performance to make different predictions on individual samples; predictive multiplicity is relevant to the fairness and interpretability of ML models. The paper also presents analysis and algorithms for computing the metric. The three reviewers generally agreed in their characterization of the paper: the high-level goal was well motivated and timely, and the paper was very well written and technically solid. They raised concerns and questions about motivation and connections to other ideas (e.g., ensembles, calibration), as well as specific suggestions for the experiments and writing. The authors made substantive changes in response to suggestions, especially those of Reviewer hcae, and wrote very detailed responses to questions. Overall, the remaining hesitation from reviewers centers on the significance and usefulness of these ideas in practice, which left the paper as borderline in its ratings to the meta-reviewer, who also looked at the paper. Some skepticism about whether this is the final solution for characterizing the reliability of ML model predictions is certainly warranted; however, the paper appears to be a solid contribution to a nascent area that provides a starting point and is likely to provoke discussion and follow-on work.
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
in this paper the authors consider quadratic network games with payoff functions of the form jixixi alphai xi xi22 sumnolimitsj1n xi xj where alphai0 denotes the marginal benefit of the ith player from playing xi in mathbbr and gij is a matrix of interactions that determines whether players i and j are adversaries gij0 friends gij0 or noninteracting gij0 the authors assume that players follow a perturbed bestresponse model of the form xit1 argmaxnolimitsxiinmathbbr jixixit uit1 where ui is a perturbation that affects only a subset m of manipulable players possibly empty the authors are interested in conditions under which the underlying interaction matrix ggijij1dotsn can be learned from the outcome of the players learning process in this regard they provid the following results 1 if all player actions are observable g is learnable from a single trajectory of play if and only a certain matrix involving x1 x0 is controllable theorems 1 and 2 the condition in theorem 2 involves a nonempty set of manipulable players and is less stringent 1 if only a subset of players are observable g is learnable from a single trajectory of play if and only if a certain markov equation admits a unique solution theorem 3 1 finally if the learner has access to infinitely many trajectories g is learnable if and only if two pairs of matrices are controllable observable two algorithms for identifying g are described in sections 3 and 4 with full partial action observations respectively though it was not clear to me what the exact relation is with respect to theorems 14 finally in section 5 the authors provide a series of numerical validation results over different random network structures erdsrnyi barabasialbertetc i do not believe the paper is a good fit for iclr for the following reasons 1 the authors model is highly stylized and i was unable to see any connection to the type of applications andor theoretical questions that are relevant to iclr for instance the role of the overseeing adversary is never explained nor the player dynamics nor the specific game model even though the paper cites some papers that have appeared in ml venues and seem to be relevant there is no denying that the core of the paper lies in control theory or possibly econometrics in particular the authors do not attempt to provide any intuition or motivation for many of the notions that they employ for instance the notion of controllability so the paper would be inaccessible to the wider iclr community 1 in addition to the above the paper suffers from imprecise writing to the extent that the main theorems are impossible to parse for example theorems 1 and 2 state that g can be uniquely determined by the adversary for sufficiently large t if and only if a certain condition holds but what does uniquely determined mean in this context what algorithm is the adversary following one of the algorithms described in sections 3 and 4 if so the corresponding theorems should be stated as guarantees for the algorithms in question and the paper completely rewritten accordingly if not what is the notion of learnability that the authors refer to and what is the role of t likewise what does access to infinitely many trajectories mean the adversary is assumed to be able to maintain a variable with infinite memory 1 the authors make no attempt to explain whether the conditions provided in theorems 14 are light or stringent and they likewise provide no intuition as to what they mean for the underlying game the notion of controllability is classical in control 
theory but this does not shed any insight on what this actually means for the system at hand and the authors make no effort to explain any of this incidentally this further reinforces my belief that this paper is more appropriate for a control venue like cdc and not iclr i do not think that the above can be fixed with a revision in a few days hence my strong reject score the authors should not interpret this as a critique on the worth of their results but as an indication of (a) the suitability of this paper to the iclr community at large and (b) the level of rewriting that would be required to make the paper accessible to said community in the first place specific remarks i am providing below a list of detailed remarks that could help the authors in an eventual resubmission / revision of the paper irrespective of venue (1) the use of the term fictitious play is erroneous fictitious play means bestresponding to the empirical frequency of the opponents play not to their last action see the original papers by brown and robinson in the 50s the unperturbed process considered by the authors is the best response dynamics considered by eg monderer and shapley 1996 not fictitious play (2) the plural of equilibrium is equilibria not equilibriums (3) the authors never define $g_{ii}$ is it allowed to take any value is it assumed that the sum in (1) is only taken over $j \neq i$ which would give $g_{ii} = -1/2$ (4) the authors state in the beginning of section 3 that the best response dynamics are expected to converge to nash equilibrium and that this is the case if $\rho(G) < 1$ ballester et al 2006 the authors subsequently seem to suggest that this occurs in practice because g is sparse but i do not see what this has to do with the spectral radius of g the matrix with $g_{1n} = g_{n1} = 1$ and $g_{ij} = 0$ otherwise is as sparse as it can get but its spectral radius is 1 (5) what does randomly generate $\delta$ mean in algorithm 1 and if the algorithm is stochastic how does it connect to the presumably deterministic theorems 1-4 (6) line 1 in algorithm 2 is also problematic what does carry out classical subspace identification mean and how is this implemented i do not believe the paper is a good fit for iclr for the following reasons (1) the authors model is highly stylized and there is no connection to the type of applications and/or theoretical questions that are relevant to iclr (2) the paper suffers from imprecise writing to the extent that the main theorems are impossible to parse (3) the authors make no attempt to explain their results and the connections of the required conditions to the underlying game this paper could be a good fit to a control theory conference like cdc but not iclr postrebuttal i have read the answers of the authors unfortunately i am not convinced and i will retain my original score docsepthe paper addresses the problem of an adversary learning the structure ie coupling matrix g among agents in the payoff of a repeated game with quadratic payoffs where the agents are playing under best response dynamics and the adversary can potentially observe the actions of only a subset o of players and also affect the action of another subset m of players the authors give conditions for which g can be recovered depending on the sets o and m when a single trajectory or multiple trajectories of the game are available in addition they give algorithms for the cases where all and part of the actions are observable and g is a sparse matrix strengths learning the structure of games of this type has attracted some attention recently and using fundamental concepts from
control theory observability and controllability of a linear system is an interesting view of this problem the fact that the adversary may have a partial observation of the actions of the agents can be quite relevant in practice and does not seem to have been explored in previous work weaknesses the paper has some technical issues more specifically the title of the paper is misleading as the paper examines the case where all players play the best response fictitious play usually denotes a strategy where each player plays the best response to the average strategy observed from her opponents the setting is very specific as only best response dynamics are treated in principle agents may use other learning algorithms in addition the authors mention that in practice it is expected that the best response dynamics (br) should converge to a nash equilibrium however this is generally not the case even if the ne of a game is unique the authors should provide a reference or prove that in this particular game br converges to a ne the ability of the adversary to affect (perturb) the agents actions seems arbitrary and needs to be motivated in addition the effect of this ability is never really explored in the paper can it for example offset partial observability similarly the fact that g is usually sparse is nowhere motivated theorem 2 seems to hold only for the case where the adversary can also perturb the actions of all agents in addition to having full observability theorem 4 seems just a direct application of van der waarde et al 2021 other comments the authors mention that they consider a setting of dynamic game (related work paragraph section 14) however this is not true the game under consideration is a repeated game where agents follow best response dynamics it is unclear what is the connection of sections 3 and 4 to section 2 the theorems proved in the latter section do not seem to be used in any way in section 42 it is mentioned that the observation of the actions of the players is masked by noise however noisy observations are not mentioned before application of concepts of control theory in learning the structure of a game with linearquadratic payoff may be interesting especially in the case of partially observable actions however the setting of this paper seems specific without being very well motivated there are some technical errors and not well supported arguments and the paper reads somewhat incoherently docsepthe authors study a game theory problem in linear quadratic games (family of payoffs) with the following assumptions (1) players are assumed to use the bestresponse strategy (2) an adversary can observe a subset of the nodes actions (3) an adversary can modify the rewards for a subset of the players (4) one part of the payoff function is governed by a matrix g and models the nonmarginal payoffs ie the part of the payoff that is a function of the actions taken by the other agents the authors study under what circumstances g can be learned one result is that under some mild assumptions when the adversary can observe all players (agents) then g can be learned the authors show for more settings necessary and sufficient conditions for learning g the authors also perform some simulations on random graphs strengths the model is quite interesting and i think the model has a lot of potential the methods used are nontrivial and the results are quite surprising and general weaknesses the paper is not very accessible how can all components of g be uniquely determined that is how can one distinguish
between $G_1$ with $g_{12} = 5$ and all other entries are say 1 or 10000 and $G_2$ which is the same but with $g_{12} = 4.999$ why was the name controllable chosen what do you mean by the indexes are sorted besides the obvious the authors spend no time giving intuition behind the theorems and since the proofs are not part of the paper i cannot vouch for the correctness what does a trajectory mean in your setting notation g vector notation imb unclear seems like a very nice paper but not accessible enough for me to understand all the details
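to make the control-theoretic framing above concrete here is a short derivation (my own reconstruction from the payoff quoted in the first review assuming the interaction sum excludes $j = i$ it is not taken from the paper) the unperturbed best response follows from the first-order condition

```latex
\frac{\partial J_i}{\partial x_i}
  = \alpha_i - x_i + \sum_{j \neq i} g_{ij}\, x_j = 0
\quad\Longrightarrow\quad
x_i^{t+1} = \alpha_i + \sum_{j \neq i} g_{ij}\, x_j^{t} + u_i^{t+1}
```

so the perturbed dynamics form the linear system $x^{t+1} = \alpha + G x^{t} + u^{t+1}$ and learning $G$ becomes a linear system identification problem which is where controllability and observability conditions enter a minimal numerical sketch of the fully observed single-trajectory case (the sizes scales and the assumption that $\alpha$ and the perturbations are known to the learner are all my own choices not the papers)

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 5, 50
G = rng.normal(scale=0.1, size=(n, n))        # hypothetical interaction matrix
np.fill_diagonal(G, 0.0)                      # assume g_ii = 0
alpha = rng.uniform(0.5, 1.0, size=n)         # marginal benefits
u = rng.normal(scale=0.05, size=(T, n))       # perturbations
x = np.zeros((T + 1, n))
for t in range(T):
    x[t + 1] = alpha + G @ x[t] + u[t]        # perturbed best-response update

# x^{t+1} - alpha - u^{t+1} = G x^t, so solve x[:-1] @ G.T = Y for G
Y = x[1:] - alpha - u
GT_hat, *_ = np.linalg.lstsq(x[:-1], Y, rcond=None)
print(np.allclose(GT_hat.T, G, atol=1e-6))    # True when x[:-1] has rank n
```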
### Summary:
in this paper the authors consider linear quadratic network games also known as graphical games and they discuss a number of conditions and procedures to learn the underlying graph of the game from observations of bestresponse trajectories or possibly infinite sets thereof in the game the reviewers initial assessment was overall negative with two reviewers recommending rejection and one giving a borderline positive recommendation the authors rebuttal did not address the concerns of the reviewers recommending rejection and the authors did not provide a revised paper for the reviewers to see how the authors would implement the suggested changes so the overall negative assessment remained after my own reading of the paper i concur with the majority view that the paper has several weaknesses that do not make it a good fit for iclr especially regarding the lack of precision in the theorems and the statement of the relevant assumptions so i am recommending rejection
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this work tackles the task of 6d pose estimation for novel objects given prior observations of a single object of the same category where multiple rgbd images from known relative poses are available for both the template and the query object and the goal is to find a rotation and translation to align the template and the query object the proposed approach is to (a) extract spatial features from all images using a pretrained vit with dimensionality reduction using pca (b) find a rotation that aligns the template and query object using a set of query image template image pairs that match well and are consistent with other pairs and (c) estimate a 7d alignment (r, t, scale) using patchwise matches between the selected pairs and leveraging ransac strengths the paper tackles a relevant problem and considers a natural setup for it given multiple views of a query object and a template object computing alignment between the two is a great formulation for the task of oneshot pose estimation the experiments are reported on the co3d dataset which represents a challenging setup and shows that the proposed method does indeed generalize across objects a key implementation detail that this work highlights is the benefit of feature dimensionality reduction this is wellablated and leads to consistent improvements weaknesses one concern is that the paper relies rather heavily on the framework presented in 17 and the primary contribution here is to extend that framework to leverage multiple query views in a slightly straightforward manner while this obviously leads to significant empirical gains as using multiple views should this does limit the technical contribution moreover this should also necessitate an alternate baseline where one simply uses the approach from 17 to get multiple estimates and chooses the mode while the work does use multiple query views the precise methodology feels somewhat constrained in particular the only patch correspondences that are allowed and ultimately drive the optimization are between patches at the same grid locations across images this is suboptimal for two reasons as (a) it implicitly assumes that images from very similar viewpoints are available for matching which may not be a valid assumption and (b) it cannot leverage observations of matching content from dissimilar viewpoints the handle of a cup might match between a leftview and a rightview and should drive the pose estimation on a note slightly related to the above i am curious why a more traditional sfmlike approach is not considered as the method or at least as a baseline (a sketch of this baseline is given after the reviews below) in particular given depth images the patches can be considered as 3d points with associated deep features one could then construct matches between the 3d points observed for the source and target objects and using ransac optimize the 7d pose to best align them i would be interested to hear why this would not suffice particularly as it is the typical approach that drives largescale structure from motion and in this case the depth and known extrinsics within each object would only simplify the task further comparatively minor concern the continual learning aspect seems a bit weak and is more of a posthoc application rather than something integral to the method in particular (a) the notion of a category does not really keep evolving if one sees multiple chairs the category is still represented by a single template or perhaps a collection of independent templates the text is unclear on this in any case multiple examples of a category do not lead to an evolving notion of the
category (b) the determination of a new vs seen category is simply done via average cosine similarity of the cls token of the vit model and does not have much in common with the approach for 6d pose alignment considering both these the continual learning aspect of the work while tackling a good goal seems a bit adhoc and technically disconnected docsepthis paper presents an approach to object pose estimation at the category level by leveraging multiple views of a single reference object from the category the approach combines advantages of different paradigms deep models and feature matching high dimensional features from a pretrained vit are used for efficient descriptors through dimensionality reduction these sets of image descriptors from multiple views of an object are used to find correspondences between the target and the reference object of the same category an average pose estimate is obtained by filtering the best referencetarget view pairs finally the filtered subset is aligned using a robust least squares estimation to give a 6d object pose the paper builds upon 17 by using multiple reference object views which leads to considerable improvement in performance strengths the paper is well written and supported with relevant media content the technical contribution is incremental yet considerable the proposed approach results in better performance in terms of speed and accuracy than existing category level techniques weakness although the performance improvement in table 1 has promising results extensive evaluations with other techniques would be desirable for eg 14 (you only demonstrate once categorylevel manipulation from single visual demonstration rss 2022) also in line 38 i do not think its well argued do you mean the method in 14 is designed for single object instances please check docsep this paper proposes an approach for oneshot categorylevel pose estimation given m reference views of an object belonging to a specific category which defines a categorylevel pose and n views of a novel instance of that category the task is to define the categorylevel pose estimate the authors propose to use a pretrained dino vit to produce 2d2d semantic correspondences between the reference and target images and then solve for the relative pose by (1) lifting the 2d correspondences to 3d using provided depth images and (2) aligning the correspondence sets using ransac the paper is an extension of prior work 17 note that this work is only on arxiv and i was unable to find any conference/journal publication of 17 in particular the main innovation of this work seems to be in using multiple views $I_r^{1:m}$ of the reference object instead of just one view as in 17 strengths the main strength of the paper is that it tackles an interesting and challenging problem of oneshot categorylevel pose estimation they demonstrate that using multiple reference views leads to significantly improved performance over prior work 17 weaknesses the main experiments (table 1 table 2) assume 10 views of the target object at inference time this is typically not a realistic assumption in most robotics settings and thus requires more justification if this is to be the main test condition of the proposed approach how would this affect the method in terms of the consensus and retrieval steps the robot experiments are quite weak in the current paper more qualitative examples of the pose estimates would be helpful from the video it doesnt seem that the pose estimates would be accurate enough to actually enable any meaningful robot manipulation
tasks eg categorylevel tasks such as those performed in 10 11 categorylevel grasping categorylevel pick and place etc the authors should include at least some comparison of the performance of this method vs what can be achieved with a supervised categorylevel pose estimator such as 6 8 this would allow the reader to understand the tradeoffs between using a supervised approach that has more requirements/restrictions vs a more general approach such as the proposed one a comparison on a standard categorylevel pose estimation benchmark such as the nocs dataset from 6 would make sense docsepthe paper presents a method for categorylevel object pose estimation based on rgbd input the approach does not require poselabeled data 3d models or categoryspecific networks but only relies on a few views of a target object and of a reference one of the same category similarly to 17 the paper shows that features extracted from a pretrained vit can be used as descriptors to find categorylevel correspondences for pose estimation compared to 17 the paper proposes to use multiple views from the reference object and shows that this has a significant impact on the estimated poses further the method is applied in a continual learning setting for novel category discovery the evaluations show that the proposed approach significantly outperforms 17 and the baseline of 13 which however uses purely geometric correspondences the method is demonstrated in a robotic setup to perform realtime pose estimation of a novel object based on a reference object from the same category the motivations and the research gaps addressed by the proposed paper are stated clearly a detailed review of the most relevant related work is provided with a sensible and targeted classification of the previous approaches the paper is generally very well written with virtually no typos or notation inconsistencies with very limited exceptions mentioned in issues the language is clear and the explanations are thorough making the paper easy to read the quality of the presentation is very high despite minor possible improvements i find the visualizations professional and very informative particularly figure 1 which provides an effective overview of the method and figure 2 which illustrates clearly the process of finding correspondences and highlights interesting not obvious points such as the correlation between viewtoview similarities and true geodesic errors the attention to the details is also evident in the video eg the visualization of the actual referencetarget views in the correspondence matrix or the methodoverview figure edited to visualize the moving robot arm a number of minor technical details particularly in the choice of the hyperparameters could be provided for completeness furthermore for multiple explanations and additional results though all supporting nonmajor points the reader is referred to the supplementary material but the latter does not include the aforementioned additions the main point that could be seen as a weakness is that the viewpoint estimation process is heavily based on the previous work of 17 as repeatedly acknowledged and that particularly one of the main differences ie the use of multiple reference views as opposed to a single reference view as in 17 could render the setup more complicated however as shown in the experiments this choice results in a large improvement of the pose estimation accuracy furthermore a number of other relevant insights eg the impact of feature dimensionality or modifications consensus filtering vs
ransac cosine similarity vs l2 distance are presented the method runs in real time and was demonstrated on a real robotic system the whole application in a continual category learning scenario for new category discovery is novel and of high relevance several interesting ideas are introduced or investigated for instance the use of the cls token for category comparison the evaluation of feature similarity in 3d rather than in pixel space or the use of the attention maps to produce a segmentation mask
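the sfm-style baseline suggested in the first review (lift matched patches to 3d with the depth maps then robustly fit the 7d similarity) is easy to sketch a minimal version is below with the solver choice (umeyama) the inlier threshold and all names being my own assumptions rather than anything from the paper

```python
import numpy as np

def umeyama(P, Q):
    """least-squares similarity s, R, t with Q ~ s * R @ P + t (umeyama 1991)."""
    mu_p, mu_q = P.mean(0), Q.mean(0)
    Pc, Qc = P - mu_p, Q - mu_q
    U, D, Vt = np.linalg.svd(Qc.T @ Pc / len(P))   # cross-covariance
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                             # avoid reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (Pc ** 2).sum(1).mean()
    t = mu_q - s * R @ mu_p
    return s, R, t

def ransac_align(P, Q, iters=500, thresh=0.02, seed=0):
    """P, Q: (N, 3) matched 3d points from query / template depth maps."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        s, R, t = umeyama(P[idx], Q[idx])          # fit on a minimal sample
        inl = np.linalg.norm(s * P @ R.T + t - Q, axis=1) < thresh
        if best is None or inl.sum() > best.sum():
            best = inl
    return umeyama(P[best], Q[best])               # refit on the inlier set
```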
### Summary:
reviewers felt positively that the proposed task is important and challenging and the proposed method is reasonable the reviewers also felt that the results are very nice the dataset used for evaluation is challenging the performance is better than baselines in terms of speed and accuracy the results show nice generalization nice ablations on dimensionality reduction the method runs in realtime the paper includes realrobot experiments reviewers also felt that the paper is wellwritten and visualizations are clear and informative reviewers appreciated the response in the rebuttal and the subsequent improvements to the paper at the same time reviewers felt that the paper had some weaknesses especially a lack of comparison to more baselines such as a supervised categorylevel pose estimator 6 8 14 although the setting is different oneshot vs training on a full dataset it would still be helpful to show the gap in performance between such methods to get a sense of how far the performance of the proposed method is compared to methods that can train on more data other useful baselines include teaser or a baseline that uses nocs or nunocs for categorylevel generalization it would also be helpful to show the performance as the number of views at testtime is varied below 10 reviewers also requested to see more qualitative examples of the pose estimates finally reviewers would like to see a more clear explanation of how the groundtruth pose 7d pose is defined across instances of a category the paper would be greatly improved by including these modifications in the final version of the paper
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
first of all i believe the paper is looking into a very important question that has attracted lots of attention recently the set of techniques proposed in the work are also reasonable and practical where the proposed progressive space pruning seems to work very well empirically the obtained models do perform better compared to standard transformer baselines as for the comparison with previous methods since there are too many implementation details that can affect the fairness of the comparison (eg length of pretraining batch size teacher performances etc) its hard to judge the actual scale of the gain there are also a few concerns firstly when the blockwise search is used it feels like nasbert is trained in a way that is more similar to a variant of distillation that additionally utilizes intermediate hidden states as this signal is not used in the standard bert baseline some improvement could actually come from this factor besides a better model architecture/param a better baseline could be a transformer trained in a similar way secondly in terms of novelty this work is more like the combination of existing ideas namely onceforall and blockwise search one general issue with onceforall is that after the search although you obtain multiple models of different sizes the excellence of these models is tied to (1) the specific set of shared parameters obtained and (2) the pretraining task so whether this is really a good way to obtain the desired taskagnostic compressed models is still questionable docsepsummary the paper develops a new method to compress the bert model with varying model sizes depending on the underlying usage they use blockwise neural architecture search to choose the best set of submodules for each of the blocks to reduce the size of the exponential search space they progressively remove the architectural configurations that yield high loss overall the paper is well written and nicely presented +ve nasbert can produce pretrained models with varying model sizes which is better than distilbert and bertpkd which require pretraining every time the number of layers is varied the paper conducts rigorous experiments to demonstrate the effectiveness of nasbert the baseline methods used for comparison cover most of the stateoftheart methods used for model compression for bert concerns can the authors clarify if they save the model parameters corresponding to all the possible architectural choices or they find out the best configuration matching the model size and latency requirements and then do the pretraining again with those architectural choices in appendix a7 the paper demonstrates some of the architectures used by nasbert for lower model size 20m it can be observed that nasbert ends up choosing sepconv layers most of the time do the authors do any analysis on why the sepconv layer works better than the selfattention layer and the feed forward network how does the network perform if it is composed of all sepconv layers have the authors tried to use only sepconv layers and see if that itself gives good accuracy rather than doing the architecture search in terms of original ideas although the concepts of blockwise architecture search using sepconv layers for nlp tasks and using blockwise knowledge distillation are not novel by themselves this paper has efficiently made use of the available techniques along with efficient engineering work like progressively reducing the search space to develop a method that gives good performance on nlp tasks docsepsummary this paper proposes to search
architectures of the bert model under various memory and latency constraints the search algorithm is conducted by pretraining a big supernet that contains all the subnetwork structures where the optimal models for different requirements are selected from it once an architecture is found it is retrained through pretraining/finetuning or twostage distillation for each specific task several approaches (blockwise training and search progressive shrinking performance approximation) are proposed to improve the search efficiency experiments on the glue benchmark show the models found by the proposed methods can achieve better accuracy than some of the previous compressed bert models the paper together with the appendix is clearly presented and the idea is new and interesting to me the experiments are detailed and comprehensive pros the paper is well presented the architecture of the supernet and the candidate operations are carefully designed and selected it seems that the sepconv operation is particularly effective when the model size is small the search algorithm including the blockwise training and progressive shrinking can remove lessoptimal structures quickly and significantly reduce the search space the performance of nasbert models is generally better than that of the compressed bert models with similar model size although the comparisons may not be completely fair concerns 1 the organization of the paper can be further improved the paper may not be easy to follow if the appendix is skipped especially for readers who are not familiar with nas or related work much of the important information can only be found in the appendix 2 the novelty of the paper is unclear to me although this work may be new in searching bertlike language models it seems many of the ideas such as blockwise search and distillation are borrowed from existing work the authors should clarify the main novelties and technical contribution of this work especially to the field of neural architecture search or more broadly automl moreover some of the proposed techniques such as progressive shrinking are merely empirical practices and lack theory or insight showing how accurate the approximation would be 3 it is usually more illustrative and also space saving to plot accuracy versus latency/parameters of different models in the same figure some of the well noted models such as mobilebert and tinybert are not included in the comparison for dynabert there are multiple configurations but only one is included adabert which adopts nas for each specific task should also be included if possible again since there are many models with different size and latency it may be better to have a plot for clear comparison 4 hat (wang et al hardwareaware transformers for efficient natural language processing acl 2020) is not mentioned in the paper which shares similarities (training a supernet) and differences (search algorithm) with this work from a technical point of view it will be better if the authors can explain and compare the proposed search algorithm to evolutionary search docsepthis paper presents an effective nas method for pretrained language models at the pretraining stage so the selected models can be applied to various downstream tasks with finetuning to achieve better performances the widely used twostage distillation and data augmentation are applied to the selected models from the supernet the main contribution of this work lies in the designed search space and the proposed three strategies blockwise search progressive shrinking and performance
approximation for improving search efficiency and accuracy although the novelty of this work is quite limited training a big supernet for bert at the pretraining stage is not trivial and is useful for industry applications the authors evaluate their approach on the glue datasets and compare it to other stateoftheart models the paper is wellwritten and organized and the experiments are thorough however i have several concerns 1 in table 1 under the kd setting twostage distillation is conducted on the models selected from the supernet to further improve the performances it would be interesting to add another two settings (a) only conducting the distillation at the pretraining stage (b) continuing to pretrain on large scale unlabeled data to finally obtain better taskagnostic models 2 the models are evaluated on the glue dataset more experiments on challenging qa tasks should be added 3 in table 1 the comparison to mobilebert and tinybert should be added and the flops or the inference time on cpu/gpu can be provided 4 some important related work should be included (1) hat hardwareaware transformers for efficient natural language processing (although this work focuses on the machine translation task) (2) finding fast transformers oneshot neural architecture search by component composition
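since all three reviews discuss progressive shrinking a toy sketch may help readers outside nas (the candidate operations the scoring function and the keep ratio here are hypothetical stand-ins not the papers actual procedure)

```python
import random

OPS = ["self-attn", "ffn", "sepconv3", "sepconv5"]      # hypothetical op set

def progressive_shrink(archs, eval_loss, rounds=3, keep=0.5):
    """drop the worst-scoring architectures each round; in the real method
    the loss estimates would be refreshed as the supernet trains further."""
    pool = list(archs)
    for _ in range(rounds):
        pool.sort(key=eval_loss)                        # lower loss is better
        pool = pool[: max(1, int(len(pool) * keep))]
    return pool

archs = [tuple(random.choice(OPS) for _ in range(4)) for _ in range(64)]
toy_loss = lambda a: (hash(a) % 1000) / 1000            # stand-in for supernet eval
print(progressive_shrink(archs, toy_loss)[:3])          # surviving candidates
```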
### Summary:
compressing bert is a practically important research direction our main concern on this submission is its practical value compared with mobilebert in the literature nasbert does not show advantages on any aspect (latency prediction performance or model size the last being less important) while being much more costly to build because of nas mobilebert just simply narrowed the original bert models (8x narrower than bert large) so it is hard to convince the readers that adaptive size or nas is interesting or matters on the research side this paper has some interesting points on designing the search space but overall the novelty of this paper is limited as all of the reviewers pointed out it is also worth noticing that the claim of task agnostic in this paper does not fully hold in the downstream tasks the soft labels of the teacher model are required to train the compressed model to be fully task agnostic the results on downstream tasks should be solely based on training with the ground truth labels as in the mobilebert paper once following the exact task agnostic experimental protocol the reported performance in this paper may be significantly lower
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
i am putting weak accept because i think the paper addresses an important problem domain adaptation and has an interesting approach as the other reviewers pointed out its maybe not super novel but its still interesting and pretty readable for the most part i do question the statistical significance of the timit experiments timit has a very tiny test set to start with and by focusing on the female portion only you are further reducing the amount small point i dont think gans are technically nonparametric as the neural nets do have parameters i am a little skeptical that this method would have as general applicability or usefulness as the authors seem to think the reason is that since the cycle constraint no longer exists there is nothing to stop the network from just figuring out the class label of the input say image and treating all the rest of the information in that image as noise the same way a regular noncyclic gan would treat it of course one wouldnt expect a convolutional network to behave like this but in theory it could happen in general cases this is just speculation though personally i would have tended to accept the paper but im not going to argue with the other reviewers who are probably more familiar with gan literature than me i am changing from marginally above acceptance threshold to clear accept after reading the response and thinking about the paper a bit more i acknowledge that the difference from previously published methods is not that large but i still think it has value as its getting quite close to being a practical method for generating fake training data for speech recognition docsepthe authors propose an extension of cycleconsistent adversarial adaptation methods in order to tackle domain adaptation in settings where a limited amount of supervised target data is available though they also validate their model in the standard unsupervised setting as well the method appears to be a natural generalizationextension of cyclegancycada it uses the ideas of the semantic consistency loss and training on adapted data from cycada but fills out the model by applying these techniques in both directions whereas cycada only applied them in the sourcetotarget direction the writing in this paper is a little awkward at times many omitted articles such as the or a but with a few exceptions it is generally easy to understand what the authors are saying they provide experiments in a variety of settings in order to validate their model including both visual domain adaptation and speech domain adaptation the experiments show that their model is effective both in lowresource supervised adaptation settings as well as highresource unsupervised adaptation settings an ablation study provided in section 41 helps to understand how well the various instantiations of the authors model perform indicating that enforcing consistency in both methods is crucial to achieving performance beyond the simple baselines its a little hard to understand how this method stands in comparison to existing work table 3 helps to show that the model can scale up to the highresource setting but it would also be nice to see the reverse comparisons against existing work run in the limited data setting to better understand how much limited data negatively impacts the performance of models that werent designed with this setting in mind i wouldve also liked to see more comparisons against the simple baseline of a classifier trained exclusively on the available supervised target data or with the source and target data 
together in my experience these baselines can prove to be surprisingly strong and would give a better sense of how effective this papers contributions are this corresponds to rows 2 and 3 of table 1 and inspection of the numbers in that table shows that the baseline performance is quite strong even relative to the proposed method so it would be nice to see these numbers in table 2 as well since that table is intended to demonstrate the models effectiveness across a variety of different domain shifts while its nice that the model is experimentally validated on the speech domain the experiment itself is not explained well the speech experiments are hard to understand its unclear what the various training sets are such as adapted male or all data making it hard to understand exactly what numbers should be compared why is there no cyclegan result for female adapted male or all data adapted male for example the paper would greatly benefit from a more careful explanation and analysis of this experimental setting ultimately i think the idea is a nice generalization of previous work and the experiments seem to indicate that the model is effective but the limited scope of the experiments prevents me from being entirely convinced the inclusion of additional baselines and a great deal of clarification on the speech experiments would improve the quality of this paper enormously update after looking over the additional revisions and experiments im bumping this to a weak accept i agree with reviewer 3 that novelty is not the greatest but there is a useful contribution here and the demonstration of its effectiveness on low resource settings is valuable since in a practical setting it is usually feasible to manually label a few examples im still not convinced by the timit experiments now that i better understand them since the fm baseline is quite strong and very simple to run it simply doesnt seem worthwhile to introduce all of this extra machinery for such a marginal improvement but the experiment does serve the job of at least demonstrating an improvement over existing methods docsepthis paper introduces a domain adaptation approach based on the idea of cyclic gan two different algorithms are proposed the first one incorporates a semantic consistency loss based on domainspecific classifiers acting on full cycles of the generators the second one also makes use of domainspecific classifiers but acting either directly on the training samples or on the data mapped from one domain to the other strengths the different terms in the proposed loss functions are well justified the results on lowresource supervised domain adaptation indicate that the method works better than that of motiian et al 2017 weaknesses novelty is limited the two algorithms are essentially small modifications of the semantic consistency term used in hoffman et al 2018 they involve making use of both the source and target classifiers instead of only the source one and for the relaxed version making use of complete cycles instead of just one mapping from one domain to the other while the modifications are justified i find this a bit weak for iclr it is not clear to me why it is worth presenting the relaxed cycleconsistency objective since it always yields worse results than the augmented one in fact at first i thought both objectives would be combined in a single loss and was thus surprised not to see eq 5 appear in algorithm 1 it only became clear when reading the experiments that the authors were treating the two objectives as two
different algorithms note that in addition to not performing as well as the augmented version it is also unclear how the relaxed one could work in the unsupervised scenario experiments in 41 the authors mention that 10 samples per class are available in the target domain are they labeled or unlabeled if labeled are additional unlabeled samples also used in table 1 and in table 3 is there a method that corresponds to cycada i feel that this comparison would be useful considering the similarity that said i also understand that cycada uses both a reconstruction term as in eq 4 and a semantic consistency one whereas here only a semantic reconstruction term is used i therefore suggest that the authors also compare with a baseline that replaces their objective with the semantic consistency one of cycada ie cycada without the reconstruction term in 42 it is again not entirely clear if the authors use only the few labeled samples or if this is complemented with additional unlabeled samples in any event does this reproduce the setting used by motiian et al 2017 as the argument is that the proposed loss is better than the reconstruction one and that of hoffman et al 2018 for lowresource supervised adaptation it would be worth demonstrating this empirically in table 2 summary the proposed objective functions are well motivated but i feel that novelty is too limited and the current set of experiments not sufficient to warrant publication at iclr after response after the authors response/discussion while i appreciate the additional results provided by the authors i still feel that the contribution is a bit weak for iclr
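for concreteness here is how i read the two objectives contrasted in these reviews written as pytorch-style pseudocode (the generator / classifier names and the use of cross-entropy are my assumptions not lifted from the paper)

```python
import torch.nn.functional as F

def relaxed_loss(x_s, y_s, g_st, g_ts, f_s):
    # semantic consistency over a full cycle s -> t -> s, in place of the
    # pixel-level reconstruction term of a standard cyclegan
    return F.cross_entropy(f_s(g_ts(g_st(x_s))), y_s)

def augmented_loss(x_s, y_s, g_st, f_t):
    # classify the translated sample directly with the target-domain classifier
    return F.cross_entropy(f_t(g_st(x_s)), y_s)
```

with the symmetric terms obtained by swapping the roles of the two domains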
### Summary:
the authors propose a method for lowresource domain adaptation where the number of examples available in the target domain is limited the proposed method modifies the basic approach in a cyclegan by augmenting it with a content (taskspecific) loss instead of the standard reconstruction error the authors also demonstrate experimentally that it is important to enforce the loss in both directions target→source and source→target experiments are conducted in both supervised as well as unsupervised settings the main concern expressed by the reviewers relates to the novelty of the approach since it is a relatively straightforward extension of cyclegan/cycada but in the view of a majority of reviewers the work serves as a useful contribution as a practical method for developing systems in lowresource conditions where it is feasible to label a few new instances although the reviewers were not unanimous in their recommendations on balance in the view of the ac the work is a useful contribution with clear and detailed experiments in the revised version
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors propose examining the certainty of neural network predictions for a variety of neural networks that are aimed at solving differential equations while the idea and concept is indeed interesting and important there are quite some fundamental problems that need to be addressed only low dimensional odes and pdes are considered and under restrictive conditions (eg the solution is constrained to some form of gaussian distribution) it is suggested that the authors elaborate on those choices and add experiments that involve more complex pdes specifically in higher dimensions no comparison with other methods is provided docsepthe paper aims to extend some uncertainty quantification methods to nn de solvers the paper would benefit from more concrete explanations of the problem setting line 46 a should be defined is there a citation you can add to support lines 43-44 implications of the assumptions (lines 98-99) why constraining the solution in this form is ok for this problem the results (lines 107-108) are unclear as well as the results from fig 1 and the corresponding conclusions the paper concludes that given the computational expense of sampling during bayesian nn training the latter two methods could be preferable in case of complex des on a multidimensional domain it is unclear why this claim follows from their experiments docsepthe authors propose an adaptation of popular bayesian uncertaintyquantification methods for neural networkbased differential equation solvers the problem is of interest to the community and the proposed method is straightforward but interesting the reviewer has the following concerns which are mainly centered around the scalability of the approach 1 the proposed approach requires the model to be trained twice (lines 97-99) which is computationally expensive and significantly limits the applicability of this method on realworld use cases involving complex high dimensional odes/pdes and a large number of timepoints 2 fixing the epistemic uncertainty hyperparameter $\varepsilon$ (lines 98-99) seems quite adhoc while one can set $\varepsilon$ to be very small for the kind of simple low dimensional odes used in the experiments as the neural network can easily approximate the solution choosing an appropriate value of $\varepsilon$ becomes a nontrivial hyperparameter selection task for complex odes where learning a good approximation becomes challenging for the nn such a scenario is demonstrated within the paper itself as the nn is unable to fit the solution to burgers equation (lines 136-138) the task of choosing $\varepsilon$ adds a hyperparameter selection step to the already expensive twostage training procedure thereby further limiting scalability to a significant extent 3 considering the fact that the model ensures that the initial/boundary conditions are always satisfied (lines 44-47) the uncertainty blowup observed near $t = 0$ for the neural linear model and deep evidential regression experiments (figure 1) is quite concerning and requires further explanation 4 the paper considers only low dimensional and simple ode/pde examples for experimental evaluation applications where uncertainty quantification is of importance generally involve complex highdimensional differential equations it would be interesting to see experiments on low dimensional chaotic dynamical systems such as the lorenz attractor as well as more complex odes/pdes with greater practical significance such as the navierstokes equation 5 benchmarking against other methods for uncertainty quantification such as model ensembles or mc dropout is
absent 6 the choice of fixing the posteriorpredictive distribution to be a gaussian seems nonstandard to the best of my understanding the existing literature on bayesian nns generally uses gaussian priors on the weight space further explanation motivating this choice would be greatly appreciated
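one of the missing baselines the last review asks for (mc dropout) is cheap to sketch the architecture dropout rate and sample count below are arbitrary choices of mine

```python
import torch
import torch.nn as nn

# toy surrogate for an nn de solver: maps time t to the solution u(t)
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Dropout(0.1),
                    nn.Linear(64, 64), nn.Tanh(), nn.Dropout(0.1),
                    nn.Linear(64, 1))

def mc_dropout_predict(t, samples=100):
    net.train()                          # keep dropout active at test time
    with torch.no_grad():
        ys = torch.stack([net(t) for _ in range(samples)])
    return ys.mean(0), ys.std(0)         # predictive mean and epistemic spread
```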
### Summary:
this paper considers the important problem of uncertainty quantification for differential equation solvers the work has been selected as a poster for the workshop the authors may consider this opportunity to address comments made by reviewers
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper proposes a modification over linear chain crf models such that the output space of the model is constrained to be in a regular language linear chain crfs use markov assumption where a given output only directly depends on its immediate neighbours which restricts the influence of distant ones the assumption makes training tractable for the model but restricts its expressive power which could inhibit the performance on longer sequences some approaches have been previously proposed to relax the assumption but the authors claim they have certain drawbacks in terms of performance and expressive power the authors propose a new way to relax the markov assumptions by constraining the output of a crf to be in a regular language they describe a simple way to construct such a constrained crf when given an nfa by setting certain transition and emission probabilities to 0 they then discuss ways to make the algorithm more efficient by making use of equivalences classes and heuristics to minimize nfas to minimize the size of tag sets the authors also discuss the relationship between the constrained crf model and weighted fsts and point certain distinctions in favour of the constrained crf model some previous constrained variants of crfs are trained in the same way as standard crfs and then the constraints are enforced during decoding by setting certain output probabilities to 0 unlike those approaches the proposed approach could be trained and then used in a constrained manner training in a constrained fashion will directly minimize the nll against the data distribution and achieve a better error given output y in regular language l the authors conduct two synthetic experiments to showcase that their model with constrained training is able to better capture nonlocal dependencies and data distributions compared to a model with constrained decoding additionally they show slight improvements on a semantic role labeling task compared to baseline crf models i think the paper proposes a clever approach to deal with an interesting problem the approach and constructions are simple and natural the results in their few experiments also support the effectiveness of the approach i was curious to see if they applied their method to a larger problem and if the approaches to optimize the computational costs made a significant difference they have an interesting discussion on the relationship between their approach and wfsts i understand that in contrast to wfsts this approach does not suffer from the issue of paths with unbounded lengths but does it not arise from the need to induce a regular language before applying the approach which itself could be hard in some cases the paper is very well written and the arguments are very clearly presented i appreciate the extra effort put in by the authors for the reproducibility and accessibility of their implementation i think authors could have explored more synthetic languages to solidify their claims the synthetic experiments were on two very simple languages i think a more systematic exploration even within the hierarchy of regular languages to test the limits of the approach would have been more insightful the improvement in the srl task also seems to be incremental and the need to induce regular language for various practical structured prediction tasks could be difficult in certain cases in summary it seems like a good paper with a simple and clever idea to improve a fundamental model in structured prediction the arguments are made very clearly while presenting 
the idea but the paper lacks enough empirical evidence to back up the efficacy of the proposed idea docsepthis paper describes crfs that are constrained to generate tag sequences that belong to a given regular language regccrf this is useful for example in bio tagging where tag sequences must be of the form obio since crfs do not have hidden state the constraint makes them more powerful on the other hand the claimed advantage of regccrfs over general finite transducers are 1 guarantee that the partition function converges 2 finding the best path is the same as finding the best string 3 the loss function is convex i liked this paper but it took me a little while to overcome my initial hesitations about why one would want to do this first the application to bio tagging is a good one and it would help your presentation enormously to mention this application in the introduction second readers may differ depending on their background but i was initially confused about why one wouldnt want to just use finite transducers this is explained adequately in section 43 but i feel it is a little late perhaps the explanation can be left where it is but summarized in the introduction i think the theorems in section 5 are pretty intuitive and if you are need of space you could relegate the proof of theorem 1 to an appendix in section 6 the experiments are interesting but couldnt you go further and prove formally that a crf is incapable of generating these particular string relations i think there is a typo in the statement of theorem 2 the two sides of the inequality are the same the method improves performance on semantic role labeling but the improvements due to the proposed method as opposed to using roberta are not dramatic nevertheless they produce apparently the new sota on this task on page 8 id like to see a clearer explanation of how the constraint language is constructed there are a lot of magic numbers here that would benefit from explanation and i certainly wouldnt be able to replicate the results from this explanation it had been claimed in section 43 that direct comparison between the proposed method and neuralweighted finite transducers is possible maybe you meant direct theoretical comparison but i definitely would have liked to see an experimental comparison between the two the advantages claimed in section 43 are legitimate but i am not sure how much difference they make in practice i like this paper and just think it needs some improved motivation in the introduction an experimental comparison against neuralweighted finite transducers would strengthen the paper a lot by justifying the claims in section 43 docsepthis paper describes a transformation to add a hard regular language constraint to the output space of a linear chain crf given a nondeterministic finite state automaton nfa describing the crfs output space their method maps the edges in that machine to a new crf tag set and wraps the potential function from the original crf in a function that is aware of the compatibility of adjacent tags based on the edges they represent they prove some basic properties about this transformation and discuss its relation to learning weights in arbitrary fsts finally they provide experiments showing the techniques application to synthetic data as well as semantic role labeling which has natural constraints such as uniqueness of core roles they are able to show small but significant improvements over both an unconstrained ablation and the stateoftheart on this dataset on novelty i think if you 
asked any crf practitioner to incorporate nonlocal hard constraints into a crf their first reaction would be to say i can hack it into the tag set and then they would have to hack the transition and emission feature functions to ignore the new information in the expanded tags to avoid parameter blowup and then the transition potential function to enforce the constraint i have done this they would eventually wind up with a oneoff solution that looks very much like what the transformation described here would have handed them now the engineer is given another problem to design a finitestate machine describing the output language and to make it have as few edges as possible i would argue that this is a step forward and a worthwhile ml contribution strengths this is a useful addition to the crf toolbox it is a nice clean formalism for adding regular language constraints and by making the connection explicitly to regular languages and nfas it opens up the possibility of fruitful crosspollination with formal language research the paper is wellwritten and very easy to follow the authors get out ahead of the inevitable question of the utility of their work in the face of more general work on learning weights for fsts i think section 43 is strong and gives a good argument why this contribution has value in the face of previous work the authors took the time to show that the technique helps even in the context of a stateoftheart model weaknesses crfs quadratic dependence on tag set size and the mapping from nfa edges to tags means that for many constraint sets this solution will be infeasible in general i found 42 on tagset minimization a little hard to follow the advice amounts to minimize manually and apply nfa minimization where applicable i think an example of an organic application of nfa minimization plus a citation to the algorithm the authors have in mind would go a long way toward improving it i didnt find the proofs in section 5 particularly compelling they were easy to follow but they extend almost trivially from definitions in particular the impact of training with constraints seems to 1 assume perfect minimization of the training objective and 2 ignore generalization error entirely however the synthetic data experiments do a good job compensating for this the realworld experiment is not particularly convincing its very good that the authors report statistical significance because the deltas in performance look very very small i also think that its good that they attribute much of their strong performance to roberta i think it would be informative to include the number constraint failures in the unconstrained model it would also be informative to include an unconstrained model that has the same computational concessions as the constrained model removal of rare labels this paper formalizes and systematizes how to incorporate regular language constraints into crf training and inference this simplifies the incorporation of constraints and makes it clear when they will become computationally infeasible it also provides exciting hooks into formal language theory for future contributions the experiments are not superexciting nor are the proofs but the framework is a nice addition to crfs overall docsepthis paper claims to propose a generalization version of crf regularconstrained crf regccrf compared with traditional crf it can not only model local interdependencies but also incorporate nonlocal constraints for the model specifically by specifying the space of possible output structures as a 
regular language it assigns zero probability to all label sequences not in the language to achieve the goal the paper spends a lot of space analyzing the difference between the proposed regccrf and constrained decoding markov relaxation methods and finitestate transducers and gives two settings constrained training and constrained decoding but as far as the actual implementation is concerned only the introduction of a new tagwise crf combined with the traditional labelwise crf is not very novel in addition the experimental part is seriously lacking and there is no adequate experimental comparison and ablation study of the proposed method strength 1 considers the problem of external constraints in the crf model and proposes a solution to address the nonlocal dependency issue 2 gives a detailed mathematical description and proof of the proposed method weakness 1 the author mentioned that regccrf can incorporate their constraints during training while related models only enforce constraints during decoding but in theory i cant see what gains can be brought to the model by using constraints during training in the current version because maximum likelihood estimation is used on the golden data during training the labels will strictly follow the constraints so the constraints cannot bring any redundant information so i dont see any advantages for supervised training under general data situations on the contrary i think it may be useful for training in unsupervised situations but it has not been studied 2 the article claims that constrained training is substantially better than constrained decoding in practice but unfortunately the actual results in table 2 are not in line with this point the difference between the two is 0.31 because the srl test set is not large this result is far from significant 3 for regccrf what about the training efficiency and decoding efficiency and how does it perform compared to a traditional crf none of these questions have been answered in this article so this obviously cannot make regccrf a replacement for the traditional crf 4 the paper mentioned a lot of related work but no comparisons were made in the experiments such as semimarkov crf skipchain crf and constrained decoding (constrained beam search) which makes the performance of regccrf difficult to discuss 5 since the added tagwise crf can be viewed as a particularly wellbehaved special case of fst weight learning for an appropriately chosen transducer architecture and parameterization a baseline that needs to be compared is to directly use an rnn to simulate the fst its speed is obviously faster and it can be trained 6 the results reported by the article on srl are only an improvement over the baseline which is not significant compared to the traditional crf model and the results have not reached sota i have read the response and it has addressed parts of my concerns so i raised my score although the discussion is very interesting the current version of the experiments did not meet the requirements for publication and many parts of the method were not properly studied
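a compact sketch of the edge-as-tag construction that several of the reviews describe (the nfa encoding and all names are mine in the crf the forbidden transitions and the start / final tags would receive potential $-\infty$ ie probability zero)

```python
import numpy as np

def regccrf_tagset(edges, start, accept):
    """edges: list of (src, dst, label) nfa transitions; each edge becomes a
    crf tag, and tag j may be followed by tag i iff edge j ends where i starts."""
    E = len(edges)
    trans_ok = np.zeros((E, E), dtype=bool)
    for j, (_, v, _) in enumerate(edges):
        for i, (u, _, _) in enumerate(edges):
            trans_ok[j, i] = (v == u)              # transition j -> i allowed
    start_ok = np.array([u == start for (u, _, _) in edges])
    final_ok = np.array([v in accept for (_, v, _) in edges])
    labels = [lab for (_, _, lab) in edges]        # emissions scored via labels
    return trans_ok, start_ok, final_ok, labels
```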
### Summary:
|
this paper does as its title suggests it introduces an algorithm for constraining a crfs output space to correspond to a prespecified regular language the authors build upon a wealth of prior work aiming to enable crfs to capture particular nonlocal dependencies and output constraints and present a coherent general algorithm to specify such constraints with a regular language this is a clearly presented and well motivated contribution the reviewers predominantly agree that this work is clearly and rigorously presented and that the formalisation of constraints for crfs through regular languages is a useful contribution for practitioners one reviewer questioned the utility of constraining the output distribution at training time in response the authors convincingly argue that unconstrained models will fail to learn the data generating distribution when nonlocal constraints exist in the data and have included a clear synthetic example of this in the paper the most significant weakness identified in this paper is the limited experimentation consisting of one synthetic experiment and an application to semantic role labelling the key motivation for formalising constraints on crfs with regular languages is the argument that this allows model builders to use a familiar formalism across disparate tasks rather than producing bespoke solutions for each as such it would be informative when assessing the contribution of this work to see a number of practical examples of task output spaces formalised as regular languages such that we can form an intuition for how natural this representation is for more than one task while also shedding light on the ease or otherwise of the crucial process of minimising the representation to maximise efficiency while the application to a broader range of tasks would definitely strengthen this paper in its current form it provides a useful formalism that will be of interest to those working in structured learning and as such is a contribution worthy of publication
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper presents a new dataset with several benchmark models the goal of the dataset is to learn physical understanding where the physical understanding is measured by the models or humans ability to predict collisions of basic objects based on a short video the dataset is very relevant for learning physical understanding of the world the evaluation is very general such that many types of models can be used the dataset is challenging humans cannot get perfect scores and visionbased models are just slightly better than random substantial baseline work several relevant baseline models are presented this covers visionbased models vgg transformer based particlebased models that take the gt physical states as input and human evaluation where humans are asked to estimate collision or no collision based on small video clips the environments seem rather similar fig 1 same floor walls etc thus we cannot expect visionbased models to generalize to real world or other datasets if trained on this dataset docsephere the authors provide a dataset of 17200 videos of simulated physical interactions containing 2000 training videos and 150 test videos for 8 distinct problem scenarios the dataset provides a clearcut problem of predicting a final contact state between objects based upon an initial configuration and momentum overall the main claims of the paper are that the dataset provides a more comprehensive challenge than existing benchmarks for physical prediction and reasoning the authors provide a resource for the critical problem of testing physical prediction and reasoning capabilities of machine learning models the dataset is comprehensive and encompasses a diversity of physical scenarios and the problem formulation is elegant distilling the task of physical prediction down to a centralized definition that is granular enough to be useful i really like the baseline dataset that the authors provide of human physical prediction ability the authors collect annotations from 800 human participants and compare models against human reasoning capabilities on the same dataset the fact that current models fall short of this baseline means that there is still ample room for our current machine learning methods to grow and having benchmarks like these can really support this aim overall this work promises to provide a firstofitskind dataset for a challenging problem that has potential to advance the stateoftheart however as i will explain in the weaknesses and feedback to authors section i feel that there are some cumulative problems with the work in its current state that make it fall short of achieving this goal on a practical level i really want to like this paper however a major issue is that the training dataset is difficult to access because it has to be generated directly from code the authors provide and the code is not documented well enough to explain how to generate both training datasets but additionally there are numerous smaller issues with the clarity of the writing the explanation and characterization of the dataset and plan or lack thereof for maintenance of the dataset that are not fatal in isolation but in combination result in a manuscript that is rough around the edges these cumulative issues detract from the utility and reproducibility of the dataset as well as knowledge of what claims are possible with it i provide a more detailed list of these weaknesses in the feedback to author section docsepthis paper studies the problem of whether machines can make future predictions for realworld physical events
and proposes a new benchmark named physion it simulates photorealistic videos with the tdw simulator 24 the benchmark contains diverse physical scenarios including dominoes support collide contain drop link roll and drape with dense and accurate annotations like color depth normals various baselines are conducted and analyzed among which dpi with objectcentric graph neural networks performs the best 1 the benchmark is valuable for evaluating machines intuitive physical understanding of the world which is of great importance for human cognitive understanding and robotic planning and control 2 specifically the benchmark provides dense pixellevel annotations while containing more diverse physics events and being more similar to realworld videos than previous videos from simulators eg 6 29 3 various baselines are evaluated and analyzed showing to what extent the previous dynamic models understand the physical world 4 a valuable and timeconsuming human testing is provided showing how humans perform on the new benchmark and its comparison with machine algorithms while the reviewer is positive about the benchmarks the reviewer still has some concerns or suggestions on the project 1 one concern is about the implementation of the baselines the reviewer appreciates that the authors provide original code links however according to the reviewers understanding baselines like rpin and dpi are based on object proposalcentric representations how do the authors handle and model the softbody collisions for them will the implementation of the baselines for the new benchmark be available to the public 2 the performance comparison in fig 5 is easy to understand however it will be hard for future researchers to copy the numbers for comparison it will be better to provide its table versions in supplementary 3 the dpi is based on groundtruth annotations from the simulator which makes its comparisons with other baselines unfair although the reviewer knows that a fair performance comparison is not the main focus of the paper 4 recently transformers a b have shown strong performance for spatiotemporal reasoning it is interesting to evaluate their performance on this new benchmark 5 the benchmark provides dense pixellevel annotations like depth and normals however it seems that the baselines make no use of them how will these visual properties help with physics understanding a zhou honglu et al hopper multihop transformer for spatiotemporal reasoning iclr 2021 b ding d hill f santoro a et al objectbased attention for spatiotemporal reasoning outperforming neurosymbolic models with flexible distributed architectures arxiv 2021
### Summary:
|
the paper proposes a challenging dataset to learn physical understanding along with several benchmark models the paper is very wellwritten and the reviewers agreed that the paper should be accepted the physical understanding of the world is a very fundamental problem in ai research this paper proposes a set of challenges to investigate it further it is a very important challenge to build better modelbased rl algorithms and generalpurpose ai agents the paper tests several different ml models and compares them against the human performances on those tasks the experiments seem to be carefully designed and wellthoughtout the authors addressed the concerns raised by the reviewers during the rebuttals quite well
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors produce a framework and metrics for comparing three visual search models and use these to identify differences between the approaches they adapt the ideal bayesian searcher model to allow it to work with a wider range of datasets and highlight how using a unified framework can produce better algorithms in general the motivation is compelling and clear experiments are well thought out and clearly described includes a thorough analysis of model differences appendix is extensive introduction describes background and necessary knowledge that motivates the primary contributions of the paper the paper does not fully explain why the authors made many of the choices described in the paper such as modeldataset selection metric selection etcetera this may cause readers to question the efficacy of the proposed evaluation framework given that the evaluation framework is a key contribution of the paper it would have been nice to include a more extensive discussion detailing the benefits and downsides of using the chosen metrics given that producing a framework for model comparison is a primary contribution it would be helpful to possibly include a full section detailing how the models are modified to allow direct comparison see clarity section docsepthe authors propose a standardised procedure for evaluating both humanderived datatsets and models of visual search in a single framework the procedure is evaluated on a set of publiclyavailable models and datasets and the results are discussed in providing a unifying framework for evaluating models and datasets the authors aim to make it possible to test disparate models and datasets within a standard set of criteria the paper builds on a poster presentation from the svrhm workshop at neurips 2021 the exposition is expanded significantly and a great deal of supplementary material which i did not review in detail is provided overall i believe this paper constitutes a valuable move to provide a standard set of analysis criteria for visual search and a standardised framework in which to perform this analysis highlighting some of the challenges faced by a suite of existing models for clarity i am not a domain expert in visual search and so i am unable to make strong statements as to whether the analysis is complete or whether there are missing references to other common datasets and benchmarks hence my low confidence score my review is based on my understanding of the technical merits of the analysis the overall clarity of the exposition and the quality of the accompanying code and documentation the paper is directly and broadly relevant to the domain of visual search and provides a framework in which analysis of models can be be performed which is in itself a valuable contribution to the field the analysis provided covers a range of metrics which attempt to give a full picture of the similarities in scanpaths in a range of different contexts and which are ultimately summarised into a single metric for the comparison of human and modelbased sanpaths the authors provide a codebase which makes a number of different models and datasets available within a common framework making the analysis very accessible for others to build on given that the analysis provided is of existing datasets and models i dont see any obvious ethical concerns questions of subject consent should have been dealt with by the original authors of the published datasets involved the benchmarks are very specific to visual search and quite a lot of effort has been put into assessing 
a disparate range of models on the same criteria it would be interesting to comment on how these benchmarks might be applied beyond the immediate domain and how the lessons learned on unifying analysis could be applied when working on related problems docsepthis submission aims to provide a benchmark for visual search behavior of natural scene images and a set of metrics and a common pipeline to evaluate model performance saccade is a natural behavior in human vision and gaze position prediction is a major component of computer vision having a standard benchmark dataset to evaluate the gaze predictability across models has great value to the community this submission evaluated three publicly available models of human visual search behavior with preprocessed public datasets and provided a common framework for model evaluation the submission contains a wellorganized github repo which clearly summarized preprocessed human scanpaths data from four different publicly available datasets metrics auc multimatch human scanpath prediction example models to be tested and testing results this submission raised an important challenge which is how to evaluate model performance on human gaze prediction the diversity of tasks involving human subjects makes it challenging to compare model performance if each model uses its own dataset for example the scanpath for a goal driven visual search task would be drastically different from a free exploring task i think creating a common dataset as proposed here has great value to the broader community and has potential for both computer vision and neuroscience there is one major limitation for this submission as a benchmark paper as specifically pointed out in the paper the existing datasets for human gaze prediction are for different purposes with different experimental designs the submission tried its best to preprocess the data for common features however different datasets are still created under different experimental constraints it would be ideal if there existed a dataset for the benchmarking purpose which takes all the issues the paper raised into account docsepthey compared stateoftheart visual search models in natural scenes they benchmarked three publicly available models and defined a common set of metrics and baseline models and brought them together into a single pipeline code is opensourced the experiments conducted on the four datasets are comprehensive i have a hard time finding contributions presented by this work aside from running these baseline models on these datasets it would be nice if the authors can list their contributions in a bullet point fashion why is it that only 4 datasets are used the author said that it is essential to always evaluate the visual search models in a diverse collection of datasets but 4 does not seem to be enough simulated datasets could also be considered right docsepthe paper summarizes the various datasets and methods currently used in the prediction of human gaze positions during observation of images first it presents the many different datasets used for this task and highlights the key differences between each of the datasets namely there is large variability in the number of participants surveyed total number of images size of the targets format of the targets stopping conditions of gaze tracking and other attributes then the different visual search models that are currently used are presented as well as the metrics used to evaluate the performance of these models many different experiments are performed with
these datasets models and metrics the authors conclude that there is still a large performance gap between these existing methods and the gold standard baseline model the paper is concluded with a call for a more unified set of data and evaluation metrics for the task of visual search computation 1 the paper presents a very thorough and detailed comparison between the existing datasets and models in the task of visual search the key differences are made clear and the lack of uniformity and standardization is made readily apparent 2 the results presented in the paper highlight the room for improvement in the task of visual search computation and seem to be reproducible the work clearly shows that each method considered has benefits and drawbacks and does a good job of providing a reasonable explanation and interpretation of the results while this paper does a good job of reviewing the current methods in visual search and even providing quantitative results of existing models on existing datasets evaluated using existing metrics the paper does not provide a better alternative for unifying the evaluation procedure in other words while the paper convinces the reader that there is a need to unify the criteria used to evaluate visual search methods it does not provide such a unified criteria a more significant novel contribution could come in the form of a metric that has a generalizable ability to compare performance across different datasets and models the only novel contribution seems to be applying existing models to existing datasets and evaluating them using existing metrics in this way this paper seems to be more suited as a review paper rather than a benchmark paper all of the results and analyses presented in this paper seem to be correct however while they are interesting they do not constitute a significant enough contribution to the effort of benchmarking model performance in visual search
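For readers unfamiliar with scanpath comparison, the sketch below shows one simple metric family from this space: quantising fixations onto a coarse grid and scoring two scanpaths by normalised string edit distance. This is not the benchmark's actual MultiMatch or AUC implementation; the grid size, image size, and example fixations are invented for illustration.

```python
import numpy as np

def grid_string(scanpath, img_size=(768, 1024), grid=(5, 5)):
    """Quantise (x, y) fixations onto a coarse grid, one symbol per fixation."""
    out = []
    for x, y in scanpath:
        gx = min(int(x / img_size[1] * grid[1]), grid[1] - 1)
        gy = min(int(y / img_size[0] * grid[0]), grid[0] - 1)
        out.append(gy * grid[1] + gx)
    return out

def levenshtein(a, b):
    """Classic edit distance by dynamic programming over one rolling row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def scanpath_similarity(p1, p2):
    """1.0 for identical gridded scanpaths, 0.0 for maximally different ones."""
    s1, s2 = grid_string(p1), grid_string(p2)
    return 1.0 - levenshtein(s1, s2) / max(len(s1), len(s2), 1)

human = [(100, 200), (400, 300), (700, 900)]   # invented (x, y) fixations
model = [(120, 180), (420, 310), (200, 500)]
print(scanpath_similarity(human, model))
```

A unified benchmark of the kind the reviews call for would fix choices like the grid resolution and the treatment of unequal scanpath lengths once, so that different models and datasets are scored under identical conventions.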
### Summary:
|
there are extensive discussions between authors and reviewers the authors have done an excellent job addressing the raised issues during the rebuttal phase as such reviewers raised their scores overall there is sufficient support to accept this paper from the reviewers the authors should include the replies to reviewers and revise this paper before the conference date
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
in this paper the authors revisit the problem of multiclass classification and propose to use pairwise similarities more accurately what they use is the cooccurrence pattern of labels instead of node labels thus having less stringent requirements for supervision their framework has broader applicability in supervised and semisupervised classification and in unsupervised crosstask transfer learning among others pros the idea of using pairwise similarities to enable a binary classifier to encapsulate a multiclass classifier is neat cons my main gripe is with the conditional independence assumption on pairwise similarities which the authors use to simplify the likelihood down to a crossentropy such an assumption seems too simple to be useful in problems with complicated dependence structure yes the authors conduct some experiments to show that their algorithms achieve good performance in some benchmark datasets but a careful discussion if possible theoretical of when such an assumption is viable and when it is an oversimplification is necessary analogous assumptions are used in naive bayes or variational bayes for simplifying the likelihood but those are much more flexible and we know when they are useful and when not secondly by using cooccurrence patterns one throws away identifiability the latent labels are only learnable up to a permutation unless external information is available this point is not made clear in the paper and the authors should describe how they overcome this in their supervised classification experiments docsepthis paper proposed how to learn multiclass classifiers without multiclass labels the main idea is shown in figure 2 to regard the multiclass labels as hidden variables and optimize the likelihood of the input variables and the binary similarity labels the difference from existing approaches is also illustrated in figure 1 namely existing methods have binary classifiers inside multiclass classifiers while the proposed method has multiclass classifiers inside binary classifiers the application of this technique to three general problem settings is discussed see figure 3 clarity overall it is very well written i just have two concerns first the authors didnt discuss the underlying assumption of the proposed method except the additional independence assumption i think there should be more underlying assumptions for example by the definition p(sij) is 0 or 1 depending on whether yi = yj and the optimization of l(theta; x, s) does the cluster assumption play a role in it the cluster assumption is popular in unsupervisedsemisupervised learning and metric learning where the x part of training data is in the form of pairs or triples however there is no such assumption in the original supervised multiclass learning without figuring out the underlying assumptions it is difficult to get why the proposed method works and when it may fail second there are too many abbreviations without full names and some of them seem rather important such as kld and kcl i think full names of them should be given for the first time they appear this good habit can make your audience more broad in the long run novelty as far as i know the proposed approach is novel it is clear that section 3 is original however due to the writing style it is hard to analyze which part in section 4 is novel and which part is already known this should be carefully revised in the final version moreover there was a paper in icml 2018 entitled classification from pairwise similarity and unlabeled data in which binary classifiers can be trained strictly
following erm without introducing the cluster assumption the same technique can be used for learning from pairwise dissimilarity and unlabeled data as well as from pairwise similarity and dissimilarity data i think this paper should be included in section 2 the related work significance i didnt carefully check all experimental details but the experimental results look quite nice and promising given the fact that the technique used in this paper can be applied to many different tasks in machine learning ranging from supervised learning to unsupervised learning i think this paper should be considered significant nevertheless i have a major concern as follows in order to derive eq 2 the authors imposed an additional independence assumption given xi and xj sij is independent of all other sij hence eqs 2 and 3 approximately hold instead of exactly hold some comments should be given on how realistic this assumption is or equivalently how close eqs 1 and 3 are one more minor concern why p(x) appears in eq 1 and then disappears in eqs 2 and 3 when y is marginalized docsepthe work is a special case of density estimation problems in statistics with the use of conditional independence assumptions to learn the joint distribution of nodes while the work appears to be impressive such ideas have typically been used in statistics and machine learning very widely over the years eg belief propagation topic modeling with anchor words assumptions etc this work could be easily extended to multiclass classifications where each node belongs to multiple classes it would be interesting to know the authors thoughts on that the hard classification rule in the paper seems to be too restrictive to be of use in practical scenarios and soft classification would be a useful pragmatic alternative
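As a concrete illustration of the likelihood these reviews debate (a sketch under the stated pairwise independence assumption, not the authors' exact code): with class posteriors f(x), the probability that a pair is similar is sum_c f_c(x_i) f_c(x_j), and the objective reduces to a binary cross-entropy over pairs.

```python
import numpy as np

def pairwise_similarity_loss(probs, S):
    """probs: (n, k) class posteriors f(x) from any multiclass softmax head.
    S: (n, n) binary similarity labels, S[i, j] = 1 iff x_i, x_j share a class.
    Under the pairwise independence assumption discussed above,
    p(s_ij = 1 | x_i, x_j) = sum_c f_c(x_i) f_c(x_j), so the (approximate)
    likelihood factorises into a binary cross-entropy over pairs."""
    P = probs @ probs.T                       # p(s_ij = 1) for every pair
    eps = 1e-12
    ll = S * np.log(P + eps) + (1 - S) * np.log(1 - P + eps)
    iu = np.triu_indices_from(P, k=1)         # count each unordered pair once
    return -ll[iu].mean()

# Toy check: posteriors consistent with the similarity labels give low loss.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])
S = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1]])
print(pairwise_similarity_loss(probs, S))
# Permuting the columns of probs leaves the loss unchanged, which is exactly
# the identifiability-up-to-permutation issue raised in the first review.
print(pairwise_similarity_loss(probs[:, ::-1], S))
```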
### Summary:
|
this paper provides a technique to learn multiclass classifiers without multiclass labels by modeling the multiclass labels as hidden variables and optimizing the likelihood of the input variables and the binary similarity labels the majority of reviewers voted to accept
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
contribution using a known parameters crystallography simulator xray beam structure being analyzed environment crystalline or not built a dataset called diffranet of 25000 512x512 grayscale labeled images of resulting diffraction images of various materialsstructures crystalline or not carried out various classification approaches on the dataset labelled images in two steps feature extraction scale invariant feature transform with the bagofvisualwords approach as local feature extractor and the graylevel cooccurrence matrix and local binary patterns as global feature extractor then classification of the diffraction images is carried out with three approaches two using images described by extracted features from the previous feature extraction step coupled with either random forests or support vector machines and a third consisting of a convolutional neural network cnn topology named deepfreak the images are classified according to the diffraction patterns they encompass into one of 5 classes blank nocrystal weak good and strong the last three describing the presence of a crystalline structure a fine tuning step of the various algorithms was carried out using automl optimization tools all algorithms were off the shelf publicly available implementations and have previously been used for such domain applications crystallography patterns the approach and choices of classification algorithms are well articulated and the results interesting a few questions though in what way the diffraction images are synthetic arent they actual diffraction images but in a known and controlled setting set of parameters beam structure to analyze o more like a library of diffraction pattern images for various materialsstructures how many structures were analyzed were there 25000 for the 25000 pattern images one image each o this is to understand the representability of the samples structures analyzed regarding the possible structures hundreds of thousands as per papers 21 what variations for each of the setting variabilities x ray beamflux beam size divergence and bandpass crystal properties unit cell number of cells and a structure factor table and the experimental parameters sources of background noise detector pointspread and shadows were used o this is to assess the size of the pattern space were any reallife setting obtained pattern samples classified using diffranet dataset patterns finetuned classification algorithms o this is to assess the generalization level of the diffranet dataset patterns finetuned classification algorithms to reallife obtained patterns relates to the previously stated representability of the samples o if not your statement we plan to add new images and new classes that are common place in serial crystallography in 6 conclusions would be an appreciated validation of general usability of your diffranet finetuned setting were all the structures analyzed crystalline o its stated in figure 2 and table 6 that 2 classes are either blank or nocrystal but is that a known fact purposely chosen or no pattern images for crystalline structures due to inadequate experimental settings to uncover the crystalline nature of the analyzed structure were the pattern images preprocessed in any manner before being classified nota in table 6 use nocrystal class as in figure 2 for consistency docsepthis paper introduces a purely synthetic dataset for crystallography diffraction patterns for this very specific application domain this might be a very welcome approach however i feel the paper is not strong enough for iclr
for two reasons 1 the scope is too narrow for iclr only a limited readership will be interested in this specific problem since the contribution is mainly on the dataset level and not on the methodological level i suggest submitting such an article in venues more focused on the application domain i can see no contribution which is general enough to be interesting for the broader readership a new dataset might be of interest if it is a really challenging one where current approaches cannot yield high performance levels while it would be easy for domain experts to recognize 2 the experiments are only on synthetic data i agree that synthetic data in general can be very useful if generated correctly this has been shown in many works for a substantial article contribution one should however in general add much more exhaustive experiments besides analyzing the behavior on synthetic data one should perform tests on real data and see the influence of eg pretraining on this synthetic dataset furthermore comparison to pretraining on other datasets should be performed docsepthe paper describes a new open synthetic dataset for serial crystallography generated by a simulator three methods are proposed and implemented to demonstrate the classification of these diffraction images the results from these methods are compared and clearly show which ones achieve high performance the article structure is clear and is well written the experiments are carried out in a professional way and statistical analysis is shown it will be better if the authors can demonstrate how the models obtained from training on the synthetic data perform in a real scenario please also add some discussion on how well the synthetic data simulate the real data some image comparison between the synthetic data and real data should be analysed
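To illustrate the classical branch of the pipeline described above (handcrafted features fed to a random forest), here is a hedged sketch of one named component, a local-binary-pattern histogram with a random forest classifier. The random 64x64 images and five placeholder classes are stand-ins; the real experiments would load DiffraNet's 512x512 diffraction images labelled blank / nocrystal / weak / good / strong, and all hyperparameters here are arbitrary.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def lbp_histogram(img, P=8, R=1):
    """Global descriptor: normalised histogram of uniform local binary patterns."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Stand-in data: random 8-bit grayscale images with 5 placeholder classes.
rng = np.random.default_rng(0)
X = rng.integers(0, 256, size=(100, 64, 64), dtype=np.uint8)
y = rng.integers(0, 5, size=100)

feats = np.stack([lbp_histogram(img) for img in X])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(feats, y)
print("train accuracy:", clf.score(feats, y))
```

The generalization question raised in the first review could be probed by fitting exactly such a pipeline on the synthetic images and scoring it on real beamline patterns.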
### Summary:
|
reviewer ratings varied radically from a 3 to an 8 however the reviewer rating the paper as 8 provided extremely little justification for their rating the reviewers providing lower ratings gave more detailed reviews and also engaged in discussion with the authors ultimately neither decided to champion the paper and therefore i cannot recommend acceptance
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the proposed method is interesting and well explained authors carefully present enough mathematical and algorithmic details to make the proposed method highly reproducible authors present both quantitative and qualitative results some technical terms are not sufficiently explained important differences between train and test metrics seem to indicate overfitting authors should better discuss this point more details about the computation of the edges of the diffusion connectivity matrix are needed docsep interesting topic nice gcn task clean presentation nice evaluation with multiple alternative methods novel albeit iterative method multimodal ann rsfmri only gcn brainnetcnn dictionary learning ann dynamic deepgenerative hybrid gcn with the proposed method showing improved results in most comparisons interesting results on prediction single vs multiple continuous symptom scores the proposed research is somewhat iterative over existing graph convolutional networks including the authors prior work it is unclear which comparative results reach statistically significant differences among the methods docsepthis is an abstract of very high quality it is clear concise but does not cut corners the presented work is well described the loss function is explicitly stated as are the graph convolutional operations the inputs and outputs of the networks are well presented and the figures are helpful each comparison method is well defined and relevant including the ablated versions of the proposed method two datasets were used for the experiments and each experiment is explained as well as it should be results are complete well presented and meaningful it is the opinion of the reviewer that besides minor comments the main weakness of the paper is in the presentation of the results performance measures at training time for both experiments and all metrics are reported but only contribute to hiding how the proposed work performs better at test time than the other methods while transparency is important test time performance is arguably what is most important and if the results at train time were to be removed the proposed work would shine even more docsepthe ideas in this paper are well motivated and there is no argument that new methods which effectively integrate dmri and fmri would be useful for a number of different problems the methods are clearly presented the figures are well made and clearly described in the text the method is validated against several other methods which would be considered state of the art for this domain results do seem improved relative to baselines however sometimes the differences are marginal my slight concern with this paper is whether it would still outperform other methods if they were to use the hcp data processed in the most subject specific sensitive way here the authors state that they use only 275 out of 1000 hcp subjects and only 15mins instead of 1hr of resting state fmri it is well known that fmri connectivity matrices improve in stability with more data they also process data in the volume rather than using the image processing advances of the hcp pipeline which is designed to improve the groupwise consistency of fmri data on the surface finally why use the aal atlas rather than the hcp parcellation which is optimised for cortical organisation while the authors state that this is to put the model on comparable settings to clinical data would it not be good to first test how well it works on the best quality data especially when the goal is to predict such a challenging
phenotype as fluid intelligence it would actually be good to have a graph plotting the effect of performance relative to the data sets size and length of fmri i would be less concerned about this if the results showed stronger improvement relative to past methods however the reported differences are small and it is difficult to tell whether the reported improvements are statistically significant also i think the small data sets used here are a problem because the plots figure 4 especially seem to suggest that the model is mostly predicting the mean for the test examples suggesting overfitting
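For context on the graph convolutional operations these reviews refer to, here is a generic Kipf-and-Welling-style propagation step over a connectivity matrix. This is a textbook layer, not the paper's exact operator on interdependent structural and functional networks; the 90 regions (matching the AAL atlas mentioned above) and all values are illustrative.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One generic graph convolution: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W),
    with A an (n, n) weighted connectivity matrix and X an (n, f) matrix of
    per-region (ROI) features."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)   # ReLU

rng = np.random.default_rng(0)
n_rois, f_in, f_out = 90, 16, 8              # 90 regions as in the AAL atlas
A = np.abs(rng.standard_normal((n_rois, n_rois)))
A = (A + A.T) / 2                            # symmetrise like a connectome
X = rng.standard_normal((n_rois, f_in))
W = rng.standard_normal((f_in, f_out)) * 0.1
print(gcn_layer(A, X, W).shape)              # -> (90, 8)
```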
### Summary:
|
the paper proposes a graph convolutional network that simultaneously operates on two interdependent networks namely structural and functional brain connectivity networks this is both relevant and challenging the paper is well written and presents supporting experiments there is some criticism of the experiments for not using the specialized hcp preprocessing but rather approaching a more typical clinical preprocessing scheme leaving questions of whether the method would still outperform state of the art if the network had the data quality implied by the hcp preprocessing the results are nevertheless relevant as most clinical or even research data also does not have hcp quality
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper introduces a method for code editing from few examples it allows one to automatically generalize code edits from few support examples via adaptive multiextent composition the approach is evaluated on two standard code editing datasets c fixer and py fixer against the graph2edit baselines specifically a graph2edit model variant applied to a randomly selected support example and an average graph2edit representation over all k support examples the method leads to a 810 improvement over baselines source code editing is a fundamental software engineering task which has recently attracted a lot of attention in the ml community it is used in various se applications code refactoring and program repair the paper introduces the problem of code editing from few examples and formulates the method to use multiple examples in training and inference while the task of code editing from few examples is new it is conceptually similar to programming by example pbe authors might find it useful to mention program synthesis by examples in related work the paper is relatively difficult to read and some of the terminology is not explained is there a precise definition of editorial style of source code program repair is arguably not a style change and may lead to a different code semantics see introduction first paragraph on the other hand programming style is a well established concept which is different from editing style which needs to be made more clear in text it is not clear from the text how to identify helpful support examples for a given dataset for the learning stage can you describe a procedurealgorithm to select supporting examples baselines in code editing alignment plays a crucial role does the ae baseline perform an average of aligned representations if not please consider adding an average with alignment minor composite them to guide editing should read combine them to guide editing or compose them to guide editing consider rewriting the abstract to highlight the main contributions and the key methods used the paper introduces a task of code editing from few support examples and formulates the learning and inference methods to implement the composition method on top of the graph2edit method while the evaluation is sound and shows an improvement of accuracy as a function of the number of support exemplars one practical challenge in applying this technique would be to identify groups of support examples identifying the edit intent which could lead to problems applying this method at scale or in the product
representations of the programs a hyperparameter lambda is used to control the granularity at which the tree fragments are matched the representations of the input program query and a support program are computed by nodelevel crossattention this operation is quadratic in the size of asts i was wondering if the tree structure can be used to reduce this cost the paper conducts experiments on datasets from two languages c and python with graph2tree and graph2edit as the underlying neural editors the results show that the multiextent matching does better than all other choices the paper aggregates edit representations from multiple examples using a number of complex steps however the examples come from a single type of fixer and share the same intent i am not convinced that the paper demonstrates the need to combine multiple edit representations in particular it might suffice to use a single mostsuitable exemplar i would therefore like to see singleexample baselines that select the nearest neighbor from the support set i using a discrete measure like tree edit distance over asts with terminals and without terminals and ii in the vector space note that the singleexample baselines considered in the experiments graph2treers and graph2editrs use random selection and all other baselines consider all the examples together the number of examples is also small in this case an iterative baseline that takes each of the examples is also required to demonstrate that the combination of multiple examples is indeed better than any of the examples separately a qualitative discussion of combining multiple edit representations would also be helpful the paper presents a technique to combine multiple edit representations for neural code editors the paper is difficult to understand at times it makes some technical contribution and provides experimental evidence however there is a conceptual gap about the need for combining representations from multiple examples though at a highlevel it is conceivable that multiple examples could help the paper does not provide evidence that a systematically chosen single example or an iterative approach which goes over all examples and takes the best output would not be sufficient docsepthis paper tackles the task of applying an edit to code that is similar to some prior edits think of a linter edit that replaces double quotes with single quotes in python prior work has addressed such a task by embedding an edit exemplar (before and after snapshots of a file for such an edit) and the before snapshot for a new example seeking to predict the after snapshot that adapts the edit exemplar to the new before snapshot however this work recognizes that generalizing from a single exemplar can push baseline models to overfit the pattern modifications of the exemplar instead this work (a) enables a model to see multiple exemplars and (b) uses a similarity ranking estimator to weigh different exemplars in a way specific to the query (the new before snapshot) so as to influence the prediction accordingly this similarity formulation is parametrized by how much individual good alignment matches between exemplars and the new before snapshot should dominate the overall prediction versus a broadly good alignment match across multiple parts of the various exemplars an ensemble approach combines multiple values for this parameter on a number of datasets generated by applying stock c and python fixers to github code this new approach outperforms baselines the motivation for the problem was well presented and
made sense a single example edit can lead a model astray and figuring out which of multiple exemplars to imitate seems like a very useful facility in your approach your querysupport matching functions section 31 have similarities to attention but some differences as well for instance attention is all about alignment and your similarity functions are essentially trying to align the query graph nodes to the support snippet graph nodes it would be instructive in the paper to explain the differences and why this is not attention in general the formulation of section 31 is not very well motivated and could use some explanation and intuition eg the difference between mqkk and msk q and what each captures that the other cannot also it would be good to explain how you exactly implement the two phi functions in section 31 i had to guess that they are some feedforward network from what you say but its only a guess i found the term extent highly confusing especially since its associated with the lambda parameter which doesnt have a physical interpretation i wonder if resolution might be a better term or something else extent has to do with coverage or area if you can intuitively demonstrate how different values of lambda represent some notion of extent id love to see it something i didnt quite get from your presentation is what makes a good set of exemplars you argue that more are better for your approach which makes intuitive sense but presumably its just as easy to have a bunch of exemplars that happen to modify a declaration as in support 1 of case a in your example as it is to have one of those your model would be just as likely to be convinced that the rewrite only applies to declaration statements perhaps you were lucky in your random draws from exemplars in that you ended up with a diverse set that improved the prediction but why would that be the case in general in fact what is the use case for this multiexemplar edit imitation are you expecting a developer to write a few example edits and let the model apply them to diverse queries i couldnt quite tell what kind of use case youre anticipating will there be noise in the exemplars ie are they always correct in your datasets the exemplars are generated by oracle fixers but im guessing in practice you wouldnt have those or else you wouldnt need your tool in the first place nevertheless i found the contribution interesting and different from prior related work questions q1 how many exemplars are enough is more always better is there some measurable metric of fitness for a set of exemplars eg based on diversity or some other similar metric q2 whats the usecase for this multiexemplar task would you expect an editor to provide this functionality do you anticipate a different kind of usage mode for this sort of multiextent input q3 whats the relationship between your querysupport matching formulation and standard notions of attention smaller issues 1 in equation 1 activations have g sk as a superscript in equation 2 they have gsk please be consistent 2 the descriptions of the ours baselines are confusing given the complex formulation of your model it would help if you pointed out which function of which equation youre replacing with the graph edit distance and cosine similarity respectively or just gave a precise alternate formulation task targeted is not well motivated not clear how you apply it if you dont have the fixers or why you would use it if you have the fixers but the approach seems novel and interesting docsepthis paper presents a fewshot 
learning approach to editing source code with a few exemplars unlike previous work that learns code editing with only one exemplar the proposed approach learns the edit representations from a few exemplars the proposed approach combines edit representations extracted from editing exemplars and compositionally generalizes them to the query code snippet editing via multiextent similarities ensemble the proposed approach models the extents of nodetree representations using a softmax with an adaptive composition of multiple extents experimental results on two code editing datasets demonstrate some improvement over baseline models this paper addresses an important task in software engineering modifying source code towards a desired editing style could be a common programming activity the paper proposed a new dlbased model that can outperform two existing models the results are somewhat unsurprising it is obvious that providing more examples can improve the generalization ability of a machine learning model code editing styles can be better learned from more editing exemplars so the results are expected the related work of this paper is described in a way that is too brief there has been much work on code similarity and code changes for example na meng et al lase locating and applying systematic edits by learning from examples in proc icse 2013 502511 tian haoye et al evaluating representation learning of code changes for predicting patch correctness in program repair 2020 35th ieeeacm international conference on automated software engineering ase ieee 2020 j zhang et al a novel neural source code representation based on abstract syntax tree in proc icse 2019 may 2019 note that the above lase meng et al 2013 work is a traditional program analysis work that produces systematic edits by learning from multiple examples the paper can also compare with traditional approaches such as lase currently it is only compared with two deep learning based baseline models the most severe issue of this paper is on the evaluation the evaluation metrics used in the paper macro and micro accuracy are not explained therefore it is difficult to interpret the experimental results a human evaluation is needed to evaluate the effectiveness of the tool in practice it would also be better to discuss some unsuccessful examples of code editing also it is not clear how the syntax correctness is assured by the proposed approach to capture the multiextent matching we design a softmax function by scaling the importance of nodes in an abstract syntax tree the proposed softmax is derived heuristically the variable is not trainable so it is difficult to know how it can be learned to reflect the extents of nodetree representations the presentation can be enhanced if the authors provide more theoretical analysis of the proposed method with respect to ast representations minor on page 7 bringing a absolute improvement should be bringing an absolute improvement and our gde should be our ged the results are somewhat expected the evaluation part is unclear comparison with more existing work including traditional approaches should be performed
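As a schematic of the lambda-scaled softmax idea these reviews debate (a sketch, not the paper's exact matching functions): per-node similarities between the query AST and each support exemplar are aggregated with a temperature-like lambda, so a small lambda spreads weight broadly over nodes while a large lambda lets a few sharply matching nodes dominate, and the resulting scores weight the exemplars' edit representations. All shapes and values below are invented.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def exemplar_weights(node_sims, lam):
    """node_sims: one (n_query_nodes,) vector per support exemplar, holding
    best-match similarities of query AST nodes against that exemplar.
    lam rescales node importance inside the softmax: small lam averages
    broadly over nodes (coarse extent), large lam lets a few sharp matches
    dominate (fine extent)."""
    scores = [(softmax(lam * s) * s).sum() for s in node_sims]
    return softmax(np.array(scores))            # weights over the support set

rng = np.random.default_rng(0)
node_sims = [rng.random(12) for _ in range(3)]  # 3 support exemplars
edit_reprs = rng.standard_normal((3, 64))       # their edit representations
for lam in (0.1, 1.0, 10.0):
    w = exemplar_weights(node_sims, lam)
    combined = w @ edit_reprs                   # query-conditioned mixture
    print(lam, np.round(w, 3))
```

An ensemble over several lambda values, as the second review describes, would simply average or concatenate the combined representations produced at each lambda.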
### Summary:
|
this paper proposes learning to make stylistic code edits semantics remains similar based on information from a few exemplars instead of one the proposed method first parses the code into abstract syntax trees and then uses the multiextent similarity ensemble this was compared to a graph2edit baseline on c fixer and pyfixer which are datasets generated by rulebased transcompilers the proposed method got around 10 accuracy improvement due to a combination of the method and using more than one exemplar the reviewers find any improvement due to more exemplars to be expected and suggested that 1 one carefully chosen exemplar is enough and 2 that the need for multiple exemplars means more practical difficulties in providing them in an application 3 the targets are all generated by rulebased methods and the benefits may not extend to a realistic case where the edits are not so clear and the reviewers wondered about the application value and the potential need for human evaluations the authors argued that it is unexpectedly difficult to expand the base method to multiple exemplars and users should be able to provide exemplars in an application the authors further provided additional results that addressed some of the reviewers concerns but the reviewers did not change their evaluation rejection is recommended based on reviewer consensus
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
c1 open set learning which is a task of classifying data over the known categories while identifying data that are ood is widely studied in 2d in contrast there were no 3d datasets or benchmarks that attempt to solve the open set problems c2 they formalize the open set learning problem and benchmark typical methods they evaluate two scores auroc area under the receiver operating characteristic and fpr95 false positive rate of ood examples when choosing a normality threshold for which the true positive rate of indistribution examples is 95 c3 they organize their data from well known 3d object datasets shapenetcore modelnet40 and scanobjectnn on top of the proposed data they also organize two benchmarks synthetic to synthetic s2s and synthetic to real s2r c4 they evaluate various approaches that are feasible to handle challenges of open set learning they compare for 1 discriminative models 2 density and reconstruction based models 3 outlier exposure with ood generated data and 4 representation and distance based models s1 although im not an expert in this area i did not have problems reading this article the paper is wellwritten and claims are followed by supporting evidence s2 welldesigned experiments and tasks motivation is clear and paper is well organized s3 they experimentally demonstrate that traditional methods on the open set 2d task do not fit the open set 3d task indicating there are many challenges that researchers should solve in the proposed task w1 changing the number of points can also affect the performance in other words both train and test set share the number of points on data so that we cannot clearly say that the split sets have different domains considering realworld situations such as autonomous driving the authors should also handle this issue w2 according to table 1 the reported score does not show a clear tendency for me it was hard to interpret the message from the table 1 it would be better to change the table to a graph and put the table on the supplementary materials w3 for section 33 many contents here are repeated from section 22 it would be better to put the details on the supplementary materials docsepthe authors introduce a novel open set 3d learning testbed based on 3 existing 3d datasets with several experiment settings in terms of category semantic shift and also covers both indomain and crossdomain scenarios they also provide extensive comparisons on these benchmarks and give detailed analysis over them 1 this paper proposes a first testbed for open set 3d learning which will facilitate the future research and applications on robots and autonomous systems 2 this paper investigates several algorithms extensively and reveals their strengths and limitations for further research on open set 3d models 1 the pointnet backbone always outperforms dgcnn on the synthetic to real benchmark while dgcnn outperforms the pointnet on the synthetic benchmark could the authors provide a deeper analysis on it 2 the synth real benchmark seems to be very sensitive to data augmentation tricks i doubt whether the dataset settings are proper and consistent to verify the capability of the models 3 the overall novelty is limited since it only reorganizes three existing 3d datasets and conducts experiments with several existing methods 4 results on a real to real benchmark are desired 5 some typos should be fixed eg l115 yt should be yt l328 tab 43 should be tab 4 docsepthis paper focuses on building a benchmark on ood detection task for point cloud it is the first benchmark for open set 3d learning the
proposed benchmark considers two tracks synthetic synthetic and synthetic real which is reasonable for the 3d scenario to better analyze the situation on ood detection the authors adopt a series of 2d ood detection methods and directly apply them on point cloud classification methods the experiments discover the sota method for open set 3d learning and analyze the advantages and limitations of existing methods the paper presents the point cloud open set learning benchmark firstly and considers two benchmark tracks for synthetic and real point clouds the problem definition is clear the metrics for evaluating the performance are fairly correct and datasets are sufficient for point cloud ood detection a thorough analysis of 2d ood detection methods in 3d situation is explicitly demonstrated in experiments the results show that 3d open set learning has space for improvement since sota 2d methods are not good enough in 3d situation 1 this paper builds a new benchmark for 3d open set learning which is necessary for 3d literature however there is little discussion about the difference between the 2d and 3d open set learning one obvious difference is the input data structure besides the benchmark almost depends on the 2d counterpart i think the discussion about the challenges for building 3d compared to 2d open set learning is essential 2 the performance metrics need more detailed definitions with formulations and citations since the paper builds a new benchmark for a new task 3 the baseline approach msp 18 also uses the aupr metric for evaluation and the mean pred prob as a comparison for the advantages of new metrics auroc aupr but this paper only uses the auroc i think authors need an explanation for metric selection since a reasonable benchmark should include comprehensive metrics 4 for the synthetic to real benchmark it introduces the domain shift in the task however the benchmark is for evaluating ood detection and focusing on discriminating ood examples the domain gap is another variable influencing the results in tab 3 a real to real track may better analyze the 3d open set learning for comparison docsepthe paper presents a benchmark for outofdistribution detection on 3d point clouds their benchmark comprises two tracks the first one is purely synthetic and based on the shapenetcore dataset in the second one the training set is also synthetic and based on modelnet40 while the test set contains real point clouds from scanobjectnn the authors benchmark multiple ood methods and reveal room for future improvement the paper tackles an important problem that has been largely ignored by the research community so far namely the detection of ood samples in point cloud classification the provided benchmark takes a good first step into filling this gap the authors evaluate numerous existing methods that are wellselected their benchmark indicates that existing ood methods from the 2d domain do not transfer well to 3d this leaves room for future research to improve in this area my main concern is that all methods are trained on exclusively synthetic samples the benchmark lacks an evaluation of methods on point cloud data obtained from real sensors furthermore all samples are of relatively low resolution 1024 3d points hence an evaluation on this benchmark leaves open the question of how well the results transfer to more realistic realworld settings the synthetic benchmark consists of three subsets that are described as increasingly difficult in the paper l152 however i did not find an explanation as to why one subset
should be more difficult than another
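To make the metric question in point 3 concrete, here is a minimal sketch of how an MSP-style OOD score and the AUROC/AUPR metrics discussed above are typically computed; the logits are random placeholders standing in for a point cloud classifier's outputs, not the paper's models or data.

```python
# Minimal illustration (not from the paper) of MSP scoring and AUROC/AUPR.
import numpy as np
from scipy.special import softmax
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
logits_id = rng.normal(2.0, 1.0, size=(500, 10))   # in-distribution logits
logits_ood = rng.normal(0.0, 1.0, size=(500, 10))  # out-of-distribution logits

# MSP score: maximum softmax probability; higher means "more in-distribution".
msp_id = softmax(logits_id, axis=1).max(axis=1)
msp_ood = softmax(logits_ood, axis=1).max(axis=1)

scores = np.concatenate([msp_id, msp_ood])
labels = np.concatenate([np.ones(len(msp_id)), np.zeros(len(msp_ood))])  # 1 = ID

print("AUROC:", roc_auc_score(labels, scores))
print("AUPR (ID positive):", average_precision_score(labels, scores))
# AUPR with OOD as the positive class, as reported in the MSP paper:
print("AUPR (OOD positive):", average_precision_score(1 - labels, -scores))
```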
### Summary:
|
This work is the first to propose studying the task of open-set 3D learning for point cloud data. The authors have conducted extensive experiments under different settings with varied category semantic shifts and provided comprehensive experiments benchmarking popular methods from 2D open-set learning, which leads to some important conclusions about the transferability of the 2D methods to the 3D settings. The contribution of the work is clear and novel, and the AC believes the paper provides important findings for the community to be aware of. During the rebuttal, reviewers raised concerns regarding the lack of a real-to-real setting and some missing evaluations/metrics; the authors added the requested materials in the revised paper, which addressed most of the raised issues. While the reviewers asked some minor questions, the authors have carefully addressed them, and the AC does not consider them major issues preventing acceptance of this paper. The final scores are 3 accepts and 1 reject, and two reviewers confirmed their final decisions after the rebuttal. The reviewer who gave the reject review did not come back to respond to the authors' rebuttal, and the AC thinks the authors addressed his/her questions well. Therefore, the AC is very confident in recommending acceptance of this work to the track, but please carefully revise the final paper for the camera-ready submission based on the reviewers' suggestions. Congratulations.
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
The paper proposes to use BERT-style masked language modeling to amortize inference across assignments of variables in probabilistic programs, particularly Stan programs. This learning of an inference network across programs is a form of meta-amortization, implemented as compiled inference: optimizing the q(z|x) distribution that maximizes p(x, z), using KL(p||q) instead of KL(q||p). It allows efficient inference in complex programs, finding high-likelihood assignments to variables with low variance in the importance weights. It combines the masked language modeling objective of random tokens with masking out just some entries, attempts to study the learned attention weights to see whether the model is learning meaningful causal structure/relations, and performs somewhere between NUTS (best) and ADVI (cheaper) on a dataset drawn from PosteriorDB. MLI (masked language inference) does as well as ADVI with a single sample and improves given more time. Originality: this is a fairly original idea, combining two powerful ideas to more effectively amortize and speed up inference in flexible probabilistic models (BERT for PPLs). It is the first paper I am aware of that uses BERT and transformers for inference in probabilistic programs / graphical models; similar ideas appear in other instances of compiled inference, including applications of LSTMs to learn samplers for sites in probabilistic programs (see Tuan Anh Le's thesis and etamulis). Quality: well executed, using an established PPL (Stan) to evaluate performance; reports both the likelihood of the sampled values and the variance in importance weights, a helpful diagnostic and indicator of the quality of the approximate posterior; impressive results compared to standard baselines. Clarity: well written, illustrates the main points, and provides compact background on the motivation for probabilistic programs and approximate inference using compiled inference networks; I believe the results are straightforward to reproduce, and the experiments are targeted and make a convincing case for the components of the method and in comparison to standard inference methods built into Stan. Significance: suggests a way to use the rapid advances in large language models to aid efficient inference in existing probabilistic programming languages; this is likely to spark significant follow-up work on learning meta-amortized inference models capable of outperforming existing optimizers across a wide range of applications. No significant negative societal impact.

This paper uses meta-amortized learning to learn a neural network that approximates not just a family of posterior distributions but one that is generalized across different probabilistic programs. This is done by using compiled inference to generate a corpus of probabilistic programs, which are then used to fine-tune a transformer model; the language model is trained to predict different posterior distributions based on different observations. Originality: the paper seems very similar to the work of Gwonsoo Che and Hongseok Yang, and I'm struggling to distinguish them; the main difference seems to be the use of large transformer models instead of something that follows the structure of a particular probabilistic program. Technical quality: the work mostly focuses on toy problems, and the results are promising for them, but I am concerned that a massive language model is overkill for handling the kinds of toy problems used in this paper. The experiments involving Stan and PosteriorDB are very compelling, though it is a shame that it doesn't outperform NUTS. I would also have liked to see this work compared to the work of Che and Yang from 2021. I am also concerned that there doesn't seem to be any benefit in using an augmented dataset from one toy problem to do inference in another toy problem; without achieving that kind of generalization, what makes this method better than just using compiled inference in each toy domain? Clarity: the paper is well written and easy to follow; I had no difficulty understanding the method, though I have not tried to replicate the experiments. Significance: I think this work has tremendous potential, as being able to do amortized inference over a set of related probabilistic programs would be very useful; if this can be shown to generalize across several domains, even related ones, it would make the work much more compelling. I feel the authors have adequately addressed the limitations of the work and its potential negative social impacts.

This paper formulates probabilistic inference as masked language modeling. In particular, it proposes a meta-amortized inference algorithm for probabilistic programs, where a foundation posterior is constructed that can be fine-tuned downstream to perform inference over various programs; the authors demonstrate the efficacy of this approach on a benchmark of Stan programs. Originality/significance: good. It is interesting to see the authors interpret probabilistic inference from a different angle and bridge ideas from unsupervised learning (language modeling in particular); as discussed in the related work section, I believe this idea is novel and has solid foundations, which opens up interesting research directions. Quality: good; the paper appears to be technically sound. Clarity: good; the paper is well organized and clearly written, with adequate intuitions as well as details of the proposed methods, and it is enjoyable to read. Weaknesses: one main concern is the computational burden, which I explain in detail in the following section. N/A.

This paper presents a method for learning a language model that can perform approximate inference in probabilistic programs. Specifically, given a set of probabilistic programs, a dataset of masked, annotated programs is generated and then trained on using a linear combination of a masked language modeling loss and a variational inference loss. An extra head on the language model outputs the parameters of a variational distribution for the current token; this variational distribution can then be used as an approximation of the posterior distribution and can be further fine-tuned on individual programs to achieve even better performance. Experiments on both toy programs and a real benchmark demonstrate that the learned model provides reasonable posterior probabilities. Originality: to my knowledge this work is quite original; while language models have been applied to any number of tasks, it seems there are still more places where they can be applied successfully, in this case posterior inference for probabilistic programs. Quality and clarity: both are high; while some additional information should be provided about the inference procedure (see questions), the overall exposition is quite clear, and the contribution and empirical validation are of high quality. Significance: this paper presents an interesting and novel method that I think will be of some interest to the community, both as regards the ability of a language model to learn meaningful representations for posterior inference and as a new method for posterior inference that incorporates ideas from modern machine learning (i.e., transformers over large amounts of data). More work is still needed for it to become a method used for the latter, but this paper solidly demonstrates that such work may be possible. Overall, this paper was well written and interesting to read. While the approach presented here is a nice step, I wonder whether using the proposed method as a prior or proposal for a method with more guarantees would be more useful in general; I see that this is mentioned in the conclusion and Figure 6, and I'd encourage the authors to further explore such directions. My main complaint is that I found Section 3.3 to be somewhat poorly executed and explained: why look at average L2 norms instead of the attention weights from the transformer itself? It is hard to assign much meaning to the particular values of these norms, and the writing and explanation could use some editing and rethinking to make the results clearer and more compelling. Minor: I think the term "foundation prior" may be more appropriate, particularly if you frame fine-tuning as an inference procedure that incorporates data into a prior to produce a posterior. The authors wrote a very nice discussion of the limitations of the proposed approach, which I much appreciated. No potential negative societal impacts were mentioned, but it is not clear to me that there are any immediate such impacts of this work.
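Several of the reviews above lean on the importance-weight diagnostic: the variance of the weights w = p(x, z)/q(z|x) as an indicator of approximate-posterior quality. Below is a toy numpy sketch of that diagnostic with a deliberately mis-calibrated Gaussian standing in for the amortized posterior; all model choices and numbers here are illustrative assumptions, not taken from the paper.

```python
# Toy sketch of the importance-weight diagnostic: draw z ~ q(z|x),
# weight by p(x, z)/q(z|x), then inspect weight variance / ESS.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=20)                 # observed data

# model: z ~ N(0, 1), x_i | z ~ N(z, 1); the true posterior is Gaussian here
def log_joint(z):
    return norm.logpdf(z, 0, 1) + norm.logpdf(x[None, :], z[:, None], 1).sum(axis=1)

# hypothetical amortized posterior q(z|x) = N(mu_q, sigma_q), e.g. produced
# by an inference network; deliberately slightly off for illustration
mu_q, sigma_q = 0.8, 0.4
z = rng.normal(mu_q, sigma_q, size=5000)
log_w = log_joint(z) - norm.logpdf(z, mu_q, sigma_q)

log_w -= log_w.max()                              # numerical stabilization
w = np.exp(log_w); w /= w.sum()
ess = 1.0 / np.sum(w ** 2)                        # effective sample size
print("normalized weight variance:", w.var(), " ESS:", ess)
```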
### Summary:
|
This paper proposes a novel and interesting perspective on leveraging large masked language models as a way to initialize posterior distributions across probabilistic programming language (PPL) tasks; the idea is that this distribution can later be fine-tuned over different probabilistic programs. All reviewers acknowledged that the idea is a novel application of masked language models and, despite being a natural analogy to the way these models are already used in NLP, can be potentially impactful for amortized inference in PPLs. The paper is accepted upon the incorporation of the discussions that emerged during the rebuttal concerning related work and presentation. I also advise the authors to think about substituting the term "foundation" with a more precise technical term ("masked language model posteriors", "transformer posteriors"); imprecise umbrella terms nowadays bring more noise than help and inflate the hype around simple concepts.
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
Response to authors: the authors have largely responded well to my original concerns; however, after reading through the discussions with the other reviewers, I agree with reviewer 2 that more work is required to make this publishable. In particular, this should include comparisons to the other methods suggested and justification of the use of the UMAP loss function. Given this, I have downgraded my score accordingly. Original review: this paper presents a parametric approach for UMAP, a dimensionality reduction method. This area is of interest to the community, as dimensionality reduction can be useful in many different tasks such as visualization, semi-supervised learning, etc. If my comments below can be addressed, I would be willing to increase my score. Pros: the parametric version presented here appears to work well in the experiments given, and the incorporation of the UMAP loss directly in a neural network as a regularization is also interesting. Cons: some of the results don't appear to have a corresponding figure or table (e.g., reconstruction accuracy in Section 3.3); these should be included. UMAP tends to inherit some of the weaknesses of t-SNE, as it tends to overemphasize local structure at the expense of global structure; in particular, it has been shown [R1] that UMAP and t-SNE are basically equivalent when using the same initialization. Could similar results be obtained by using the same initialization as UMAP instead of including the regularization term? UMAP is traditionally used for visualization, but some of the applications presented (e.g., SSL) require or would benefit from higher dimensions than 2 or 3; t-SNE is known to be considerably slower for higher dimensions, so does UMAP inherit this problem? If so, that should be mentioned as a potential drawback. All of the figures are given in the appendix; while this allows for more results, I think it would be a better paper if some of the figures were included in the main paper and some results were moved to the appendix. The authors should verify that their references are as up to date as possible; for example, the PHATE paper should be updated to the Nature Biotechnology version, not bioRxiv. [R1] Kobak and Linderman, https://www.biorxiv.org/content/10.1101/2019.12.19.877522v1

Update following discussion: following the revision by the authors and the discussion with them, I am updating my score from 3 (clear rejection) to 4 (ok but not good enough, rejection). This reflects in great part the revision the authors made to have the main paper limited to 8 pages, be self-contained, and present their main results, while using the appendices for complementary and technical information. However, I still maintain that the paper is not ready for publication in its current form. The extension of UMAP to implement the optimization via a neural network applied to input data, rather than directly assigning coordinates, is rather straightforward. The advantages it provides over UMAP, in terms of natural inference on new data without the need for a separate, more computationally intensive out-of-sample extension method, are a direct result of this neural network implementation, and they would hold not only for UMAP but for any method implemented in a parametric way via a neural network, compared to nonparametric coordinate assignment. Similarly, allowing the addition of reconstruction or classification objectives in training is clearly a direct byproduct of this neural network implementation as well, and not unique to UMAP. Therefore, an important question has to be asked here: is the UMAP loss indeed a good choice of loss term to impose on networks, for example to enable visualization or improve various tasks? The authors already look into this to some extent by comparing to parametric t-SNE as one alternative approach, but there are many others, as I mention in the initial review, relying on constructions from topological data analysis and manifold learning. Most if not all of these rely on some graph construction on the data and then ensure that the coordinates provided by a hidden layer in the network match the relations encoded in the graph, similar to the proposed UMAP loss term. How are reconstruction and classification affected by using such other regularizations compared to the UMAP one? Is inference speed the same for these other approaches? How does training speed compare between them? One can clearly expect some tradeoff between such properties and the geometric information encoded by different methods (UMAP and t-SNE emphasize clusters, while other methods may emphasize other patterns), but this should be discussed and demonstrated clearly, rather than ignoring the vast amount of related work on parametric approaches to capturing intrinsic geometry in data. Beyond the described lack of relevant comparisons for autoencoding and semi-supervised classification, even simply as a parametric implementation of UMAP (which would be a rather narrow scope and not very enticing as a motivation on its own), I am not sure this work is sufficient to establish the presented approach. First, regarding inference or embedding speed: this is essentially an out-of-sample extension task; as such, even if one insists on only comparing to UMAP-based methods, there are multiple OOS methods that can be used, such as Nyström, geometric harmonics, etc. Some analysis of the tradeoff between extension quality and speed seems warranted here, but as I said previously, a comparison should also be provided to other parametric embedding methods beyond just OOS of UMAP (and t-SNE, for that matter). Second, as the authors clarified in discussion, their approach relies on the suitability of the UMAP loss to be incorporated directly in the network optimization, essentially comparing activations to the UMAP graph. An alternative approach presented in related work is to provide a loss term between activations and a UMAP embedding; this second approach is more general, since other embeddings can also be considered there, but it probably also has some disadvantages, for example the a priori fixed dimensionality, as the authors suggest. The differences between these two approaches should be addressed better in the manuscript, and importantly, since previous work already exists on the embedding-loss approach, the authors should present a comparison establishing the benefits of the graph-based-loss one, in addition to a discussion regarding them. To conclude, the idea behind this work seems reasonable, albeit rather straightforward, since it is a reimplementation of the UMAP optimization; however, as it currently stands, I find it is not mature enough for publication, and it would need a non-negligible amount of work to properly position the contribution compared to previous and related work. I would like to encourage the authors to invest the time in adding such comparisons and clarifying not only how they are different from other methods, but also how they are better, and why one would choose UMAP to begin with as the basis for the proposed loss terms, compared to various other approaches (not just t-SNE). Initial review: before getting into the details of this work, I note that in my opinion it should have been desk rejected for violating the page limit, based on the way it is written. The main 8 pages of the paper are far from self-contained and regularly reference materials from the appendix as integral parts that are crucial for the presentation and understanding; these include not only methodological illustrations but also all results establishing the method. In fact, the main paper here does not show any result; it only describes the setting for getting them, and all the figures and tables showing results appear solely in the appendix. If we are to ignore the appendices and only judge the paper based on these main eight pages, then there is no support, no results, and very little in the way of presenting the method; if, on the other hand, we include the result figures as integral parts of the main paper, as they should be, then it clearly has significantly more than eight pages. Considering that most papers submitted to this conference do try to provide a coherent and relatively self-contained presentation of their work within the page limit, according to the guidelines of the conference, while leaving only technical and supporting details to the appendix, I believe it would be inappropriate to consider this work as meeting the conference page limit. As for the work itself, this paper presents a rather naive attempt to combine the UMAP visualization with deep learning. It essentially proposes to consider the coordinates optimized by UMAP as resulting from a neural network applied to input data; then, instead of adjusting these coordinates directly via the UMAP optimization, the method continues to backpropagate the coordinate optimization through the network. This provides a parametric model, via a feed-forward neural network, that embeds the data in low dimensions while preserving the local neighborhood structure, in the same sense that UMAP, t-SNE, and LargeVis do with their nonparametric approaches. This neural network formulation can also naturally be extended to consider other loss terms, such as the reconstruction loss of autoencoders or any predictive loss (classification, regression, etc.), enabling supervised visualization. From a methodological perspective, this is a pretty straightforward extension of the UMAP optimization and does not indicate a clear advantage over it for the main task of unsupervised visualization or dimensionality reduction, neither in embedding quality nor in scalability. The authors show some interesting results (albeit only in the appendix and not in the main paper) on supervised visualization and out-of-sample extension speed, but these are not compared to relevant baselines that directly aim to address those tasks. Moreover, there is significant related work that is either ignored by the authors or just mentioned in passing in the appendix without proper discussion and comparison with the proposed method. For example, in A.4 the authors mention topological autoencoders, connectivity-optimized representation learning, scvis, VAE-SNE, geometry-regularized autoencoders, ivis, and differential embedding networks, but they do not compare their work to any of these, even though such comparison seems highly relevant here. More work that is completely ignored includes, for example, diffusion nets (Mishne et al., 2015), Laplacian autoencoders (Jia et al., 2015), and DIMAL (Pai et al., 2019). Finally, briefly looking at Duque et al. (2020), cited here: while the main method there uses PHATE coordinates to regularize autoencoders, it seems they have also proposed the incorporation of UMAP loss terms in autoencoders, albeit only mentioned somewhat as a side note (together with t-SNE regularization) in their appendix. A discussion about the difference between these two approaches should be added to the main paper, and it seems some comparison between them should also be presented to establish the advantages of the proposed approach. Hence, even without the page-limit argument, it does not seem the work presented here reaches the ICLR acceptance threshold without major revision to its presentation, discussion, and results, and I must therefore recommend its rejection at this stage.

The authors propose a parametric version of UMAP by replacing the sampled embeddings in the optimization of UMAP with directly learned weights of a neural network. The paper is very well and clearly written, but I have several significant concerns. 1. I don't see significant methodological novelty: replacing embeddings with neural network learning seems quite basic and straightforward. It is certainly a cherry on top of original UMAP, but I am not sure it can be counted as a separate method. The simplicity of the methodology could be overlooked if the authors demonstrated significant improvement in their experiments, especially on downstream tasks. 2. A large part of the experiments is devoted to comparison with t-SNE; however, it is not very clear why there is no comparison with other parametric methods, such as topological autoencoders. It is also not clear why the authors mention these very relevant methods only in the appendix and not in the introduction. 3. The performance of parametric UMAP achieves results similar to nonparametric UMAP, which is certainly nice but also quite expected; therefore, I would consider applications of parametric UMAP to other downstream tasks the more significant and interesting contribution. However, the experiments on this part are not convincing at all, especially on the CIFAR-10 dataset; it would be interesting to see the performance on some other datasets. It would also be very interesting to see confidence intervals for Figures 15, 16, and 18. 4. In terms of speed, I also don't see an improvement compared to nonparametric UMAP (TF); I see a clear improvement compared to the umap-learn version, but as far as I understood, this is due to a different implementation of the original UMAP and not to the novelty of this paper. After the authors' response to the revisions, I reconsidered my evaluation and updated the score.

In the manuscript, the authors introduce a parametric version of UMAP, replacing the original embedding optimization step with a deep learning solution that detects a parametric relationship between data and embedding. The novel approach compares favourably with the standard algorithm and, as a major contribution, defines a loss function that can be employed in other important applications, such as constraining the latent distribution of autoencoders and improving classifier accuracy for semi-supervised learning. The paper is well written, complete, and thoroughly detailed in both the theoretical and the experimental sections. The introduced material represents a significant advancement in the field, becoming a valuable resource for researchers in several areas. A couple of notes: an application to one or more large real-world datasets (e.g., single-cell sequencing or weather radar data) would strengthen the authors' claims and the paper's impact even more, so I would suggest including one, at least in the appendix. Figure 3 in the appendix is extremely useful to graphically explain the algorithm to a broader audience; I understand the page-length limit, but I would strongly recommend fitting it into the main text. I would also suggest including, maybe in the appendix, a kind of how-to, fully worked example, to help researchers optimize the use of the novel algorithm in a data exploration pipeline. I would point out, within the limitations of the anonymity requirement, the availability of the code for the algorithm.
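For reference, the graph-based-loss approach debated above can be sketched in a few lines. This is a simplified illustration of the idea (a network trained through a UMAP-style cross-entropy over graph edges with negative sampling), not the authors' implementation; it assumes the fuzzy-graph edges have been precomputed (e.g., by umap-learn) and sampled proportionally to their weights, and the curve parameters a, b are the usual UMAP defaults for min_dist = 0.1.

```python
# Simplified sketch of parametric UMAP: the UMAP loss is backpropagated
# through an encoder network instead of into free embedding coordinates.
import torch, torch.nn as nn

a, b = 1.577, 0.895                        # typical UMAP curve parameters

def q_ij(e_i, e_j):                        # low-dimensional similarity
    d2 = ((e_i - e_j) ** 2).sum(-1)
    return 1.0 / (1.0 + a * d2 ** b)

def umap_loss(emb, edges, n_neg=5):
    i, j = edges[:, 0], edges[:, 1]
    attract = -torch.log(q_ij(emb[i], emb[j]) + 1e-9).mean()
    k = torch.randint(0, emb.shape[0], (len(i) * n_neg,))   # negative samples
    repel = -torch.log(1 - q_ij(emb[i.repeat(n_neg)], emb[k]) + 1e-9).mean()
    return attract + repel

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

X = torch.randn(1000, 784)                 # placeholder data
edges = torch.randint(0, 1000, (4096, 2))  # placeholder kNN-graph edge samples

for _ in range(100):
    opt.zero_grad()
    loss = umap_loss(encoder(X), edges)
    loss.backward(); opt.step()
# inference on new points is now just a forward pass: encoder(X_new)
```

The last line is the point the reviews argue about: out-of-sample extension becomes a forward pass, but this is a property of any parametric embedding, not of the UMAP loss specifically.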
### Summary:
|
We thank the authors and reviewers for engaging in a detailed and constructive discussion and for providing a revised version of the paper after the initial round of reviews. Regarding quality: the work is technically correct and the amount of experiments significant; however, as highlighted by reviewers 2 and 3, some important questions remain unanswered, in particular (1) more empirical evidence to support the claim that the UMAP loss is relevant for neural networks, and (2) more comparison with existing approaches beyond t-SNE. Regarding clarity: the paper is overall clear and pleasant to read; however, after the revision round, all details about the proposed methods have been moved to the annex (while the initial version was criticized for the opposite reason: all experiments were in an annex). The balance may not be found yet; e.g., the equation for the UMAP loss, which is at the core of the paper, would certainly find its place in the main part of the manuscript. For an ICLR paper, originality is the weakest aspect: besides the lack of comparison with related work, as mentioned by several reviewers, plugging the UMAP loss into a differentiable model is nowadays an idea that lacks originality. What would be needed to justify that such a straightforward idea makes it to ICLR is to demonstrate convincingly that it outperforms existing alternative approaches. Finally, the significance of the work is limited by the lack of thorough comparison with existing methods; on the other hand, if the method is implemented in a fast and easy-to-use package, it may find its public, as illustrated by the positive evaluation of reviewer 1 from a potential user's point of view.
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
The paper proposes a new pseudo-distance called the spherical sliced fused Gromov-Wasserstein distance (SSFGW). It builds on top of the sliced fused Gromov-Wasserstein distance (SFGW), which takes a weighted combination of the Wasserstein distance and the Gromov-Wasserstein distance on sliced spaces. The paper tackles the problem of solving for the best sampling directions for slicing: existing approaches assume either a uniform sampling strategy or a single direction that maximizes the discrepancy of the two measures. This paper uses von Mises-Fisher (vMF) measures as the bridge to combine the advantages of the two: changing the parameters of the vMF is equivalent to nonlinearly interpolating between a Dirac impulse and a uniform distribution, which converge to max-sliced Wasserstein and uniform-sliced Wasserstein, respectively. The authors prove its pseudo-metric properties and an upper bound, extend SSFGW to mixtures of vMF (creating MSSFGW), and apply these variants to the deterministic relational regularized autoencoder (DRAE). The results of comparing the DRAE equipped with the proposed distance against other DRAEs demonstrate its superiority in stability and generative capacity. Some comments and questions: where does the von Mises-Fisher distribution come from? If I understand it correctly, von Mises-Fisher is a variant of the Gaussian on hyperspheres; the authors could consider citing existing works or providing arguments for choosing vMF as an interpolation between a uniform distribution and a Dirac. Please refine Section 2 (background); in Definition 2, is (iv) actually (id)? Regarding "the vMF distribution provides a way to have concentrated weight on the most important directions and assigns less weight to farther directions, therefore we gain a better representation of the discrepancy between probability measures": please swap SFG and DRAE in the title of Section 2.1 to be consistent with its content. The introduction of power SSFG at the end of Section 3 seems abrupt: if power SSFG performs better than SSFG, then what is the point of introducing vMF? Can we build mixtures of power SSFG? The paper does not mention an MPSDRAE. On page 20 of the appendix (at the top), it seems power SSFG is superior to SSFG in all desired aspects, but PSDRAE underperforms SDRAE as shown in Table 1; any explanation? Empirical evidence of interpolating between a uniform and a Dirac: in Figure 12 (a, b, d), the graphs of SDRAE cross the red baseline with large momentum when k decreases to 1, which suggests that they could go far from the baseline as k approaches 0, because at k = 1 the vMF is still very far from a uniform distribution; please explain that. Is this a Wasserstein paper or a generative modeling paper? The main contribution, which tackles the problem of generalizing the sliced Wasserstein distance and the max-sliced Wasserstein distance, is the new pseudo-metric, but the paper only argues the contribution from the perspective of its power in generative modeling. I wonder whether the authors ever considered evaluating the metric without add-ons like a neural network, in some typical areas of application of Wasserstein distances, like color transfer, shape interpolation, rigid transformation (since it is Gromov-Wasserstein), etc.; or is evaluation without an AE setup not necessary? After rebuttal: I appreciate the authors' response; most of my trivial comments and questions have been resolved, and I stand by my initial rating. This is a solid paper: the authors clearly introduce the problem and develop a clear story with a straightforward solution, and the paper ends with extensive experiments. My main concern remains that the contribution of the paper to the ML community is moderate, because the story is very narrow; I think the idea could be substantially extended to solving fundamental problems in sliced Wasserstein distances, but I don't object to acceptance of the paper in its current form. I recommend the authors incorporate suggestions from all the reviewers and polish the language (especially Section 2) to make the paper more accessible for readers outside the sliced Wasserstein community. Thank you.

Summary: the paper presents a novel method to improve relational regularized autoencoders. The proposed method is based on a new relational discrepancy, called the spherical sliced fused Gromov-Wasserstein (SSFG), which is an extension of the sliced fused Gromov-Wasserstein (SFG) and its max version; two variants of the SSFG are also presented. Experiments suggest that the proposed autoencoders outperform some existing autoencoders in terms of generative performance, in comparable computational time. Reasons for score: overall, I have a positive impression of the paper. I think that the proposed relational discrepancy using the von Mises-Fisher distribution is a reasonable extension of the SFG using the uniform distribution. My concern is whether the estimation of the tuning parameters of the proposed relational discrepancies could be computationally expensive in practice (see cons below); hopefully the authors can address my concern in the rebuttal period. Pros: 1. Applying the fact that the von Mises-Fisher distribution is an extension of the uniform distribution and the Dirac distribution, the authors successfully present an extended relational discrepancy of the SFG and its max version. 2. The mixture spherical sliced fused Gromov-Wasserstein (MSSFG) achieves even better flexibility than the SSFG; I reckon that the MSSFG, which adopts a mixture of von Mises-Fisher distributions, is a reasonable extension of the SSFG. 3. The extensive experiments suggest that the proposed autoencoders show satisfactory performance in terms of FID scores and reconstruction losses, and are not particularly expensive in terms of computational time. Cons: 1. I wonder whether the estimation of the tuning parameters of the proposed relational discrepancies could be computationally expensive in practice; for example, the MSSFG requires the values of k, kappa_1, ..., kappa_k, alpha_1, ..., alpha_k. I fear that the estimation of these tuning parameters requires expensive computational cost; in the experiments of the paper, only limited combinations of these tuning parameters are considered, and I wonder whether these combinations really cover a sufficient area of the parameter space. 2. The von Mises-Fisher distribution has the property that as the concentration parameter kappa increases, the concentration of the distribution monotonically increases; in particular, the von Mises-Fisher distribution tends to the Dirac distribution as kappa goes to infinity. However, I am not sure this nice property also holds for the power distribution; if not, I wonder whether the power SSFG and power spherical DRAE have sufficient flexibility compared with the SSFG and spherical DRAE. Questions during rebuttal period: please address and clarify the cons above. Typo: p. 1, abstract, last line, "generation and reconstruction". Updates: the authors have carefully responded to my comments; their response addresses most of my concerns, and I will keep my score high. I understand that the choice of the hyperparameters can be computationally heavy, but the authors have given an idea to solve this problem. It is good to find that the power spherical distribution also includes the Dirac distribution as a limiting case.

This paper builds on the work of Xu and colleagues (2020) on autoencoders (AE) with relational regularization. The core idea is to enforce a notion of structure in the latent space of an AE by measuring the composed structural divergence with a target distribution; in order to do so, a quadratic optimal transport problem is used, the fused Gromov-Wasserstein (FGW), which has super-cubical complexity. To alleviate this cost, Xu et al. defined a sliced version of it, akin to sliced Wasserstein (SW), which consists in solving several simple 1D versions of the problem that admit a closed-form solution after projecting onto random directions drawn uniformly on the unit hypersphere. This work proposes to replace this uniform distribution over the sphere with a von Mises-Fisher distribution on the sphere (which is like a Gaussian distribution on the sphere), and also with a mixture of such distributions; their parameters are optimized during training so as to maximize the FGW divergence. This strategy is similar to recent trends in computing SW that also replace the uniform distribution, either by replacing the expectation with a max or by looking for subspaces that maximize the expected SW. The paper is clearly written and interesting. It can be seen as incremental with respect to the work of Xu et al., but the formulation of the sliced optimal transport problem with a parametrized von Mises-Fisher distribution is novel and could also be applied to the original sliced Wasserstein, and maybe also to computing the sliced fused Gromov-Wasserstein for other tasks such as graph classification, as in the original FGW paper (see minor comment). The experimental results are very good and clearly show the benefits of the method, yet I have some questions whose answers might be critical for the final evaluation of the paper. In the mixture part, how do you train the parameters of the mixture? Do you have a kind of EM algorithm, or do you perform gradient ascent? In general, are directions drawn randomly for every batch of samples? What is the meaning of fixing the number of projections in advance, as done in Figure 3 and the related experiment? It seems that the sliced FGW is computed on mini-batches of samples; while I acknowledge that this is common practice, a 1D FGW on a mini-batch is not the same as computing the 1D FGW on the full dataset, and as such, this mini-batch version of FGW is not the same as computing the true FGW. In the end, (i) the size of the mini-batch might have an impact on the estimation quality, which should be discussed, and (ii) if computing 1D FGW on mini-batches, why not compute and compare with the mini-batch version of the original FGW? From the paper, the batch size is 100; this would be solved very quickly by current FGW solvers. In the end, I think that using the von Mises-Fisher distribution is an interesting and original idea that might have a broader impact than the sliced FGW, and the paper also has the merit of pushing the structural regularized AE a little further, which is, in the reviewer's opinion, a good point. On the negative side, this work can be considered a little bit incremental, and some questions remain on the experimental part; I am willing to change my rating depending on the answers to my comments. Minor remarks: it is interesting to note that another paper under consideration for ICLR is developing a similar strategy in the case of the simple sliced Wasserstein distance (https://openreview.net/forum?id=qyjo70acdk). It is also a little bit strange that the original fused Gromov-Wasserstein paper is not credited: [1] Titouan Vayer et al., "Optimal transport for structured data with application on graphs," International Conference on Machine Learning, 2019. After author response: I thank the authors for their detailed response to my comments. I do not agree with the complexity of FGW being solved in O(n^4), which should be closer to O(n^3) for the type of distance considered in the paper (see the analysis in [1]), but the point about considering the mini-batch version is still sensible. My other comments have been addressed, and I am changing my score to 7.
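The core sampling idea these reviews describe can be illustrated with a short sketch: draw slicing directions from a von Mises-Fisher distribution and average closed-form 1D Wasserstein distances over them. This is an illustration, not the authors' code; it shows only the Wasserstein term of the fused distance (the 1D Gromov-Wasserstein term with weight 1 - beta is omitted), and it assumes scipy >= 1.11 for the vonmises_fisher sampler.

```python
# Sliced Wasserstein-2 with vMF-sampled directions: small kappa behaves
# like uniform slicing, large kappa concentrates near the location mu_dir
# (which the paper optimizes to be maximally discriminating).
import numpy as np
from scipy.stats import vonmises_fisher

def sliced_w2(X, Y, mu_dir, kappa, n_proj=200, seed=0):
    rng = np.random.default_rng(seed)
    thetas = vonmises_fisher(mu=mu_dir, kappa=kappa).rvs(n_proj, random_state=rng)
    dists = []
    for theta in thetas:
        x1d, y1d = np.sort(X @ theta), np.sort(Y @ theta)  # closed-form 1D OT
        dists.append(np.mean((x1d - y1d) ** 2))
    return np.mean(dists)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (500, 3))
Y = rng.normal(0.5, 1.0, (500, 3))
mu_dir = np.array([1.0, 0.0, 0.0])        # unit-norm location parameter

for kappa in [1e-3, 1.0, 100.0]:          # ~uniform  ->  ~Dirac at mu_dir
    print(kappa, sliced_w2(X, Y, mu_dir, kappa))
```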
### Summary:
|
This is a solid paper that proposes a new slicing approach to the fused Gromov-Wasserstein distance, using projections onto directions sampled from a von Mises-Fisher distribution. The location parameter of the von Mises-Fisher is chosen to be maximally discriminating between the distributions: $\max_{\epsilon}\ \mathbb{E}_{\theta\sim \mathrm{vMF}(\theta;\epsilon,\kappa)}\left[\beta\, W(\theta_{\sharp}\mu, \theta_{\sharp}\nu) + (1-\beta)\, \mathrm{GW}(\theta_{\sharp}\mu, \theta_{\sharp}\nu)\right]$. The new sliced distance is analyzed and extended to a mixture of von Mises distributions with k locations (directions). This contribution is of general interest beyond the application of the paper, as mentioned by the reviewers. The authors apply the new sliced fused Gromov-Wasserstein distance to relational autoencoders and show improvement. The spherical slicing is original, new, and of independent interest, and the application is good, as it pushes the boundary of relational autoencoders. Reviewers and AC did not have any concerns with the paper, and the rebuttal and revisions addressed all questions raised. Accept.
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
The paper considers the vertex-weighted online bipartite matching problem where the online vertices arrive in two batches. In the first batch, along with the neighborhoods of all vertices of the first batch, the algorithm is also provided with a predicted matching. The algorithm must commit to a matching; then the second batch of online vertices is revealed, and the algorithm can match these new vertices to the set of unmatched offline vertices. The goal of the algorithm is to maximize the weight of the final matching. The paper defines the robustness of the algorithm as the ratio of the weight of the matching found by the algorithm to the weight of the optimal offline matching, and the consistency as the ratio of the weight of the matching found by the algorithm to the weight of the best matching obtained by following the predicted matching exactly. The primary contribution of the paper is to quantify the exact tradeoff between these quantities. It is great that the authors obtain a tight and nontrivial tradeoff between robustness and consistency for this model. The paper is well written and easy to read and follow. The analysis and algorithm are based on Feng et al., and the primary observation is that any penalty function within a range of functions is sufficient to guarantee the desired robustness; by then adjusting the penalties for vertices in the predicted matching and for those not in the predicted matching, the bound on consistency follows. I appreciate the discussion in the introduction of alternative definitions of consistency and the notion of prediction error. None.

The paper studies online matching with advice. In particular, the authors focus on a two-stage online bipartite matching problem where the vertices of part A are weighted and known in advance, while the vertices of part B arrive in two batches; each vertex arrives with all its incident edges, and we need to make an irrevocable decision about matching each vertex in B. Similar to the new literature on advice-augmented algorithms, a supposedly good matching for the first batch is given before the second stage begins. Two measures, robustness and consistency, are important in this regime: robustness compares with the optimal solution regardless of the advice, and consistency looks at optimality assuming the advice is correct. Compared to the usual prediction-error bounds, consistency is simpler to understand, and one can translate between them. The authors manage to characterize the Pareto frontier of robustness and consistency by presenting an essentially optimal algorithm. Note that without advice, a 3/4-competitive algorithm exists due to [FNS21], which trivially translates to a (3/4, 3/4) advice-augmented solution; a naive solution following only the advice obtains a (0, 1) guarantee (0 for robustness and 1 for consistency), while a coin flip between the two methods gives a smooth linear tradeoff. The authors show how to use different sets of penalties to obtain the optimal tradeoff of $\sqrt{1-r} + \sqrt{1-c} = 1$. Strengths: first characterization of the Pareto frontier for online bipartite matching; tight guarantees. Weaknesses: lack of experiments on actual online bipartite matching datasets. N/A.

In this paper, the authors consider the two-stage bipartite matching problem in a relatively new setting: there is an unreliable prediction in the first stage, and the goal is to design an online algorithm that can balance consistency (performance in the best case) and robustness (performance in the worst case). Their algorithm achieves the optimal Pareto-efficient frontier between consistency and robustness; the key idea is to choose different penalty functions for different offline vertices. They prove that their algorithm is r-robust and c-consistent whenever the values (r, c) satisfy the inequality $\sqrt{1-r} + \sqrt{1-c} \ge 1$, and they give a tight example showing that this tradeoff is optimal. Strengths: 1. The result shows a clean and optimal tradeoff relation between consistency and robustness; this is a very nice theoretical result. 2. It is nice to model consistency as ALG/ADVICE: many other papers model consistency as the performance of the algorithm when the prediction is accurate, which makes it difficult to argue a tradeoff between consistency and robustness; the model in this paper is, in some sense, more symmetric between consistency and robustness. Weaknesses: 1. The results hold only for a simple setting of online matching, that is, only the vertex-weighted version and the two-stage model; however, although it is unclear how to generalize the results to the edge-weighted or multi-stage case, it is an interesting start. Yes, the authors clearly address the limitations; I don't think there is potential negative societal impact of their work.

The paper studies the problem of augmenting the recently introduced two-stage vertex-weighted online bipartite matching problem with predictions. In this problem, one sees one side U of the bipartition; the vertices on the other side W are each associated with a weight and arrive in two batches. The algorithm has to decide which edges to add to the matching after the first batch arrives and then do the same again after the second batch arrives, while remaining consistent with previously selected edges; the goal is to maximize the total weight of the vertices from W that are matched. The authors argue that it is without loss of generality to assume that the vertices of the second batch come only with the edges that the optimal solution uses for these vertices, so the problem essentially reduces to handling the first batch. It is assumed that the algorithm has access to a prediction on how to match the first batch of vertices; this prediction may or may not be accurate, so the algorithm needs to maintain a tradeoff between safeguarding itself against all possible second batches that could still arrive (robustness) and following the predicted matching in order to attain consistency, i.e., good performance when the prediction is accurate. There is already a recent result by Feng et al. (SODA 2021) that looks at the problem without predictions; the proposed algorithm here identifies, for each desired robustness, how to fine-tune the parameters in Feng et al.'s algorithm so as to obtain the best possible consistency that the desired robustness allows. A very simple example shows that the obtained robustness and consistency are Pareto optimal. Strengths: the matching problem considered is quite interesting; although the algorithm is a slight modification of the one by Feng et al., there is still a non-negligible amount of technical work required to set the parameters correctly and prove the obtained results. Weaknesses: I found it quite hard to follow the paper; the main reasons I could identify are the following. The problem statement is not clear at all: the first time the paper mentions that they actually look into the maximization version is on page 5; I was until that point under the impression that a minimization problem was considered, which made many statements confusing. I assume if I were familiar with Feng's result it may have been clear, but in my opinion the authors should not assume familiarity with that result. The performance of the algorithm (also the one without predictions) must surely depend on the sizes of the respective batches, d1 and d2, but this is not touched upon at all; it seems to me that this is handled by the way consistency is defined, i.e., comparing to an algorithm that uses the same batches. The authors suggest a way to define an error measure to obtain a result as a function of the prediction quality, similar to other results in the area; this definition seems to me quite ad hoc and not very reasonable, and the only justification seems to be that it works to give the desired formula. I find that predicting the whole matching for d1 at once can be a bit unnatural. I am not sure I would call the problem online; it resembles some online algorithms in that there must be an irrevocable decision after the first batch without knowledge of the second batch, but that's all; also, it should be made more explicit that the whole batch arrives before the algorithm needs to decide anything. The paper refers to the prediction as advice; this can be confusing, since in the online algorithms literature the term "advice" has historically been used for predictions that are guaranteed to be optimal, and usually one studies the relationship between the amount of that advice required to obtain a specific performance guarantee. When the predictions are of unknown quality, as is the case here, the term "predictions" seems much more established, and I would suggest changing the paper accordingly. No experimental evaluation is included. No potential negative social impact; with respect to limitations, see above.
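The tradeoff discussed in these reviews is easy to check numerically: on the frontier sqrt(1 - r) + sqrt(1 - c) = 1, the best consistency for a target robustness r is c = 1 - (1 - sqrt(1 - r))^2. A small sketch (mine, for illustration):

```python
# Numeric check of the robustness/consistency Pareto frontier.
import numpy as np

def best_consistency(r):
    # solve sqrt(1 - r) + sqrt(1 - c) = 1 for c
    return 1 - (1 - np.sqrt(1 - r)) ** 2

for r in [0.0, 0.25, 0.5, 0.75]:
    print(f"r = {r:.2f}  ->  c = {best_consistency(r):.4f}")
# endpoints match the discussion above: r = 0 gives c = 1 (follow the
# advice), and r = 3/4 gives c = 3/4 (the advice-free guarantee of Feng et al.)
```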
### Summary:
|
The paper studies two-stage matching with advice and completely characterizes the tradeoff between robustness and consistency. The online matching problem is a central problem in online algorithms, with numerous applications such as assigning jobs to machines, impressions to advertisers, etc. The model studied here is a very simplified online model where there are only two stages; on the other hand, the paper is a rare case in advice-augmented algorithms where the tradeoff between robustness and consistency is fully understood. The reviewers all appreciate the tight characterization; there is a minor concern that the paper does not include experimental evaluation. The authors are encouraged to include the simple result for the edge-weighted case from the author response, to provide more context for the main result.
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
in various machine learning problems which can be formulated as a bilevel optimization problem and solved using gradientbased methods the computation of hypergradients is necessary however the involved inverse jacobian matrix in the hypergradient has been a computational bottleneck in highdimensional settings this paper proposes to use quasinewton matrices from the forward pass to approximate this inverse jacobian matrix in the direction needed for the gradient computation which appears in the computation of hypergradients the proposed algorithm is applied to both hyperparameter optimization and deep equilibrium models for cifar10 and imagenet showing that it reduces the computational cost of the backward pass by up to two orders of magnitude i believe that this paper is studying a crucial problem both deep implicitequilibrium models and hyperparameter optimization which can be formulated as bilevel optimization problems are important applications in machine learning in particular the computational bottleneck in hypergradient computation has long been a stumbling block for their applications in high dimensional settings the proposed method of this paper borrows ideas from lbfgs and broydens method which appears to be natural choices to consider the authors establish various convergence results showing that the approximate hypergradients converge to the true hypergradients under different sets of assumptions experimentally the proposed method is comparable to or outperforms the jacobianfree method by fung et al 2021 in hyperparameter optimization for logistic regression and deqs the proposed method is interesting and can be viewed as a complementary method to the recent jacobianfree method typos add punctuation whenever necessary in display style equations like 1 theorem 2 4 and 6 page 8 figure 2 caption wihtout to without this paper studies the problem of approximating the inverse jacobian matrix in hypergradient computation for bilevel optimization problems solved with gradientbased methods in an attempt to reduce the computational bottleneck of computing the exact inverse jacobian matrix in high dimensions the proposed method leverages quasinewton methods for such approximations experimental results have demonstrated the effectiveness of the proposed approach docsepthe paper proposes a way to improve on the computational cost of bilevel optimization problems these often come up in recently proposed deep equilibrium models and in hyperparameter optimization settings in ml strengths the theoretical aspects of the paper especially theorems 3 and 4 are novel and good ideas the theoretical part of the paper is written very well for someone unfamiliar with the literature questions figure 1 on the 20 news dataset the variance around the convergence of shine methods is a little concerning especially if one wishes to make the claim of an acceptable level of performance is reached twice faster for the shine method compared to any other competitor again in figure 2 left the variance in the convergence curves of shine methods requires further comment by the authors perhaps it will be a good idea of having n runs and reporting average runtime improvement benefits over other methods along with a standard deviation figure 3 it seems that the jacobianfree method and shine are almost equally performant in terms of top1 accuracy while shine takes longer due to additional updates while this is discussed briefly i would like the authors to dig a bit deeper on why that is the case as i understand right now 
the authors claims are that the jacobian free method is working outside the assumptions used to prove its convergence but it seems that even their method is well outside the scope of its assumptions i am unsure what the claims are here why would one use shine over the jacobianfree method for the deq models overall i like the core idea and theoretical insights of the paper but have questions regarding the experiments as indicated in my main review i am willing to update my scores based on author responses update i would like to thank the authors for thoroughly engaging with reviewers on the platform after reading the author responses to my review and other discussions on this forum i am convinced that the improved draft should be presented at the conference and vote to accept the paper docsepin implicit deep learning such as deep equilibrium models computing the inverse jacobian for the forward pass is computationally expensive this paper propose an interesting approach to combine the information from the forward and backward pass to make an efficient estimate of the jacobian inverse in one approach they propose to replace the jacobian in the backward update with the quasinewton matrix which is being already usedestimated in the forward pass solved by quasinewton method additionally they propose an iterative update to the quasinewton matrix such that to helps its estimate toward the direction useful in the backward pass they call this outer problem awareness they provide theoretical analysis of their proposed method and show that under certain conditionsassumptions the forward pass still converges to the desired solution and the sequences of backward estimates converges to the loss gradient of needed to parameter updates they provide numerical results in bilevel optimization regularized logistic regression and training deq for classification in certain settings bilevel optimization they show that they outperform jacobianfree and have similar performance to the stateofthearts but it is faster than all for deq they show similar performance to jacobianfree strong points the paper starts with a clear motivation and proposes an interesting approach to combine the computations from the forward and backward pass to accelerate the backward pass the simplicity of their approach is an strength given the method the paper provides theoretical analysis on the convergence of their forward and backward estimate to the desired computation the theory is written well seems correct and states clear assumptions the authors discuss the assumptions well in case of whether they are used in any other works and if they are common they provide detailed numerical results and experiments in certain settings they show faster performance than the original with similar performance weak points some of the experimental results are not convincing for example although they outperform jacobian free method in the regularized logistic regression they show similar performance to the jacobian free in the case of classification with deq so why one may use shine which is slower than jacobian free but have similar accuracy an extensive experimental study is needed to compare the two shine and jacobianfree for the proposed method to make practical sense the author should show the wide settings at which shine is better than jacobian free for example is shine better in any other bilevel optimization in addition to the regularized logistic regression a discussion in the main paper is needed for experimental studies on the relation 
between the quality of the inversion and the performance i recommend moving figure e2 to the main text for this matter please explain why the quality of the inversion and performance are not correlated the fallback strategy is of concern please provide more intuition about this instability and why it is not seen in other methods how much is barely in the statement we verified that the fallback is barely used the bullet points of contributions need to be more precise figure 1 fix typo freee post discussion opinion see the discussion given the additional ho experiment and elaborations in the abstract i have revised my rating and i recommend acceptance of this paper the papers main motivation is to computationally improve the backward pass they show speed improvement compared to the other methods involving inversion of the jacobian and minimal decrease in performance however they do not show performance improvement compared to the jacobianfree method which is faster than the proposed method in the deq experiment given this why would one use shine which approximates the inverse jacobian when their method does not outperform the jacobianfree method more experimental results are needed to highlight the advantage of their method against jacobianfree this advantage is already shown once in the regularized logistic regression
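to make the discussed bottleneck and the proposed remedy concrete, the following is a minimal numpy sketch of reusing a broyden-style quasi-newton matrix from the forward fixed-point solve to approximate the inverse-jacobian-vector product of the backward pass; the fixed-point map g, the rank-one update, and the sign conventions are illustrative assumptions, not shines exact updates or its outer-problem-aware variant.

```python
import numpy as np

def broyden_inverse_solve(g, z0, n_steps=50):
    """Solve the fixed point z = g(z) with Broyden's method while
    maintaining B ~ (dF/dz)^{-1}, an approximate inverse Jacobian
    of the root function F(z) = z - g(z)."""
    n = z0.size
    B = np.eye(n)                 # inverse Jacobian approximation
    z = z0.copy()
    F = z - g(z)
    for _ in range(n_steps):
        dz = -B @ F               # quasi-Newton step
        z_new = z + dz
        F_new = z_new - g(z_new)
        dF = F_new - F
        BdF = B @ dF
        denom = dz @ BdF
        if abs(denom) < 1e-12:
            break
        # "good Broyden" rank-one update of the inverse approximation:
        # B <- B + (dz - B dF) dz^T B / (dz^T B dF)
        B += np.outer(dz - BdF, dz @ B) / denom
        z, F = z_new, F_new
    return z, B

def approx_hypergradient(B, dF_dtheta, dL_dz):
    """Backward pass: instead of solving (dF/dz)^T v = dL/dz exactly,
    reuse the quasi-Newton matrix from the forward pass: v ~ B^T dL/dz,
    then chain through the explicit parameter dependence dF/dtheta."""
    v = B.T @ dL_dz
    return -dF_dtheta.T @ v       # hypergradient estimate, shape (p,)
```

in this sketch the backward pass costs only two matrix-vector products instead of a full linear solve, which is the kind of saving the reviews refer to when they describe reusing forward-pass quasi-newton matrices.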
### Summary:
|
the paper considers the setting of bilevel optimization and proposes a quasinewton scheme to reduce the cost of jacobian inversion which is the main bottleneck of bilevel optimization methods the paper proves that the proposed scheme correctly estimates the true implicit gradient the theoretical results are supported by numerical experiments which are encouraging and show that the proposed method is either competitive with or outperforms the jacobian free method recently proposed in the literature even though the reviews expressed some initial concerns regarding the empirical performance of the proposed method the authors adequately addressed those concerns and provided additional experiments thus a consensus was reached that the paper should be accepted
|
Below is given review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper presents a theoretical analysis bounding the difference between a kernel dynamics of finite width network trained for time t with a finite set of samples and the ideal infinite width ntk kernel that concludes a finite width network is biased toward learning the top eigenfunction of the infinite widthbased kernel the paper positions itself well amongst related works and is presented such that even an outsider to the topic is able to understand the problem analysis and results the paper is constrained to bounding the difference in the kernels and it is not clear how the network architecture and activation function fits into the analysis a discussion on how the model design affects the results could make clear the limitation as well as applicability to future model designs the analysis considered requires twice differentiable activation function which does not include relu i think this is an admissible assumption for theoretical analysis though it makes analysis disconnected from practice minor comment the notation for nonlinearity and eigenvalue seems to be overloaded and might create confusion the paper targets a simple question of spectral bias of networks using ntk kernels and provides a characterization that is applicable to deep networks the dependency on stopping time limits the consequences of the analysis to when the network is initialized with few training steps docsepthis paper provides a new theoretical analysis of the bounds of function space difference between a finitewidth network in a finite sample size regime and the corresponding kernel dynamics in finite width and finite data regime the results are presented based on fully connected convolutional and residual neural network architectures by investigating the spectrum of the ntk and its connections to the fisher information matrix this work provided some insights into the spectral bias of neural network learning dynamics strengths this work continues to explore the spectral bias of the neural networks from the perspective of the neural tangent kernel theory this work made some novel theoretical analyses on measuring the distance in function space and the scaling properties of the learning with respect to the stopping time this paper is generally clearly written alongside the main theoretical results it provides some description and interpretation of the main theorems weaknesses my main concerns are about the lack of more discussions about the connections with existing literature 12345 on analyzing the spectral structure of neural networks and corresponding kernel formulations for example existing works 1 provided theoretical results on the spectral alignment of neural network and kernel methods in learning dynamics 2 addressed the lowrank spectral structure of both neural tangent kernel and the fisher information matrix not limited to the missing references below a more detailed discussion on the theoretical and empirical connections to the relevant literature is needed 1 atanasov alexander blake bordelon and cengiz pehlevan neural networks as kernel learners the silent alignment effect arxiv preprint arxiv211100034 2021 2 zhang r zhai s littwin e susskind j 2022 learning representation from neural fisher kernel with lowrank approximation arxiv preprint arxiv220201944 3 karakida ryo shotaro akaho and shunichi amari pathological spectra of the fisher information metric and its variants in deep neural networks arxiv preprint arxiv191005992 2019 4 pennington jeffrey and pratik worah the spectrum of the fisher 
information matrix of a single-hidden-layer neural network advances in neural information processing systems 31 2018 5 hazan tamir and tommi jaakkola steps toward deep kernel methods from infinite neural networks arxiv preprint arxiv150805133 2015 na docsepthe paper bounds the distance between the dynamics of wide dnns and the dynamics of kernel gradient descent on the population with the limiting ntk this paper improves over previous bounds by considering the projected error along the top k eigenfunctions of the integral operator T_{K_∞} for any k they show that the approximation is better along the top eigenvalues this allows them to show that with early stopping one can guarantee learning along the top eigenvalues of dnns in comparison to previous bounds the results in this paper only require a width that grows linearly with the number of datapoints at least with a specific early stopping the paper is well written and gives a good overview of the previous bounds and how their bound improves on previous ones the paper nicely connects two lines of research the ntk regime approximations for finite widths and the spectral bias of dnns the results are new to my knowledge and describe an interesting phenomenon i did not check the proofs in detail but i did not see any technical issues the only weakness that concerns me is the fact that the results only apply to gradient flow the discussion of early stopping is much more interesting in the context of gradient descent since it then relates to computational complexity there exist many proofs of convergence to ntk dynamics for gradient descent so i think the proofs should extend to this setting easily i do not understand why the authors did not try to extend their proof to gradient descent as already mentioned i think the paper would be greatly improved with a discussion of gradient descent docsepthis article theoretically offers quantitative bounds between the trajectory of a finitewidth network and the idealized kernel dynamics of the infinite one which implies that networks tend to learn the top eigenfunctions of the ntk at rates corresponding to their eigenvalues furthermore the authors observe that if the width of the network grows linearly with the number of samples it will retain the bias of the kernel at initialization strengths 1 the paper is rich in theory and offers quantitative bounds between the trajectory of a finitewidth network and the idealized kernel dynamics of the infinite one which measure the l2 difference in function space and imply that networks tend to learn the top eigenfunctions over the entire input space im sure the audience of neurips will be interested in it 2 the theoretical proof is detailed the order is reasonable and the structure is clear weaknesses 1 although the theory of this article is rich there are very few experiments that could be found in it which could verify the rationality of the hypothesis and whether the theoretical derivation is true or not one suggestion is that it will make the conclusions more convincing if corresponding graphs of experimental results could be attached after important conclusions especially the authors can use a 1d experiment to show the estimation in theorem 1 is meaningful 1 the main limitation of this paper is that although the theory is rich there are too few experiments to tell the reader at a glance what it does the interpretation of the theoretical result is not clear enough especially outside the training set
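for readers less familiar with the spectral bias terminology used above, the idealized kernel gradient-flow dynamics can be decomposed along the eigenfunctions of the limiting kernel; this is a standard identity stated here only for orientation, and the notation (eigenpairs of the ntk integral operator, population square loss) is an assumption about the setting rather than a quote from the paper.

```latex
% gradient flow on the population square loss with the limiting NTK,
% where the integral operator satisfies T_{K_\infty}\phi_i = \lambda_i \phi_i
f_t - f^{*} = \sum_{i} e^{-\lambda_i t}\,\langle f_0 - f^{*}, \phi_i \rangle\,\phi_i
```

the residual along phi_i decays at rate lambda_i, so stopping at a time t of order 1/lambda_k fits essentially only the top k modes; this is the sense in which the papers early-stopping dependence connects finite-width guarantees to learning the top eigenfunctions first.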
### Summary:
|
this paper focuses on theoretically bounding the difference between a finitewidth network in a finite sample size regime and the corresponding kernel dynamics in finite width and finite data regime for various neural network architectures using the spectrum of the ntk they provide some insights into the spectral bias of neural nets all reviewers were positive and recommend acceptance i concur with this decision
|
Below is given review of a research paper from a conference journal. Please write a summary of the review.
### Review:
updates: the author addressed my concerns about the experiments though the improvement is marginal and i still have some concerns im ok to accept the paper and ill change my score to 6 summary the paper studied the adversarial bayesian neural network and found that its stochasticity vanished as stochasticity can help improve robustness against adversarial examples the author proposed to use a conjugate prior for the gaussian posterior to improve stochasticity of the model and robustness at the same time strength experiments show that the proposed method outperforms adversarialbnn and adversarial training on several benchmark datasets and the stochasticity of the proposed model is larger than adversarialbnn weakness the evaluation is somehow questionable i checked the original paper of adversarialbnn and found that the performance of adversarialbnn is much better than reported in this paper in both papers vgg16 is the base structure of the bnn but the reported performances of advbnn and adversarial training are different in the two papers on cifar10

cifar10 results (accuracy):

| epsilon | advbnn | advbnn (in this paper) | adv | adv (in this paper) |
| --- | --- | --- | --- | --- |
| 0 | 79.7 | 62 | 80.3 | 72 |
| 0.015 | 68.7 | 54 | 58.3 | 60 |

it could be a problem of hyperparameter tuning could the author provide some explanation on this in experiments the models on cifar10 and stl10 are trained with linfty perturbation magnitude of 0.03 they are evaluated under pgd and eotpgd with linfty in [0, 0.03] the range of attack perturbation magnitude on cifar10 and stl10 could be larger such as [0, 0.08] to better compare the baselines with the proposed method clarity and correctness the paper is well written and easy to follow but the experiments might be problematic reproducibility code of the method is not available conclusion the idea is clear and novel but experiment results need more elaboration overall i think the paper is marginally below the acceptance threshold i like the idea of using a conjugate prior to improve stochasticity and robustness however im a little bit concerned about the experiment results if that can be addressed im willing to accept the paper docsepin this paper the authors study the adversarial robustness of bnns on large scale datasets bnns have been shown to be a more robust learning paradigm due to their uncertainty/stochasticity given the empirical observation that adversarially trained bnn posterior variances converge to zero (which the authors need to do much more to show as this is not a wellestablished phenomenon) the authors propose a hierarchical prior where they put a prior over the parameters of the gaussian prior normally used in meanfield variational inference the authors show that performing approximate inference with a hierarchical prior leads to an increased variational posterior variance which the authors claim is correlated to the observation of increased adversarial robustness i think the empirical results are solid and the direction of the paper is ultimately an important one i truly encourage the authors to continue to pursue this topic unfortunately this paper is severely handicapped by its lack of clarity in terms of accurate contextualization and its pervasive use of misnomers these misnomers are so prevalent that they would certainly lead uninformed readers to incorrect conclusions about bayesian deep learning on the true posterior there are many places throughout the paper where the authors discuss/reference the computation of the true posterior for a bayesian deep neural network this can only be done by exact bayesian inference that includes 1) proper
marginalization over the space of parameters and 2) normalization wrt p(x) given a dnn with nonlinear activations computation of the true posterior is intractable despite this the authors claim that one can infer the true posterior via variational inference methods (bottom of pg 1) variational inference makes a closed form approximation of the posterior that one tries to learn the parameters of and even learning the optimal parameters does not guarantee convergence outside of the case of conjugacy further on this computing the true adversarial posterior is even more intractable given that the intractability of computing the optimal adversarial example compounds the issue of performing exact bayesian inference conjugate priors and hierarchical priors are distinct under the bayesian framework despite this the authors name their hierarchical prior the conjugate prior in this work the authors suggest placing a prior distribution over the parameters of their prior distribution ie a hierarchical prior yet call it a conjugate prior a conjugate prior in the standard bayesian literature is a prior which is known to be in the same family as the true posterior it is not known and likely not true that for general approximate bayesian neural networks eg meanfield approximations the true or approximate posterior is gaussian thus it is likely false to call a meanfield prior approximation a conjugate prior robust optimization is the special case of adversarial training where only adversarial data is used the authors conflate adversarial optimization (optimization with respect to an adversarial objective) with robust optimization which has a rich history in optimization prior to its application to deep learning the end of section 2.1 should have its terminology corrected on the notion of regularization in bayesian deep learning in several places the authors refer to the regularization term of the elbo objective this regularization term is the kl divergence with the prior distribution while the prior distribution could be said to have a regularization effect on the posterior saying that the prior distribution is a regularizer is reductive and probably misleading following the authors rebuttal i think the paper has benefitted from further experiments and from further clarifications i would like to thank the authors for carefully considering my feedback and for modifying their paper in the directions i suggested ultimately like i said in my original review i think this is a very interesting and wellmotivated problem but i still have a few doubts in particular the doubt about the papers use of the term conjugate remains in their rebuttal the authors use the term approximate-conjugate prior but i am not sure that this is satisfactory as being conjugate means you have knowledge of the form of the true posteriors closed form which is not the case for bnns i have increased my score to reflect that i think the authors are moving in a promising direction and i hope that they will continue with this work one thing i will note on the experimental side of things is that having greater variance is indeed interesting but it may or may not be correlated with increased uncertainty and this may be interesting to investigate in a future version of this work docsepsummary this paper presents a new adversarial training method for bnns with variational inference (vi) specifically advbnn training of liu et al 2019 uses a standard normal prior for vi of bnns the paper observes that the above method may have vanished stochasticity that reduces the
robustness and the proposed method extends it with a conjugate prior constructed from a normal distribution and an inverse gamma distribution this extension results in a stronger regularisation of the weights of bnns which can enhance the robustness against adversarial attacks and leads to a hierarchical inference the method is reported to have better performance than vanilla adversarial training and advbnn training on several benchmark datasets pros it is an interesting and motivating observation that advbnn has a vanished stochasticity issue which is important for bnns the proposed method is a straightforward way to address the vanished stochasticity issue the results in table 1 are intuitive and directly show that the proposed method has more stochasticity than advbnn cons it is concerning that the reported performance of advbnn in this paper differs significantly from that reported in the original paper in this paper advbnn performs worse than vanilla adversarial training by a large margin which is a bit surprising therefore it is unclear whether advbnn is not working or the settings/implementations of this paper have something wrong given this fact the performance reported in this paper seems to be ungrounded and it is hard to justify the true performance advantage reported in this paper the approach might be less related to the topic of hierarchical inference as it is only replacing the standard gaussian with a normal-inverse-gamma distribution which only affects the kl divergence in this case where no intermediate variables are inferred minor some of the notations and equations are a bit unclear for example q has been used to denote the posterior but it denotes the prior in eq 5 the author response addresses my major concern on the experimental results therefore i have updated my rating from 5 to 6 docsepthis paper studies the adversarial robustness of dnns with bayesian neural networks although bnn has been integrated with adversarial training for better robustness this paper argues that the previous method lacks stochasticity ie the posterior tends to have zero variance thus limiting the robustness performance in this paper a new hierarchical variational inference is proposed to enhance the robustness when integrating with adversarial training the proposed method is presented well and the experiments show the effectiveness of the proposed method besides i have a few concerns about this paper 1 this paper argues that the previous method advbnn learns a posterior distribution that has near zero variance but does not analyze why advbnn causes this phenomenon do normal bnn models without adversarial training also have this issue 2 the experimental settings are unclear in the context the parameters of pgd and eotpgd are not stated eg number of steps step size number of samples in eot etc therefore it is hard to judge the significance of the results 3 this paper lacks a comparison with the stateoftheart methods a common practice is to use wide resnet models for adversarial training and set the linfty norm of perturbation as 16/255 i suggest the authors compare with the public adversarial training models
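to make the variance-collapse discussion concrete, below is a minimal numpy sketch of the closed-form kl regularizer between a mean-field gaussian posterior and a standard normal prior, the elbo term referred to in the first review; the takeaway that this term alone penalizes sigma going to zero is elementary, but attributing the observed collapse to the adversarial likelihood term is an inference from the reviews, not the papers derivation.

```python
import numpy as np

def kl_meanfield_gauss_to_std_normal(mu, sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over weights.
    Closed form per dimension: 0.5 * (sigma^2 + mu^2 - 1 - log sigma^2)."""
    return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))

mu = np.zeros(10)
for s in (1.0, 0.1, 0.01):            # shrinking posterior scales
    sigma = np.full(10, s)
    print(s, kl_meanfield_gauss_to_std_normal(mu, sigma))
# The KL penalty grows as sigma -> 0 (the -log sigma term blows up),
# so near-zero posterior variance is not encouraged by this prior term;
# under this reading, collapse must come from the data/adversarial term.
```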
### Summary:
|
this paper improves on previous work advbnn with hierarchical variational inference it observes that meanfield vi training for bnns often results in closetodeterministic approximate posterior distributions for weights which effectively makes the bnn closer to a deterministic neural network thereby losing the robustness advantage of stochastic neural networks to address this a hierarchical prior is proposed on the weights which together with the corresponding approximate posterior design aims at preventing the collapse of the variances of the weights towards zero this improved version of advbnn is shown to be reasonably better than the original advbnn and their deterministic counterpart against the pgd and eot attacks on various benchmark datasets in the adversarial robustness literature reviewers initially had questions about whether the comparison is fair to the original advbnn since the reported results were very different this issue has been addressed by the authors during the author feedback period after that reviewers agreed that the proposed approach is a good extension of advbnn towards making it more robust they also agree that the analysis of the original advbnn in terms of posterior variance collapse is interesting and potentially useful although they also pointed out the link between the increased variance of the proposed method and better uncertainty estimation is unclear in revision i would encourage the authors to clear up the confusion of the reviewers by clearly stating the comparison setting with the original advbnn and better clarify the methodology
|
Below is given review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this work aims to propose an algorithm and theoretical analysis for learning from the label proportion problem where the labels are given in aggregated form as a proportion of true labels in a bag of features the proposed algorithm is for the case when the size of the bags is less than three and it comes with guarantees on the fraction of satisfied bags the theoretical analysis shows the hardness of the learning problem some experimental results on synthetic data are provided i find the hardness results saying that satisfying more than a 1/q + o(1) fraction of bags in the learning from label proportion problem is nphard to be interesting since most of the previous work focuses on developing new algorithms while not much looks into the inherent hardness of this problem however here are a few concerns my main concern is the readability of this work even though the hardness result is interesting i find it hard to understand the proofs for the overview provided in sec 1.5 it would be helpful if the definitions of the label cover problem and the template of a dictatorship test are formally stated to make this work more selfcontained also the proof in sec 5 is hard to parse since it is mostly formulas without intuitions the readability of this proof needs to be improved probably by making it more verbal besides there are too many typos that harm readability a lot see below it seems that the complexity of the proposed algorithm is at least cubic in the size of features which makes it impractical it would be helpful if the authors provide a detailed discussion on the complexity of the proposed algorithm i wonder how the runtime increases as the dimension d increases the empirical evaluation seems too toy and a comparison with existing learning from label proportion algorithms is missing missing references on existing work on learning from label proportions: scott c, zhang j, learning from label proportions a mutual contamination framework advances in neural information processing systems 2020 33 22256-67 typos: line 97 the lower case f is not defined line 97 it is unsplit by f is the latter should be it is unsplit by f if the latter line 110 in the worst case in the worst case (duplicated phrase) line 129 r is not defined at line 150 the symbol is used to denote a formula while later at lemma 2.4 the same symbol is used to denote another formula line 205 e is undefined line 310 a right parenthesis is missing yes docsepthe paper proposes algorithms for learning linear threshold functions (ltfs) from label proportions in this model the learning algorithm is given bags of points with the proportion of points in the bag labeled 1 the goal is simply to find an ltf that maximizes the number of bags on which it labels the points exactly at the right proportion the problem is nphard so approximation algorithms are considered the paper improves upon the previously known lower bound for bags of size 2 it also gives a 1/12 guarantee for bags of size 3 and in general an Ω(1/q) guarantee for bags of size q it is also shown that it is nphard to approximate the problem with bags of size q beyond 1/q + o(1) the method is to solve a semidefinite programming relaxation then round the result using a random hyperplane the main contribution of the paper is the new sdp relaxation which is new nontrivial and interesting the model has been studied before and the bags input is justified by issues of privacy/legal constraints however im not completely convinced by the justification for the objective function (that for small bags its reasonable) why insist on getting as
many bags as possible to have the exact input ratio perhaps at the expense of gross errors on the other bags alternatively one can try to minimize the total deviation or the maximum deviation or a host of other alternatives none docsepthis paper studies the fairly natural learning problem of learning from label proportions the underlying problem is realizable supervised classification however instead of receiving labelled instances the learner receives multiple bags of instances and the proportions of positively labelled instances in each bag ie the label proportions this is motivated by eg privacy concerns the goal of the learner is to find a hypothesis from a hypothesis space which predicts the instance labels in all bags in a way such that the number of bags with correctly predicted label proportions is maximised this generalises regular supervised learning as for bags of size 1 this corresponds to classifying all instance labels correctly the main concern of the authors is to construct a polynomialtime empirical risk minimiser (or the equivalent thereof) in this label proportions scenario and not generalisation aspects (eg to new independently drawn bags) in particular the authors focus on this label proportions learning scenario in the case of linear threshold functions ie halfspaces for bags of size ≤ 2 previous work suggested first algorithms and hardness results this paper improves the hardness lower bound for bags of size at most 2 from essentially 1/2 to essentially 4/9 additionally this paper generalises these results to bags of size ≤ 3 and achieves similar results there through an involved sdp-based relaxation they also derive hardness lower bounds for the general case with bags of arbitrary size (which also applies to the improper case of functions that depend on a constant number of halfspaces) and propose an algorithm which asymptotically achieves this bound however using a weaker notion of satisfied bags they also prove a novel characterisation of psd matrices a, b satisfying a ⪯ b which might be of independent interest finally the theoretical results are complemented by first practical experiments where the authors compare their approach with a simple randomised baseline on synthetic datasets very wellwritten and easy to follow paper the contributions will be most likely interesting to a theoretical subcommunity of neurips and potentially enables further work building on top the experiments are sufficient for such a theoretical contribution in fact they are not really necessary minor points why do the authors not use the more common and simple term halfspace instead of ltf the two overview sections 1.4 and 1.5 are helpful to round it up the authors could include pointers to the actual lemmas and theorems eg while discussing line 164 and following maybe note that the full statement is in lemma 2.3 or multiple times on page 5 please add a short section on llpltfq with the most important details in the main paper as well for example by moving the experimental evaluation to the appendix standard references for the dictatorship test the label cover problem and the folding trick would be very much appreciated i would switch the order of 4 (experimental evaluation) and 5 (hardness result) to separate the theoretical results from the empirical ones typos: berryessen → berry-esseen; line 126 missing space between proceeds and since. the paper only discusses the computational problem of finding a hypothesis such that a maximum number of bags is satisfied (have correctly predicted label
proportions) however as the problem is called learning from label proportions a discussion of possible generalisation bounds would be very interesting (predicting label proportions of unseen bags) previous work eg [28] apparently has generalisation bounds
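since both reviews lean on the sdp-plus-rounding template without spelling it out, here is a hedged sketch of the generic random-hyperplane rounding step; the psd matrix x is assumed to come from some already-solved relaxation (not reproduced here), and this goemans-williamson-style procedure is only a stand-in for the papers actual rounding.

```python
import numpy as np

def random_hyperplane_rounding(X, objective, n_trials=100, seed=0):
    """Round a PSD SDP solution X to +/-1 labels via random hyperplanes,
    keeping the labeling that scores best under `objective`."""
    rng = np.random.default_rng(seed)
    w, U = np.linalg.eigh(X)                  # X = U diag(w) U^T
    V = U * np.sqrt(np.clip(w, 0.0, None))    # factor X = V V^T
    best_labels, best_val = None, -np.inf
    for _ in range(n_trials):
        g = rng.standard_normal(V.shape[1])   # random hyperplane normal
        labels = np.where(V @ g >= 0, 1, -1)  # side of the hyperplane
        val = objective(labels)
        if val > best_val:
            best_labels, best_val = labels, val
    return best_labels
```

in an llp setting the objective passed in would count the bags whose label proportion is matched exactly, mirroring the papers notion of a satisfied bag.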
### Summary:
|
all of the reviewers found the theoretical results in this paper novel and significant in particular the main contribution of the paper which is the new sdp relaxation appears to be nontrivial and interesting however there remain concerns about readability of the paper as outlined by one of the reviewers and we request that the authors put some effort into addressing them
|
Below is given review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper considers using predictions to improve algorithm running time via warm start as in the recent work due to dinitz et al 2021 the authors propose a framework based on discrete convex analysis which applies a discrete steepest descent approach to go from an initial feasible solution to a global optimum each iteration requires solving a local optimization problem to find a descent direction and the number of iterations can be bounded by O(‖p − p*‖∞) where p is the initial solution and p* is an optimal solution algorithms for projecting infeasible predicted solutions onto the feasible set with respect to the ℓ∞ norm and efficiently learning an initial predicted solution are also given to complete the framework using this framework the authors give warm-start-with-predictions algorithms for weighted bipartite matching (improving the result of dinitz et al 2021) weighted matroid intersection discrete energy minimization and the more general class of l♮-convex functions this paper gives a nice improvement over dinitz et al 2021 for weighted bipartite matching and also extends the ideas to other problems the writing is a bit dense but overall well explained an experimental evaluation may be interesting but is probably not necessary for the current submission na docsepthis paper extends recent work by dinitz et al on using the learningaugmented algorithms model to speed up graph algorithms by warmstarting the dual solutions in particular they focused on the hungarian algorithm for minimumcost bipartite matching in this paper the authors interpret this idea of warmstarting in the context of discrete convex analysis inspired by the idea that warmstarting is a well studied concept in convex optimization through this lens they give an improvement in the mincost matching problem reducing the dependence on the prediction error from linear in the ℓ1 error to linear in the ℓ∞ error their framework is rather general and the authors apply it to get learningaugmented algorithms and bounds for weighted matroid intersection and discrete energy minimization as well the authors also provide pac bounds on learning predictions for these problems under the ℓ∞ norm strengths the connection between warmstarting in dinitz et al and in the framework of discrete convex analysis is interesting and general the improvements in the bounds from prior work are significant changing the error dependence from ℓ1 to ℓ∞ i thought the discussion of learning primal vs dual solutions in section 5 was very interesting it is worthwhile to consider what factors make some predictions useful or not for algorithms with predictions weaknesses for large graphs the pac bounds require many samples in particular if we set c = n‖w‖∞ we need at least n²‖w‖∞²/ε² sample instances which for large graphs seems extreme the bounds are a significant improvement over the bounds given in prior work but im not sure how informative they actually are it seems that many of the techniques used in this paper come almost directly from prior work nonetheless i think the application and combination of these ideas makes for some interesting insights and results on learningaugmented algorithms perhaps most importantly seeing as both this and the prior work are in some sense not comparing themselves to the best theoretical algorithms for matching even if the predictions are nearperfect the theoretical bounds will be worse it would be more compelling if some experiments were done to show that this discrete convex analysis approach
is actually a better way to approach the problem than dinitz et al i found the discussion of limitations to be adequate i do not foresee any specific negative impacts of this work as it is quite theoretical and abstracted away from applications docsepthis paper presents results illustrating the possible benefits of initializing learning algorithms based on predictions by using a discrete convex analysis framework the results indicate improved bounds for the sample and time complexity the usefulness of the framework is shown by applications to weighted perfect bipartite matching weighted matroid intersection and discrete energy minimization finally whereas the choice whether to learn primal or dual solutions has typically been made somewhat heuristically the paper uses the proposed framework to give principled guidelines the paper is clearly written and structured addresses a pertinent issue and seems to provide novel improvements on prior work the presentation is neat the improvements over prior results are significant for instance the time complexity of some algorithms is improved by up to a factor of n where n is the number of vertices one weakness of the results is that unlike many previous analyses of the possible performance of learning algorithms due to the use of predictions the analysis is unable to recover optimal worstcase bounds a more extensive discussion of the applicability of the given convexity assumptions and how these contrast with other notions of convexity could be beneficial in order to more clearly motivate the work and clarify the benefits and limitations the authors clearly state the limitations but an extended discussion on what classes of problems enjoy the different convexity notions could be beneficial
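as a small illustration of the warm-starting idea discussed in both reviews, the sketch below repairs predicted dual potentials to feasibility for min-cost bipartite matching before any primal-dual (hungarian) iterations; the feasibility condition y_left[u] + y_right[v] <= c[u, v] is standard, but treating this particular one-sided repair as the papers l-infinity projection step is an assumption made for illustration.

```python
import numpy as np

def repair_duals(c, y_left, y_right):
    """Make predicted duals feasible for min-cost bipartite matching:
    enforce y_left[u] + y_right[v] <= c[u, v] for all u, v by lowering
    each left dual to its tightest slack. How far the duals move is
    governed by how wrong the prediction was (an l_inf-type error)."""
    y_left = np.minimum(y_left, (c - y_right[None, :]).min(axis=1))
    return y_left, y_right

# hypothetical usage: random costs and a noisy predicted dual solution
rng = np.random.default_rng(0)
c = rng.integers(1, 10, size=(4, 4)).astype(float)
y_r = np.zeros(4)
y_l_pred = c.min(axis=1) + rng.normal(0, 1, size=4)   # noisy prediction
y_l, y_r = repair_duals(c, y_l_pred, y_r)
assert (y_l[:, None] + y_r[None, :] <= c + 1e-9).all()
```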
### Summary:
|
in this paper the authors provide new theoretical guarantees for augmenting algorithms with learned predictions based on discrete convex analysis dca they generalize previous results of dinitz et al and obtain better time complexity bounds for a number of online problems the application of dca to online algorithms with predictions is interesting and the improvements in the bounds are significant
|
Below is given review of a research paper from a conference journal. Please write a summary of the review.
### Review:
standard distances or dissimilarity measures are too often used regardless of the specific nature of the data this paper deals with small count data and opens the way to use novel dissimilarity measures more appropriate to such data in a wide range of tools and applications the second part of the paper intrinsically requires the 9 pages of additional material which is mostly gathered in appendixes for paper length reasons furthermore even if the dimensionality reduction part of the paper is an important application of the proposed dissimilarity measures this part of the paper is less convincing it mixes linear and nonlinear methods whose goals are quite different and it seems to make the assumption that dr methods can be evaluated by clustering performances which is rarely the case on realworld data the paper introduces new dissimilarity measures for small count data this part of the paper is innovative to my knowledge very well written and may open the path to a wide range of algorithms and applications of these dissimilarity measures although dimensionality reduction is an excellent application of these measures and should certainly not be removed from the paper the link between the dissimilarity measures clustering and dimensionality reduction is not clear enough clustering could be performed without dimensionality reduction while the latter is mainly intended for visualization in addition the paper uses both linear and nonlinear dr methods which is confusing in most cases linear dr methods will decrease the quality of clustering while in nonlinear dr methods this decrease is largely influenced by the tricky metaparameters that have to be adjusted for example in tsne there exist now better methods that integrate all scales in a single dr and that are therefore much less sensitive to metaparameters given that it seems obvious that the authors had to rely on the supplementary material to increase the length of the paper beyond 10 pages making the 10-page paper itself hardly selfcontained i suggest concentrating the paper on the dissimilarity measures and shortening the part on clustering and dr which may be seen as two possibly independent applications of the new measures docsepthis paper tackles an important problem to define an informationtheoretical distance for high dimensional count data for dimensionality reduction the problem setting suits the uai conference well the quality of writing is satisfactory overall the novelty is limited information theoretical measurements have been used for dimensionality reduction see for example carter et al 2011 the proposed measurements are mainly instances of methods that are already known the distance measures listed in table 1 are simply kl divergences between poisson/nb posterior distributions it is expected that the poisson/nb distribution can better model count data especially in the posterior setting kevin m carter raviv raich william g finn alfred o hero iii informationgeometric dimensionality reduction 2011 the evaluation metric r(fx, fy) corresponds to the discrimination power of the distance measure with respect to two given distributions it is not clear how it aligns with the objective of dimensionality reduction in dimensionality reduction we usually consider the spectrum of the pairwise proximity matrix i am not fully convinced that r(fx, fy) is a good measure for dimensionality reduction introduce the abbreviation vst at its first appearance eq 1 extra proposition 1 explain the notation \stackrel{p}{\to} corollary 2 suddenly here the statement
uses the lim notation please unify it with the \to notation section 3.3 given a pair of distributions fx fy and that of measures is grammatically incorrect section 4.3 heavily refers to the appendix and this is hard to read please move these contents into the appendix or move the figures into the main text after rebuttal: i have read the authors responses and had another look at the paper embedding or performing dimensionality reduction on probability distributions is not new the significance of the proposed dissimilarity measure in section 2 lies in applying existing techniques to new data (small counts) despite the good performance reported the proposed distance measure has more parameters that correspond to the priors see table 1 in the paper this distance is assessed based on the authors definition in section 3.1 i can agree that it is a reasonable definition however to be more complete the evaluation of different distances should be based on some other evaluation method such as the spectrum of the pairwise distances besides the authors own evaluation method overall i wont be disappointed if the paper is accepted but recommend the authors to have a revision in either case docsepthe authors provide a strong conceptual foundation for the measures they propose and validate their expectations with both simulated and real data numerical assessments confirm the conclusions suggested by visual inspection my only concern is that the paper does not devote sufficient attention to the results achieved since the newlyproposed dnb performs so well on certain data sets and only reasonably well on others it would probably be quite informative to investigate the specific characteristics of the contrasting conditions that are responsible for these differences in fact such an investigation could possibly also give useful intuitions on the applicability of say gplvm vs tsne the paper is well thought out and easy to read even for someone like myself who is not an expert in this field i caught a few minor issues in the wording and list those below last paragraph of sec 1 discriminate ability should be discriminative ability the statement it is straightforward to see that the higher the value of r(fx, fy) the more powerful the discrimination ability of the dissimilarity measure seems too strong for example a nonlinear transformation of a given d may enhance r without affecting discrimination i would suggest something like by construction a higher value of r(fx, fy) will tend to indicate more powerful discrimination ability the relatively weak performance of dp for data that is truly poisson distributed (table 2) needs some explanation in sec 4.3 i suggest replacing the uses of dr dp and dnb lead to more with something like applications of dr dp and dnb lead to more docsepin this paper the authors mainly focus on dimension reduction techniques for highdimensional small count data specifically the authors took both poisson and negative binomial (nb) distributions into account and proposed two dissimilarity measures for small count data based on the kullbackleibler (kl) divergence in addition they used empirical bayes estimators to estimate the distributional parameters so as to calculate the kl divergence reliably they also proposed an index to evaluate the discrimination abilities of different dissimilarity measures and show their proposed dissimilarity measures can achieve superior discrimination ability compared with other widely used dissimilarity measures 1 from the results in tables 2 and 3 and the description on the right column of page 5 for small
count data following poisson distributions dlog achieves better performances than dnb please give more detailed explanations and analysis of why the proposed dnb cannot achieve the same good results as dlog in these scenarios 2 the length of the appendixes in this paper is too long more importantly many results and analyses in section 4 rely on the figures in the appendixes i suggest the authors show the important results in the main body of the paper and add more detailed analysis about the results shown in the main body rather than the appendixes 3 there are some evident typos in the paper for example on the left column of page 3 three lines above equation 4 pois(lambda_i^x) should change to pois(lambda_i^y) and on the second line on the right column of page 4 fx and fx should change to fx and fy in my opinion the motivation of this paper is clear and the proposed dissimilarity measures seem reasonable but the writing of this paper still needs to improve
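to give the flavor of the proposed measures, here is a minimal numpy sketch of a kl-based dissimilarity between poisson models of two small-count vectors, with a gamma-posterior-mean plug-in standing in for the empirical bayes step; the symmetrization and the gamma(a, b) pseudo-counts are illustrative choices, not the papers exact dp or dnb definitions.

```python
import numpy as np

def kl_poisson(lam_x, lam_y):
    """Elementwise KL( Pois(lam_x) || Pois(lam_y) ) in closed form."""
    return lam_x * np.log(lam_x / lam_y) + lam_y - lam_x

def poisson_dissimilarity(x, y, a=1.0, b=1.0):
    """Sum over dimensions of the symmetrized KL between Poisson rates,
    with each rate replaced by a Gamma(a, b) posterior mean (x_i + a)/(1 + b)
    so that zero counts do not produce infinities on small count data."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    lx = (x + a) / (1.0 + b)
    ly = (y + a) / (1.0 + b)
    return np.sum(kl_poisson(lx, ly) + kl_poisson(ly, lx))

print(poisson_dissimilarity([0, 1, 0, 2], [3, 4, 5, 2]))
```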
### Summary:
|
meta review accounting for the presence of many zeros in dimensionality reduction for count data is an important topic the ac and reviewers all agree that this paper makes some pertinent contributions to that problem for such contribution to be fully clear and impactful we strongly urge the authors to follow up on their promise to incorporate several important points in their revision including clarifying the link between dissimilarity measures clustering and dr by incorporating the points the authors made in their feedback to reviewer 8a5w and as the reviewer suggested focusing on the dissimilarity measure portion while shortening the 2 applications to make the paper more self contained clarifying the motivation of using the proposed r index to evaluate dissimilarity measures as opposed to the spectrum of the pairwise distances incorporating the new results provided as feedback to reviewer yrep
|
Below is given review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the question about the performance of transformer models for medical imaging data is quite relevant at the moment relatively well written paper and easy to follow comparison qualitative and quantitative to the winning method of the hecktor challenge as well as nnunet the paper uses data from the hecktor challenge the challenge seems to be closed for new submissions at the moment so the authors use only the training set and make their own experiments this is clearly stated in the text but i feel that it should also be stressed in the caption of the results table so as to discourage direct comparison to the results of the challenge which would not be fair in table 3 the results of the vanilla unets are taken from the challenge overview paper again this is on another dataset and i feel that it makes things confusing it would be best to remove them docsep the paper is clearly written with a clear focus as a validation study the experimental setup is done on a publicly available challenge dataset a good ablation study on the performance difference of the model when different data augmentations are applied this is indeed necessary as the performance of transformer based models is notoriously dependent on the data augmentation the performance of the transformer based model is clearly compared against the most popular nnunet based model in the discussion it is clearly mentioned that the transformer based model is not giving better performance but is still an active area of research this is good as there are no over-the-top claims although the literature review is commendable it focuses too much time and space on the hn tumor segmentation literature review instead of the transformer based literature review it would have been more beneficial for the authors to focus on transformer applications in the medical image segmentation task
### Summary:
|
this paper investigates the relevance of transformer networks for segmentation in the context of head and neck tumors all reviewers recognized that the paper is relatively well written but also the need to report the performance on hecktor 2021s test set this is important since the transformers seem not to outperform classical cnns this is an important negative result to report but would be more impactful if reported on the test set i recommend the acceptance of the paper but highly suggest the inclusion of test performance in the camera ready and adaptation of the conclusions if needed
|
Below is given review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposes a nonparametric reasoning method for reasoning on incomplete knowledge bases specifically for the task of finding a target entity given a source entity and a relation since this specific relation might be missing for the source entity multihop reasoning is required to get the answer to get the reasoning paths this paper proposes to first retrieve similar entities from the knowledge base that have the same outgoing relation and then gather all possible reasoning paths from these retrieved entities finally these reasoning paths extracted from other entities can be applied to the source entity in the query to get the answer the methods proposed in this paper are simple yet very effective they outperformed previous strong models on nell995 and fb122 moreover because of the nonparametric property this method is also robust in low data settings i also like the general thinking that instead of encoding all the reasoning rules into model parameters the casebased reasoning system might be worth more attention i think this paper gave a good initial attempt and established a good framework for future work for example as the author mentioned in the paper neural relation extraction systems can be incorporated to replace the exact-string matching therefore i think this paper should be accepted docsepthis paper provides a very simple approach to the problem of knowledge base completion the idea is this given a query (subject, relation) you find other entities similar to the subject see which other paths they can take to their corresponding object if they express the relation and check if the subject expresses those paths objects reached this way are candidate answers and the one with the most paths reaching it is marked correct one question i have for the authors is that when the newly discovered relation is entered into memory does it come with any form of weighting that tells us how confident the model is in its prediction for example if we take the (melinda gates, works in) query the expressed path (ceo, based in) may not be correct for this new subject perhaps a discussion on this problem will make the paper stronger why do we need caching cant the paths be discovered in real time can there be better heuristics designed that can be used to filter paths at test time depending perhaps on the subject of the query itself the authors present results on multiple datasets where they are either sota or competitive i cannot comment on this with complete confidence if the author has missed any other relevant comparisons they also perform qualitative testing to see why their model has good performance an error analysis on this model would also make this paper stronger in general i like this approach for its simplicity and generality as the authors note and in hindsight it seems surprising why this has never been tried before docsepthis paper proposes a nonparametric approach for reasoning on knowledge graphs the described approach cbr (casebased reasoning) includes several steps including matching similar entities and extracting rules that are used during inference to select the right entity the approach is shown to be effective in knowledge base completion (fb122 wn18rr) and query answering (nell995) and yields better overall results than several competitive parametric approaches including transe distmult complex minerva asr kale gntps pros evaluation in terms of performance is sufficient compared to several approaches on two tasks (kbc and query answering) and three different knowledge graphs state of the art results over several recent wellperforming
approaches analysis of why the approach is better than one of the compared systems (minerva) is insightful the approach is simple easy to understand and its modules can easily be extended with recent ml/dl modules the reasoning seems interpretable to some extent since there are actual rules that are retrieved and used cons the limitations of the approach are not discussed in detail what is the inference time compared to other parametric approaches could you include these in the paper with the current approach what are the limitations of the size of the graph in terms of the number of triples entities and relations the title of the paper seems too general this is not the first paper to propose nonparametric approaches for reasoning over knowledge graphs nor is it an overview paper if the paper is accepted the title must be changed to a more specific one
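because the case-based procedure is simple enough to state in a few lines, a hedged sketch may help; the dict-based triple store, the overlap similarity, the path length cap, and the frequency voting below are all assumptions made for illustration and not the papers exact implementation.

```python
from collections import Counter

def cbr_answer(kg, subject, relation, k_similar=5, max_paths=20):
    """Case-based reasoning over a KG stored as
    kg[entity][relation] -> set of object entities.
    1) retrieve entities similar to `subject` that have `relation`,
    2) collect relation paths (length <= 2) from them to their answers,
    3) apply those paths starting at `subject`, 4) vote over endpoints."""
    def follow(e, path):
        frontier = {e}
        for r in path:
            frontier = {o for x in frontier for o in kg.get(x, {}).get(r, ())}
        return frontier

    def similarity(a, b):           # overlap of outgoing relation types
        return len(set(kg.get(a, {})) & set(kg.get(b, {})))

    cases = sorted((e for e in kg if relation in kg[e] and e != subject),
                   key=lambda e: -similarity(subject, e))[:k_similar]

    paths = []
    for e in cases:
        answers = kg[e][relation]
        for r1 in kg[e]:            # length-1 and length-2 relation paths
            mids = kg[e][r1]
            if mids & answers:
                paths.append((r1,))
            for m in mids:
                for r2 in kg.get(m, {}):
                    if kg[m][r2] & answers:
                        paths.append((r1, r2))

    votes = Counter()
    for p in set(paths[:max_paths]):
        votes.update(follow(subject, p))
    return votes.most_common()      # candidates ranked by path support
```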
### Summary:
|
this paper proposed a casebased reasoning (nonparametric) approach for a widely studied knowledge base completion task: given a subject and a relation predict the object based on a given knowledge graph the idea is novel and simple given the subject it retrieves similar entities in the whole kg and corresponding reasoning paths with respect to the query relation and uses multiple paths of evidence to derive an answer the approach has been evaluated on multiple benchmarks and demonstrates excellent performance all the reviewers think this is a strong paper that would lay out a solid framework for future work in this direction we recommend accepting this paper as per the suggestions by the reviewers it is a good idea to consider adding casebased reasoning to the title to reflect the key idea of this approach it would also be desirable to discuss how this approach compares to other existing approaches (inference time scalability etc) in addition to accuracy metrics
|
Below is given review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper formulates a general form of federated adversarial learning (fal) based on which the authors theoretically analyze the convergence of fal they concluded that convergence is possible with properly chosen learning rates and communication rounds for distributionagnostic clients strengths the robustness of neural networks is an important problem and how distributed learning will converge under adversarially robust training is also crucial in practice the authors have clarified most assumptions and conclusions the proposed analysis method (pseudo gradient based on a pseudonetwork) is interesting which may be inspiring for other theoretic work weakness the challenges caused by the interaction of the federated optimization and the minmax optimization are not clear enough it is not specific enough to mention that local update steps and global communication are novel settings the authors may need to elaborate on why the integration is hard in terms of convergence analysis specifically how the problem is different from traditional federated learning with distributional shift [a] if the adversarial noise is simply treated as a bounded noise the discussion of related work on federated convergence seems missing either in the introduction or related work sections for example both [a] and [b] theoretically analyze federated learning under noise the technical implications of the main theorem 4.1 are not elaborated either in section 4 or later the authors only vaguely mention that controlling local/global learning rates will be helpful for convergence in section 6 but it is not clear how. actually i believe it is folk knowledge that decreasing the learning rate is important merely for fedavg [c] lacking useful implications apparently lowers the value of the work in the community of iclr references: [a] reisizadeh a farnia f pedarsani r jadbabaie a 2020 robust federated learning the case of affine distribution shifts neurips [b] yin d chen y kannan r bartlett p 2018 byzantinerobust distributed learning towards optimal statistical rates icml [c] li x huang k yang w wang s zhang z 2020 on the convergence of fedavg on noniid data iclr the studied problem is interesting and important however the related work is not well discussed and the conclusion is weak for me the authors should try to clarify and establish the novelty of the problem against related robust federated learning work docsepthis paper proposes a federated adversarial learning (fal) framework with strong theoretical guarantees compared to the centralized model the federated model allows each local client to generate the adversarial samples and update the gradients itself for several iterations and then communicate to the centralized model for global updating this is the first work that gives convergence guarantees for fal their technical analysis mainly involves two parts in the first part they utilize the overparameterization and separability assumptions to ensure the initialized model is close to some model u which can achieve a small robust loss then in the second part by using such a property of the initialized model and by bounding the difference between the real gradient and the fl gradient they are able to show the convergence as the authors stated this fl gradient is a new gradient they use to tackle the difference between global and local updates strength 1 this paper provides rigorous analysis and all the details are well written 2 adversarial learning and federated learning are both becoming more and more important in the ml area this paper is the first work to give
theoretical guarantees on afl which i believe has some significance 3 their proposed afl framework also shows many promising directions for future analysis weakness 1 as the authors claimed in the paper their analysis is not tight they argue that their main focus is to give the first step for afl convergence analysis 2 their technical contributions despite rigorous analysis are limited a they use many assumptions including overparameterization separability and bounded adversarial samples although i understand this is somehow standard in much previous analysis i still think it simplifies a lot of proofs b many of their proofs are built upon previous results for example the first part of their analysis the existence of small robust loss is mostly a small modification of existing methods as the authors show in appendix f and has little relation with fl in the second part although they emphasize their analysis on the newly defined fl gradient to me it is a very natural definition and the analysis to bound those differences is also expected question in definition b4 gammaseparability it seems to me that in order to satisfy gamma leq delta/(delta + 2rho) we require rho delta2 14 and i wonder will this assumption simplify the analysis a lot can you give more justification for this assumption what if you have a weaker assumption like allowing the adversary to be even stronger is it still possible to prove the convergence overall i think the author proposes a useful framework with rigorous analysis but i am somehow concerned with the technical novelty of this paper i think it would be better if the author can get a tighter bound and make more efforts to relax some assumptions or give more discussion on the necessity and difficulty of those assumptions docsepthis paper studies the provable properties of applying adversarial training to federated learning from the theoretical perspective specifically the authors provide a framework for analyzing federated adversarial training and present the convergence analysis in the overparameterized regime the main results theoretically show that the minimal value of the loss function in this learning paradigm can converge to epsilon small under certain circumstances pros 1 this paper is well written and organized which allows a clear read 2 this work is well motivated by the challenge of applying adversarial training to federated learning 3 this paper provides a framework to analyze federated adversarial training in overparameterized neural networks cons 1 the first reason why it is harder to analyze the convergence of federated adversarial training seems not to be unique to the federated learning paradigm and in my opinion the second reason is the cause of the former thus it may be better to consider the noniid setting in federated learning as one important research focus when analyzing the plain combination of adversarial training with federated learning or to give more insights about the challenge of adversarial training in the noniid setting 2 although this paper provides a comprehensive proof of the theoretical convergence guarantee for overparameterized relu networks on the presented federated adversarial training it is limited in terms of showing other insights into the unique challenges that adversarial training will meet in federated learning which may be more significant and interesting for other future research minor 1 the two reasons in the abstract seem to be inconsistent with the challenges involved later in the introduction do they refer to the same focus
this paper provides convergence analysis for federated adversarial training which theoretically shows the feasibility of applying adversarial training to federated learning however it may not be easy to see some indepth insights or huge novelty in the current submission docsepthis paper proposes a federated adversarial learning framework where clients generate adversarial data and do local updates while the server aggregates the local models to do global updates 1 it is unclear what the technical difficulties are in extending classical centralized adversarial training to the federated setting given current research on adversarial learning federated learning and deep learning theory what are the contributions of this submission what are the gaps that this paper addresses 2 there is a lack of experiments which makes the proposed methods unconvincing can the proposed framework really enhance the robustness of the federated learning has the data heterogeneity issue really been solved and so on 3 it would be very helpful if the authors gave some concrete application examples in federated learning where a specific data generating algorithm $\mathcal{A}$ can help to fix a certain type of robustness concern by both theory and experiments i would like to see these concerns addressed by the authors feedback i tend to vote for a rejection due to the lack of novelty clear theoretical justifications and experiments docsepthis paper is a direct followup of zhang et al 2020b with assumptions including overparametrized twolayer relu network normalized dataset gammaseparability and lipschitz convex loss it proves the convergence of fedavg under adversarial perturbation these assumptions are easy adaptations from zhang et al 2020b post rebuttal i would like to thank the authors for the response however after reading the author response my opinion remains the same as the authors acknowledge that this work is an extension of zhang et al 2020b and there is no experiment after the revision my main concern is still the novelty i would encourage the authors to work on the future direction of empirically verifying their general framework strengths combining adversarial robustness with federated learning is definitely a new setting and has the potential of training robust models collaboratively against perturbation this paper aims to study fedavg theoretically in this setting and it gives theoretical results for the convergence by controlling the noise from local updates from the twolayer network and from the adversarial perturbation weakness one of my major concerns is the novelty against zhang et al 2020b specifically 1 def 31 32 are the same as sec 32 from zhang et al they are not definitions per se but assumptions 2 def 33 is the same as def 32 from zhang et al 3 def 34 is the same as def 34 from zhang et al 4 def 35 is the same as def 31 from zhang et al 5 def 36 resembles def 33 from zhang et al 6 thm 41 resembles thm 41 from zhang et al the techniques such as the pseudonetwork are also from zhang et al the improvement is not strong either since zhang et al proposed the linear pseudonetwork to approximate the twolayer overparametrized relu using the fl gradient to approximate the gradient is nothing but adding another approximation error term given that the theoretical contribution is not quite strong i would expect the authors to conduct experiments in fl adversarial training to verify their theory however no experiment is provided the reason might be because the theory analyzed does not exactly match the common
experimental setting in fl eg twolayer neural networks polynomial hidden units convex lipschitz loss and gammaseparability i doubt whether such theory could be of any use to the fl community in fact many adversarial attacks are not small perturbations eg adversarial patch 1 brown et al adversarial patch nips 2017 however the current theory requires very small perturbations in order to control the approximation error table 2 appendix e given the above novelty concern and the lack of experiments i would recommend rejection i would suggest the authors rethink the practicability of the assumptions considered rather than directly borrowing these assumptions from existing literature it would be much more interesting if the authors could give some experiments to demonstrate the proposed algorithm
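To ground the setup these reviews debate, below is a minimal sketch of a federated adversarial learning round as the reviews describe it: each client crafts adversarial examples via an inner PGD maximization, takes several local steps on the robust loss, and the server averages the resulting models. All names, the PGD attack, and the hyperparameters are illustrative assumptions, not the paper's construction (which analyzes an overparameterized two-layer ReLU network with an abstract data-generating algorithm).

```python
import copy
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.1, alpha=0.02, steps=5):
    # inner maximization: l_inf pgd around the clean inputs
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back to the eps-ball
    return x_adv.detach()

def fal_round(global_model, client_loaders, lr=0.05):
    # one communication round: local adversarial training, then fedavg
    client_states = []
    for loader in client_loaders:
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for x, y in loader:                  # local update steps on this client
            opt.zero_grad()
            x_adv = pgd_attack(model, x, y)  # adversarial data generation
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
        client_states.append(model.state_dict())
    # global update: uniform fedavg over parameters (assumes a buffer-free model)
    avg = {k: torch.stack([s[k] for s in client_states]).mean(0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```

A sketch like this also makes the reviewers' concern concrete: the difference between the averaged local gradients and the gradient of the global robust objective is exactly the quantity the paper's "fl gradient" analysis has to bound.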
### Summary:
the reviewers had a number of concerns which seem to remain after the authors response in particular the reviewers were concerned about the validity of the papers assumptions in realworld applications and lack of experimental results also while the reviewers acknowledge the novelty in technical contributions they suggested that the authors explain more clearly how the results of this paper are distinguishable from prior art
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposes lsp a graph pruning method based on localitysensitive hashing lsh for gnn acceleration and regularization the paper shows that the lsp method can preserve the edge similarity after the pruning process additionally the computational cost of gnn is reduced due to the graph sparsification strong points s1 the problem is well motivated to alleviate the computational burden oversquashing and varying neighborhoods problems in gnn models s2 combining the lsh technique with gnn models for graph pruning is an interesting attempt s3 the paper presents several empirical results to demonstrate the effectiveness of lsp s4 the paper presents many insightful examples to illustrate the superiority of lshbased pruning over the random pruning method weak points w1 the paper employs a canonical hashing technique lsh to prune graphs for gnn acceleration and regularization although this idea is interesting the primary contribution of this paper is a combination of existing techniques rather than a new method insightful observation or theoretical exploration w2 as stated in the abstract the paper is motivated by the increasing graph sizes and the computational costs of gnn models however the graphs used in the experiments are modest additionally a line of literature devoted to the scalability of gnn is missing for example sign sign scalable inception graph neural networks gbp scalable graph neural networks via bidirectional propagation pprgo scaling graph neural networks with approximate pagerank w3 the relative time of the baseline method random pruning is omitted from the experiments in comparison to random pruning how does lsp perform in terms of relative time and accuracy tradeoffs w4 the presentation of the paper needs some improvements please refer to the minor comments for details minor comments m1 citations should be added to support the two arguments in the last paragraph of sec 1 m2 page 6 could you provide more explanations on the sentence apparently the complexity of this matrices multiplication does not depend on the number of edges m3 page 8 to that end in the end the papers idea is interesting as evidenced by the experimental results however the theoretical depth and novelty may not be enough to meet iclrs standards additionally the concerns raised in w2 and w3 should be addressed docsepfor largesize graph structures the neighborhood aggregation operation of gnns makes their training and inference slow and expensive to remedy this problem one solution proposed in the literature is to increase the sparsity of the graph by selecting a subset of the edges and only running the neighborhood aggregation over those edges this paper develops an lshbased approach for edge selection which can be run as a preprocessing step the problem studied in this paper is an important research direction and a proper solution to the problem can be quite valuable however i have the following reservations about the current work experimental results for a paper that aims at accelerating gnns it is expected to provide results on largersized datasets this is especially the case because as the authors mention in 2 out of the 4 datasets in figure 3 a random selection strategy seems to be onpar with the proposed approach why not use larger graphs from eg the ogb also why do the curves for the proposed approaches stop at around 0.45 in fig 4 baselines while there are several existing works that solve a similar problem as that of the current paper as mentioned in the main text the proposed approach has been
compared only against a random baseline which is not convincing intuition one thing that is not clear to me after reading the paper is why selecting similar edges is a good pruning strategy example 2 of the paper is also too artificial and does not help much with a slight modification of fig 2 one can easily construct a counterexample where random selection has a 1/36 chance of selecting the right pair of points whereas the proposed approach has a 0 chance correctness of minhash it seems to me that algorithm 1 and the hash functions in section 42 do not correspond to minhashing i might be missing something so this could be resolved during rebuttal heres what i understand from the description there are k hashing functions h1 ... hk for each node v for each hash function hi the neighbour that minimizes hi is added to the neighbours of v looking at equation 3 the neighbouring node that minimizes hi is the one minimizing floor((x · w_i + b_i) / l) the neighbour that minimizes the above value is the one that is farthest away from w assuming l is positive so the algorithm ends up selecting k neighbours that are farthest away from the random vectors which does not seem like minhashing this paper studies an important problem but the proposed solution is not wellmotivated and the experimental results are inconclusive docsepthe paper proposes to prune graph structure based on hashing edge features with lsh the proposed method is a deterministic preprocessing the author evaluates the proposed method by comparing with other sparsification methods pro 1 the paper is well written and clear for motivations easy to follow cons 1 the designed method is too simple just a preprocessing and does not interact with learning 2 baselines are limited and the result doesnt improve over the nonsparsified version 3 the author keeps emphasizing that the proposed method can help 1 training efficiently 2 regularization however the result doesnt support them first there is no large dataset to show the benefit over training time second regularization should help improve over the nonsparsified version but it loses information and does worse than the nonsparsified version i vote for clear rejection the proposed method doesnt show any strength in training large datasets or helping with overfitting via more regularization docsep training gnns on a large graph requires significant computation resources besides the noisy nature of the realworld graphs might cause gnns to overfit to overcome the above limitations this paper proposes a graph pruning algorithm localitysensitive pruning lsp to sparsify the original graph into a sparse graph empirical results show that lsp can remove a significant number of edges from the original large graph reduce the computation complexity and training time but with some compromise on the model performance pros graph sparsification is an important and interesting problem due to the high computation complexity of gnns based on the discussion in the related work section the proposed lsp is novel for graph sparsification besides the idea of using localitysensitive hashing makes sense to me the experiment section is well structured its nice to see the authors evaluate the performance of the model on different underlying tasks cons novelty main concern local sensitive hashing lsh has been used in transformers to sparsify the fully connected selfattention computation graph in transformer lsp seems like a straightforward application from nlp to graph domain from this paper it is not clear why applying lsh to graph neural network is the
challenge motivation the author motivates the lsp using two limitations of existing gnn models in section 1 1 computation complexity and 2 varying neighborhoods ie varying number of nodes participating in the nodes neighborhood leads to varying amounts of information to encode within a fixedlength code it is not clear why varying neighborhoods will hurt the performance of gnn to support their claim the authors provide a discussion on the importance of reducing the variance of the neighborhood size in appendix c which is not convincing in particular in appendix c the authors argue that reducing the variance of the neighborhood size of each node can avoid oversquashing which is not true because oversquashing is only relevant to the neighborhood size but irrelevant to the neighborhood size variance on the other hand the relative difference in neighborhood size can be important information to distinguish two nodes when using the forward propagation rule as defined in eq 1 since the aggregation weight is $1/\sqrt{\deg_i \deg_j}$ which takes the node degree information into consideration experiment this paper lacks enough comparison to baseline methods to indicate the effectiveness the authors only compare with random samplingbased sparsification methods fastgcn and dropedges however there are existing neural networkbased deterministic graph pruningsparsification methods that need to be compared for example 12 as already cited by the authors and missing related works 3 besides even for randombased methods a strongermore recent baseline such as neighbor sampling eg graphsage 4 layerwise sampling eg ladies 5 and subgraph sampling eg graphsaint 6 are expected to be compared since they are reported with better accuracy than fastgcn and dropedges more importantly from the experiments section lsp is still sacrificing performance a lot for example in figure 3 cora even just removing 20% of edges there is already around a 2% performance drop and the number continuously increases as fewer edges are used the same also happens to other datasets clarification this paper proposes a locality sensitive hashing lshbased sparsification algorithm but lsh is not well introduced and is a little bit confusing for example in section 32 definition 1 the authors use the notation d for both dimensions eg $p, q \in \mathbb{R}^d$ and the distance measure eg $d(p, q)$ besides what are $p_1$ $p_2$ the authors are suggested to spend more effort on introducing the lsh algorithm before proceeding to their lshbased algorithm 1 learning to drop robust graph neural network via topological denoising 2 fast graph attention networks using effective resistancebased graph sparsification 3 learning discrete structures for graph neural networks 4 inductive representation learning on large graphs 5 layerdependent importance sampling for training deep and large graph convolutional networks 6 graphsaint graph samplingbased inductive learning method minor comments the proposed lsp is sparsifying the original graph by using the similarity between nodes this requires an assumption on the homogeneity of the graph ie nodes that are connected by edges are similar what happens to other graph structures for example in the useritem graph or heterogeneous graph nodes that are connected by edges are dissimilar this type of similarity assumption does not hold the model configuration in the experiment sections can be improved for example weight decay ratio dropout number of layers learning rate to help reproduce the results despite the pros summarized above the paper lacks enough novelty and requires more
effort on experiments and clarification on several important aspects as mentioned above eg clarification on the why neighborhood size variance and lsh docsepa preprocessing algorithm is presented to prune sparsify an input graph for a graph neural network gnn aimed at preserving the local neighbourhood structure the algorithm uses locality sensitive hashing lsh on the graph edge embeddings placing each edge into an integer bucket the selection of which edges to preserve uses minhash two variants of the algorithm are presented one uses binary thresholding and the other uses random projections this pruning is referred to as locality sensitive pruning lsp advantages of lsp are explained and experiments focus on the gained inference speed vs preserved prediction performance of a gnn model when pruning the graph as a baseline they prune the graphs randomly and show how their approach is better at preserving the prediction performance than this simple baseline experiments use gnns for node classification graph regression and graph classification on numerous datasets where a good performing gnn typically graph attention networks gats are used sparsification of graph inputs is a valuable research direction as highlighted with the oversquashing problem with gnns lsh is a popular approach for edge pruning in graph clustering and seems like a good choice here example 2 motivated the need for locality based reduction of edges where a toy node classification example depends on 2 neighbours for correct classification showing that if each node has 4 edges and they are randomly reduced to 2 the error of misclassification is small relevant when the topology of the graph is mainly important over node attributes the experiments showed benefits of lsp over randomly removing edges but no other methods were compared which i felt is missing there are numerous other ways to prune edgeprune a graph for example graphical lasso 1 and for efficient computation large scale precision matrix estimation 2 which can be applied to graph edge pruning 3 and there exists other related work not mentioned for sparsifying graphs in gnns which could be directly compared for predictive performance of gnns for example 45 with a wallclock time to compare accuracy vs inference speed for example stronger motivation for lsh over other methods of edge pruning for example 2 see 3 for example use of edge pruning and comparison with other edgepruning methods would be beneficial here and comparison with other gnn sparsification methods would strengthen the empirical evidence for the benefits of the proposed lsp 1 https://en.wikipedia.org/wiki/Graphical_lasso 2 zhang richard salar fattahi and somayeh sojoudi largescale sparse inverse covariance estimation via thresholding and maxdet matrix completion international conference on machine learning pmlr 2018 3 strahl jonathan et al scalable probabilistic matrix factorization with graphbased priors proceedings of the aaai conference on artificial intelligence vol 34 no 04 2020 4 wan guihong and harsha kokel graph sparsification via metalearning dlg aaai 2021 5 chiang weilin et al clustergcn an efficient algorithm for training deep and large graph convolutional networks proceedings of the 25th acm sigkdd international conference on knowledge discovery data mining 2019 the main concern with the paper is the lack of comparison with any other methods that sparsify the graph firstly the contribution here is a separate preprocessing step of the graph that uses edge pruning there are numerous approaches to
prune edges for example probabilistic methods that use the node attribute similarities to remove edges with negative correlations in the node attributes a graphical lasso approach and this is shown to improve performance when then using the pruned graph as side information in graph theory there are more methods for simply removing edges it is not clear why lsh is a better choice secondly there are other methods for sparsification of the graph for graph convolutional networks gcns that not only sparsify but improve the performance i think comparison with these existing gnn sparsification methods is important as the end goal of speeding up inference time while preserving predictive performance is the same with these comparisons it will better position the paper the area of sparsification of gnns is i believe an important one so i feel this work has a lot of potential but there is already work on this that should be compared
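To make the hashing scheme debated in these reviews concrete, here is a minimal random-projection sketch of LSH-based edge pruning: each neighbor's feature vector is hashed with k functions of the form h_i(x) = floor((x · w_i + b_i) / l), and for each node and each hash the neighbor with the smallest code is kept. This follows the reviewer's reading of the method; all names and constants are illustrative, and whether this selection rule actually qualifies as minhashing is exactly the correctness question raised above.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hashes(dim, k=4, l=1.0):
    # k random-projection hash functions h_i(x) = floor((x . w_i + b_i) / l)
    w = rng.normal(size=(k, dim))        # random projection directions
    b = rng.uniform(0.0, l, size=k)      # random offsets
    return lambda x: np.floor((w @ x + b) / l).astype(int)

def prune_neighbors(node_feats, adj, k=4):
    # adj: dict node -> list of neighbor ids; returns a sparsified adjacency
    h = make_hashes(node_feats.shape[1], k=k)
    pruned = {}
    for v, nbrs in adj.items():
        if not nbrs:
            pruned[v] = []
            continue
        codes = np.stack([h(node_feats[u]) for u in nbrs])   # (len(nbrs), k)
        keep = {nbrs[codes[:, i].argmin()] for i in range(k)}
        pruned[v] = sorted(keep)         # at most k surviving edges per node
    return pruned

# toy usage: 6 nodes with 8-d features and a small neighborhood map
feats = rng.normal(size=(6, 8))
adj = {0: [1, 2, 3, 4, 5], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2], 4: [0], 5: [0]}
print(prune_neighbors(feats, adj, k=2))
```

Because the hashes are fixed before training, the sketch also illustrates the recurring criticism that the pruning is a pure preprocessing step that does not interact with learning.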
### Summary:
this paper deals with the important practical problem of speeding up gnns although the proposed method based on lsh may be considered a rather simple preprocessing step it would be worthwhile to share the practical idea with the community provided the proposed method is shown to be effective enough however as pointed out by several reviewers there is concern that the experimental validation of this paper is not sufficient further and deeper validation would make this paper stronger
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper studies posttraining quantization and proposes to leverage interlayer dependency to improve the posttraining quantized network the authors introduce a number of approaches ar asoftmax improved initialization and amixup and verify the effectiveness of each part accordingly the experimental results demonstrate superiority over a number of previous strong ptq baselines across various network architectures and quantization configurations in summary although the paper is methodologically incremental compared with past literature the experiments are thorough and solid strengths the major strength for this paper should be the empirical results which outperform a number of strong baselines and push the limit of ptq performance on lowbit quantized networks the ablations are also thorough and complete the writing is clear and easy to follow weakness the paper is methodologically incremental as a large portion is built upon past efforts brecq and qdrop activation regularization resembles knowledge distillation a widely adopted approach in ptq approaches utilizing a calibration dataset asoftmax follows adaround and extends the range of discrete space in hv annealing mixup considers adding an annealing process into the mixture of quantized and fullprecision activations the authors improve the performance of posttraining quantized networks under lowbit configurations although there is still a large gap with the fullprecision model the empirical improvement in this paper is still an important step for future ptq research docsepbased on the interlayer dependency of quantization this paper proposes a networkwise quantization nwq approach nwq raises overfitting and discrete optimization problems in this paper activation regularization is introduced to handle the overfitting issue to solve the combinatorial optimization problem the authors use annealing softmax and annealing mixup the experiments show that nwq can achieve significant improvements over previous works strengths ptq is widely studied and the methods proposed in this paper outperform previous sota by a large margin the paper is easy to follow and the authors provide several ablation studies to show the effectiveness of proposed techniques weakness the paper shows some originality but is not novel enough for example annealing softmax and annealing mixup are largely based on adaround [32] and qdrop [41] the authors clearly state the difference and demonstrate the effectiveness of modifications but previous methods play a more important role in the discrete optimization problem besides the motivation and theoretical analysis of substituting rser with asoftmax are absent from figure 3 we can learn the efficiency of nwq however the cost of each iteration is not clear does it require less or more computationmemory than previous methods in each iteration limited technical contribution the proposed methods are largely based on previous papers for the new techniques no theoretical analysis is provided for example no theoretical analysis of asoftmax regarding the better performance compared with rser docsepthis paper proposes a posttraining quantization ptq technique that aims to search for a networklevel quantization policy compared to the previous works that only consider quantization policies on a layerwise level to tackle the challenges of solving a larger scale combinatorial optimization problem and the issue of overfitting the paper proposes two approaches activation regularization and annealing softmax and annealing mixup experiments show the
proposed approach achieved higher performance especially when quantizing networks to extremely low bits 1 the insight of this paper is straightforward but very sensible quantization errors can cancel each other and optimizing with a higher granularity can achieve better performance more importantly based on the insight the authors propose effective approaches that achieved this 2 the paper is clearly written the derivation and introduction of the techniques are explained very clearly 3 nice ablation study shedding light on the hyperparameter sensitivity of the proposed method 1 seems that the performance gain is not consistent the performance gain of the proposed method is significant for some configurations eg mobilenet with 2 3 bit quantization however the gain is quite small for others such as mobilenetv2 with 4bit quantization why is that is it simply because mobilenetv2 23 bit configurations do not have a strong baseline 2 the proposed method seems to contain too many hyperparameters from the granularity of ar to initialization scheme to annealing schedules etc how difficult is it to find a good configuration for a new model and new target bits docsep overall to avoid the overfitting problem in quantized networks the authors present several training strategies based on the previous posttraining quantization ptq methods such as layerwise activation fitting as proposed in brecq 1 annealing softmax as to enlarge the searching space of adaround 2 and annealing mixup amixup to smooth the training process of qdrop 3 they conduct thorough ablation studies and experiments on several mainstream networks to fully verify the superiority of the proposed methods reference 1 brecq pushing the limit of posttraining quantization by block reconstruction iclr2021 2 up or down adaptive rounding for posttraining quantization icml 2020 3 qdrop randomly dropping quantization for extremely lowbit posttraining quantization iclr2022 strengths the experimental results seem strong the authors conduct extensive experiments on imagenet with different bitwidth especially the ablation studies are quite clear which should be encouraged the technique part of this paper is well written i personally think this paper is simple and easy to follow annealing softmax seems new in the community of model compression weaknesses please refer to questions and limitations the main concern is the novelty of this paper comments on limitations as claimed in the abstract this process pays little attention to the dependency across the subnets hence is less optimal however the main idea has been rarely discussed in the current manuscript which makes the submission serve more like a technical report ie the descriptions and effect of different improvements the authors are encouraged to solve the major concerns in the final version
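As a concrete illustration of the annealing-softmax idea these reviews discuss, the toy sketch below learns per-weight logits over the two candidate rounding levels (floor and ceil) and anneals the softmax temperature so the soft combination hardens into a discrete rounding choice. Everything here (the layer-free objective, the schedule, and all constants) is an illustrative assumption, not the paper's implementation.

```python
import torch

def soft_quantize(w, logits, step, tau):
    # w: fp weights; logits: (n, 2) over {floor, ceil}; step: quantization step
    base = torch.floor(w / step)
    levels = torch.stack([base, base + 1], dim=-1) * step   # candidate levels
    probs = torch.softmax(logits / tau, dim=-1)             # softens via tau
    return (probs * levels).sum(-1)                         # soft rounding

w = torch.randn(5)
logits = torch.zeros(5, 2, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
for t in range(200):
    tau = max(1.0 * (0.97 ** t), 0.01)       # temperature annealing schedule
    w_q = soft_quantize(w, logits, step=0.25, tau=tau)
    loss = ((w_q - w) ** 2).sum()            # stand-in for the network loss
    opt.zero_grad()
    loss.backward()
    opt.step()
# as tau -> 0 the softmax saturates, yielding a hard per-weight rounding choice
print(torch.softmax(logits / 0.01, -1).argmax(-1))
```

In the actual method the reconstruction loss would be computed through the network on calibration data, which is where the overfitting concern and the activation-regularization remedy raised above come in.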
### Summary:
this paper studies posttraining quantization by proposing networkwise quantization nwq an endtoend quantization approach that takes into account relationships between layers rather than treating layers independently using this approach the paper demonstrates compelling empirical gains across a number of architectures and compression factors reviewers recognized the practical success of the approach as demonstrated by these empirical results and praised the clarity of the manuscript however there were concerns regarding the novelty of the approach and whether the proposed method is simply a composition of previous methods while i understand these concerns i think there is a significant delta between this work and previous approaches especially when taking into account the markedly improved performance and the challenges of determining how to apply these lines of thinking to endtoend training the authors also expanded their discussion of these works in their updated manuscript clarifying the differences there were also concerns regarding the hyperparameter tuning but the authors clarified in their response that the large majority of experiments used a constant set of hyperparameters suggesting that these results are not simply the effect of tuning altogether i think this paper makes an impactful contribution and will be a valuable addition to the conference
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposes several techniques to improve contrastive divergence training of energybased models ebms first the paper proposes to estimate a gradient term which is neglected in the standard contrastive divergence training method and shows that this correction avoids training instabilities in previous ebm training methods other techniques include using data augmentation defining the energy function as a sum of energies over multiscales and using reservoir sampling effects of each proposed technique on training ebms are evaluated the performance of the trained ebms on image generation ood detection and compositional generation is tested in general the paper is well written and addresses the important problem of improving ebm training but i have some concerns 1 it is not easy for general readers to understand the upper part of figure 2 which is said to illustrate the overall effects of the losses lcd and lkl what are the meanings of the red balls dark and light in the curve 2 the paper overlooks a class of competitive training methods which introduce auxiliary generators to train ebms including kim bengio 2016 kumar et al 2019 a and so on the comment in section 4 related work which describes these methods as utilizing pretrained networks to approximate portions of energy training is not correct not capturing the core idea of these methods although learning ebms without auxiliary generators is worthwhile exploring the paper needs to give the readers an overall picture of the stateoftheart of learning ebms and not give biased comments although the proposed method is somewhat new the results are not strong which weakens the contribution of this paper a achieved much better results than the proposed method in cifar10 table 1 additionally the computational cost of the proposed method should be given and compared to previous methods a y song z ou learning neural random fields with inclusive auxiliary generators arxiv 1806.00271 2018 sngan performs much better than the proposed method in cifar10 but much worse in celebahq and lsun this may confuse readers the reimplementation of an sngan 128x128 model using the torch mimicry gan library in celebahq and lsun may not faithfully reflect the performance of sngan 3 considering the above comment the following claim in this paper needs revision significantly outperforms past energy based approaches with approximately the same number of parameters 4 the paper has sporadic writing problems typo in eq3 divergenceterm lsun bedroom 5 in a4 comparison of cdkl gradient magnitudes it is said that the gradient of the kl objective is nonnegligible but it is this nonnegligible gradient term that stabilizes the ebm training more analysis is needed here on how the influences and relative magnitudes of both loss terms are calculated update after reading the response thanks for the authors response but some nontrivial concerns are still not adequately addressed 1 the inconsistent comparison results between sngan and the proposed method over cifar10 and lsun bedroom datasets 2 i can see the benefit such as compositionality from the proposed method of training ebms but the paper still seems to overlook the importance of giving the readers an overall picture of the stateoftheart of learning ebms table 1 should be expanded to include more stateoftheart results from ebms whether using auxiliary generators or not docsepthis paper proposed an improved version of contrastive divergence learning of energybased models by combining a bag of techniques 1 add back a kl term that is
neglected by previous methods 2 data augmentation 3 multiscale processing 4 reservoir sampling experiments demonstrate the effectiveness of the improvements pro the paper is wellwritten and easy to follow various experiments are performed to demonstrate the efficacy of the improved method cons 1 the advantage of adding the kl term is not quite obvious given the current experiments the only experiment that isolates the effect of the kl term is figure 8 stability of training which can be accomplished by simply adding spectral normalization for all the other improvements i tend to believe they are due to the techniques of 234 2 for the first term of lkl it is not entirely correct to take the gradient only over the last step of langevin sampling more justification is needed for the second term it requires computing on 1000 samples per update where the efficiency should be discussed 3 the multiscale processing of ebms has been explored in 1 which should be discussed and compared besides 234 are relevant references of training ebms that should be discussed 4 qualitatively speaking longrun chains in figure 7 still have a trend of degradation from realistic images quantitative analysis eg gelmanrubin statistics would be helpful for evaluating the longrun chains clearly overall the paper proposes effective improvements on contrastive divergence of ebms and performs various experiments to demonstrate the efficacy however i am concerned about the correctness and the necessity of adding the gradient term lkl which is one of the major contributions that the authors claim please address my concern as listed above 1 learning energybased models as generative convnets via multigrid modeling and sampling gao et al 2 a theory of generative convnet xie et al 3 flow contrastive estimation of energybased models gao et al 4 learning the stein discrepancy for training and evaluating energybased models without sampling grathwohl et al docsepthe paper proposes a series of new techniques to enhance the training of an energybased model and the proposed techniques include adding the often neglected kl term to the training scope data augmentation a multiscale energy function and an experience replay buffer for training the experiments demonstrate the proposed method could generate highquality images perform compositional tasks and perform outofdistribution detection the main idea and motivation are well and clearly conveyed by the writing the paper would be stronger if the authors could provide the following pieces how well is the entropy estimation we all know that estimating the entropy of a data distribution in a highdimensional space is very difficult is this form of nearest neighbor estimation applicable in other areas it would also be great if the authors could provide some theoretical analysis here while the main contribution of the paper seems to be the kl term added into the objective there are a few other techniques tagging along it is not clear what role each of these techniques plays in the experiments i recommend the authors to show an ablation study figure 9 would need to compare against other methods it is not clear to me how the arithmetic results are stronger than the other published results an important argument in the paper is that the added kl term enhances the mode coverage could the authors provide some more evidence on this point
introduces some useful techniques such as data augmentation multiscale energy design and reservoir sampling to improve the training of energybased models empirical studies are performed to validate the proposed learning strategy on the task of image generation ood detection and compositional generation strength the idea of dealing with the missing term in the traditional cd learning is important and relevant the paper is well written specifically the figure illustration and the organization of the paper make the paper quite easy to follow the motivation of the method is clear and the experimentation looks ok concerns the contribution of the paper is quite limited even though this paper tried to estimate the missing term in the cd learning it lacks a comprehensive analysis of the benefit and advantages of doing so for example 1 what is the cost to add such a term 2 can you validate theoretically that such a missing term can be helpful for mcmc mixing as you claimed in the paper about motivation even though the motivation of the current paper is clear which is to improve the cd learning the cd learning in equation 2 is biased compared with mle in equation 1 the original motivation for cd learning is to make ebm learning more efficient since currently there have been ebm training methods without mcmc or with amortized sampling i am not sure if the current method is still useful for the community about synthesis quality the synthesized images generated by the proposed method are not impressive artifacts can be obviously observed in figure 12 missing important references in related works the current paper failed to cite the pioneering paper about mle training of convnetebm 1 those ebm papers you have cited from 2019 are based on 1 or its variant incomplete narrative of the development of ebms in the introduction even though the narrative of the development of ebms is quite comprehensive it is not complete and even a little bit misleading for example since 2016 the ebms have been applied to realistic image generation 20162019 [1, 3, 4, 6, 7] video generation 2017 [2, 4] and 3d generation 2018 [5] in the community of computer vision therefore the current research direction seems not to be novel given the fact that authors might miss a lot of important developments about ebm made by other fields cd learning is also studied and discussed in 1 for deep ebm the current paper only discussed and connected ebm developments that happened recently in the ml community 2019 2020 typo in section 21 line 16 kl divergenceterm kl divergence term some references 1 a theory of generative convnet icml 2016 2 synthesizing dynamic pattern by spatialtemporal generative convnet cvpr 2017 3 cooperative learning of energybased model and latent variable model via mcmc teaching aaai 2018 4 cooperative learning of descriptor and generator networks ieee transactions on pattern analysis and machine intelligence pami 2018 5 learning descriptor networks for 3d shape synthesis and analysis cvpr 2018 6 learning generative convnets via multigrid modeling and sampling cvpr 2018 7 divergence triangle for joint training of generator model energybased model and inference model cvpr 2019
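For readers following the debate over the neglected term, the decomposition at issue can be written as below (notation assumed here: E_theta the energy, p_theta proportional to exp(-E_theta), q_theta the distribution of finite-step MCMC samples). This is a hedged paraphrase of the standard contrastive divergence derivation, not a quote from the paper, and sign conventions may differ from the paper's presentation.

```latex
% contrastive divergence objective as a difference of kl terms
\mathcal{L}_{\mathrm{CD}}
  = \mathrm{KL}\!\left(p_{\mathrm{data}} \,\|\, p_\theta\right)
  - \mathrm{KL}\!\left(q_\theta \,\|\, p_\theta\right)

% its gradient: the first two expectations form the standard cd update;
% the last term, which differentiates through the sampler distribution
% q_theta only, is the kl gradient that is usually neglected
\nabla_\theta \mathcal{L}_{\mathrm{CD}}
  = \mathbb{E}_{p_{\mathrm{data}}(x)}\!\left[\nabla_\theta E_\theta(x)\right]
  - \mathbb{E}_{q_\theta(x')}\!\left[\nabla_\theta E_\theta(x')\right]
  - \nabla_{\tilde\theta}\,
    \mathrm{KL}\!\left(q_{\tilde\theta} \,\|\, p_\theta\right)
    \Big|_{\tilde\theta=\theta}
```

Writing it out this way makes the reviewers' two objections precise: estimating the last term requires differentiating through (part of) the sampler and estimating the entropy of q_theta, which is where the last-step-only gradient and the nearest-neighbor entropy estimator questioned above enter.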
### Summary:
this paper introduces a bag of techniques to improve contrastive divergence training of energybased models ebms particularly a kl divergence term data augmentation multiscale energy functions and reservoir sampling the overall paper is well written and clearly presented in response to the major concerns from reviewers the ac recognizes the authors effort in expanding related work and adding ablation on the effects of the kl loss however reviewers remain unconvinced by the significance of the current results in particular the quality improvement by adding the kl terms is subtle compared to using reservoir sampling as evidenced in the contrast of the last two rows in table 2 moreover the authors are also encouraged to compare additionally with recent developments in ebms as pointed out by r2 r4 the ac does find the results on downstream tasks such as outofdistribution detection quite promising and interesting perhaps its worth expanding the discussion with formal reasoning on why the kl loss helps in this case all four knowledgeable reviewers are leaning towards rejection the ac respects and agrees with the decision
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
authors investigate the possibility of learning a generalized embedding function that captures salient and compositional features of sketches by directly imitating human sketches the manuscript is written clearly and concisely methods have been presented with enough detail and seem accurate particularly the results from the quickdraw and omniglot datasets showing generated sketches are rather impressive and the ones for the natural images seem promising overall i very much enjoyed reading the paper and suggest it for publication without any major changes in my view the results presented in figure 5 and especially 5c are the most impressive and interesting ones these results deserve more space in the manuscript i was curious to know whether there were also many unsuccessful conceptual composition examples are the examples shown in figure 5c the best ones or are they representative of performance in general does this approach also work with natural images to any extent could the authors elaborate on why or why not this may be the case docsepa brief summary this paper shows that the model trained to restore sequential data from images in a supervised manner tends to capture more informative latent representations of the data strengths demonstrates that training a decoder model to reconstruct sequential sketches leads the encoder to better represent the input image achieves the sota result in the omniglot recognition task compared against existing unsupervised methods weaknesses confuses readers with the ambiguous claim the embedding function is said to produce salient and compositional features p1 but no evaluations on compositions were included in the main paper does not include any ablation studies to show the effectiveness of each component of the proposed model does not include thorough explanations or analysis on each set of experiments initial recommendation borderline reject reasons while this paper provides a lot of experimental results which i very much appreciate i still found most of them quite irrelevant to support the main claim by the authors which discusses compositional embeddings the main contribution of this paper i believe is mainly the idea to utilize the sequential sketch data during the supervised training time this needs to be clearly stated particularly inside the tables as most baselines only use sketch images not their sequential data that being said this trick is shown to work well and if this significant improvement on omniglot is further verified this trick will be found useful by the community of researchers who work with sequential data such as sketches or handwriting however i found most of the implementation details quite unclear and experimental results were often misleading for instance the results from table 4 if we look closely the authors results use resnet12 trained with sketchy data which quite differs from other results in the list feedbacks authors briefly mention in p4 that a kl term did negatively affect the performance but no detailed explanationsexperiments were given what happens when you provide the class labels and train the model with classification errors as well will it improve the test result if so how much it may have been much more interesting if this paper explored the unsupervised disentanglement in latent space to support their claims for what they note as structured embeddings will other latent disentanglement methods eg betavae factorvae vqvae etc lead to better representations convvae alone may not be the best baseline docsep summary this paper
proposes learning embeddings for sketch or natural images by training a network that takes in a raster image and outputs a collection of sketch strokes the architecture consists of a standard cnn encoder followed by an rnn decoder the authors evaluate their learned embeddings on fewshot classification tasks and explore the quality of the latent space they demonstrate that they outperform unsupervised fewshot classification approaches and seem to obtain a latent space that is more aware of longrange structure than those from methods that operate purely in raster space explanation of rating interpreting and synthesizing sketches in a deep learning context is a promising research direction while the idea of focusing on sketchaware embeddings of images is an interesting one the main technical contribution simply involves taking a standard convolutional encoder with a recurrent decoder which has already been used for sketch generation sketchrnn in addition to this some of the claims made in the paper require some clarification or additional experimentation as i explain below thus i believe that some additions and changes must be made for the paper to be accepted pros i like the idea of using the vector structure of sketches to gain more insight into image content it is difficult to design deep networks that are able to operate on nonraster data in some sense this approach sidesteps this issue by allowing the relationship between raster and vector to be learned the experiments in section 6 confirm the idea that the embeddings are aware of highlevel longrange relationships that arent so obvious on the pixel level cons and questions i am not sure i follow the intuition behind why the proposed model achieves better semantic awareness than a convolutional vae the authors use the example of a six legged turtle and state that the vae would only retain information about a single leglike feature but not how many legs are present how is having an rnn strokebased decoder different from a standard convolutional decoder in this sense in both cases the final reconstruction must contain the original number of legs and so the latent vector is encouraged to retain this information the performance on natural images in figure 4 especially on unseen classes is not great i would be interested to see the nearest neighbor images in the training set for the examples shown even in the case of unseen classes the resulting sketches look like they might match a similar image from the training set the authors claim that balancing the stroke and pixel losses via a curriculum mirrors how humans learn to draw paintbynumbers however im not sure how some of the experiments fit into this methodology in particular most of the experiments are done on datasets that have ground truth sketch strokes but not their ordering eg svg files in this case it seems like imposing an order on the strokes and asking the decoder to replicate it is a counterintuitive constraint for the model on the other hand in the natural image experiments the pixel loss cannot be used at all some discussion of the consequences there would be interesting im not sure that it is fair to compare to fully unsupervised fewshot classification methods while the proposed method indeed does not use class labels the ground truth stroke information may provide considerably more information than just the raster image data perhaps it would help to have a baseline without stroke loss ie alpha 1 even the ablation study in table 3 does not include this case
how sensitive is the approach to the quality of stroke decomposition for instance what happens if you subdivide each stroke in the ground truth svgs the authors have addressed many of my concerns in the rebuttal and so i am increasing my rating docsepthis paper proposes a generalized sketch drawing model named sketchembednet for producing sketches and visual summaries of opendomain natural images the idea is interesting and the experimental results show sketchembednet is able to do not only fewshot classification but also oneshot generation overall i vote for rejecting in my opinion the main contributions of this paper are not very clear the introduced model sketchembednet has limited novelty in both the methodology and the network structure as stated in the title and introduction the authors aim to capture and generalize the compositional information from sketchnatural images section 4 reports the latent variables organization performance which is directly related to the key motivation i believe but the authors only compared sketchembednet with vae which is not enough to demonstrate their advantages moreover it is not clear why fewshot classification and oneshot generation performance in section 5 and 6 support their main idea thus this paper needs further improvements detailed comments 1 in the first paragraph of section 2 the authors claimed that the cnn embeddings must preserve a compositional understanding of the input sketchesimages to improve the performance in their pix2seq task so how did you preserve the information many sketch synthesis methods such as 8 in the reference can reconstruct the sketch with a sketch image input do these methods preserve the compositional understanding 2 still in the same paragraph im pretty sure a vanilla autoencoder with a cnn encoder containing average pooling layers can reconstruct a sixlegged turtle well as input thus the information about the positions and the turtle legs number must be transported to the decoder by the latent embeddings so why do you confirm that a regular cnn embedding cannot preserve that information after an average pooling layer 3 the same cnn encoder is used for both natural images and sketches as these two types of images have totally different patterns many recent studies such as reference 55 used a twobranched encoder for natural images and sketches respectively in this paper the cnn backbone is a 4layer cnn or a resnet12 which are very basic structures is it able to extract the image features well 4 figure 5 shows sketchembednet outperforms vae on latent space organization in my opinion the wellorganized latent space of sketchembednet is mainly due to getting rid of the kl loss term which drives the latent distribution to be a uniform distribution i would like to see the comparison with sketchpix2seq which is the reference 8 as both sketchembednet and sketchpix2seq do not use a kl term in training 5 as this paper focuses on sketch drawing why is there no comparison between sketchembednet and other sketch generative models such as 8 11 15 26 in the references after reading the response from the authors we raise our score by 1
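To make the architecture these reviews describe concrete, here is a minimal sketch of the image-to-stroke-sequence model: a CNN encoder embeds the raster image and an RNN decoder, conditioned on that embedding at every step, emits pen strokes (dx, dy, pen state) under teacher forcing. Layer sizes and names are illustrative guesses, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class ImageToStrokes(nn.Module):
    def __init__(self, z_dim=128, hidden=256, stroke_dim=3):
        super().__init__()
        self.encoder = nn.Sequential(           # raster image -> embedding z
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, z_dim),
        )
        self.decoder = nn.LSTM(stroke_dim + z_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, stroke_dim)  # predicts (dx, dy, pen)

    def forward(self, img, strokes):
        # teacher forcing: condition every decoder step on the embedding z
        z = self.encoder(img)                              # (b, z_dim)
        z_rep = z.unsqueeze(1).expand(-1, strokes.size(1), -1)
        out, _ = self.decoder(torch.cat([strokes, z_rep], -1))
        return self.head(out)                              # next-step strokes

model = ImageToStrokes()
img = torch.randn(4, 1, 64, 64)   # batch of rasterized sketches
seq = torch.randn(4, 50, 3)       # ground-truth stroke sequences
pred = model(img, seq)            # (4, 50, 3); trained with a stroke loss
```

Because the supervision target is the stroke sequence rather than pixels, the embedding z must carry the global structure needed to redraw the image, which is the mechanism the reviewers credit for the more informative latent space (and, as one review notes, it also means the method consumes more supervision than purely raster-based baselines).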
### Summary:
description the paper presents a generative model sketchembednet for classagnostic generation of sketch drawings from images they leverage sequential data in handdrawn sketches results show this outperforms sota on fewshot classification tasks and the model can generate sketches from new classes after one shot strengths detailed technically sound presentation shows that enforcing the decoder to output sequential data leads to a more informative internal representation and thus generates better quality sketches improves over unsupervised sota methods weaknesses experiments are done against methods that do not use the sequential aspect of sketches because groundtruth in this case contains much more data comparison is not quite fair it would have been useful to see results against a baseline that uses it quality of sketches generated from natural images is low
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper is a survey on 15 datasets related to us criminal justice it describes the us justice pipeline in a flowchart and sorts the datasets according to it the authors propose a datasheet for each dataset which summarizes relevant information on the dataset data collection motivation uses distribution and maintenance they highlight that there are parts of the justice pipeline to which no dataset corresponds and discuss domainspecific difficulties in the data collection process 1 the paper formalizes the us justice pipeline as a flowchart and uses it to link the datasets to relevant parts of the pipeline which makes it easy for researchers to choose a dataset 2 a datasheet is associated with each dataset providing a very practical and useful summary of each dataset 3 a gap in coverage of the justice pipeline by existing datasets is identified which could guide the collection of future datasets although the authors note it is difficult 4 political and social implications of the availability of such datasets are quickly discussed 1 although it is central to this kind of data the ethical discussions are a bit limited section 6 only contains one sentence about it and the datasheets do not seem to contain any information regarding these questions 2 the authors provide download links for the datasets but i would have liked to have a unified method that downloads the datasets automatically making the use of multiple criminal justice datasets practical for researchers 3 the criminal justice pipeline described in figure 1 does not seem to be supported by any reference i believe there are two possibilities either it is a wellknown pipeline then i would appreciate a link to relevant previous works or it is a contribution of the paper in which case reference to eg the law could be nice but i admit i am no expert of the us criminal justice it also looks impossible to escape the pipeline after being charged i would imagine that after probation one can be free but it does not appear on figure 1 4 the authors mention many datasets but do not include data sheets for each of them what motivates the choice between the datasets chosen for datasheets and the others 5 some datasets contain multiple data records about one given individual authors mention that partial criminal paths can be reconstructed from these datasets it is not clear how many of these partial paths can be reconstructed for all records 10% of records 1% 6 the authors mention that different data collection processes lead to different kinds of unfairness in the datasets they claim that bias may be present at each step of the pipeline but fail to propose any even very basic statistics to support this claim 7 i find figure 1 to be a little too strict are the authors sure that their association between one dataset and one part of the pipeline is correct i am worried that someone could find other uses for these datasets either by applying new methods on them or by combining multiple datasets that are gathered in the survey
will all be considered this piece reviews an important area of criminal justice datasets these datasets are used in a large body of fairness literature the work provides a survey of relevant criminal justice datasets and supplements these datasets with datasheets this is important and relevant to the neurips community and provides a nice survey while adding additional value in the form of datasheets there is a nice synthesis of information from the authors survey in the discussion section this also adds value to the survey completeness of the relevant documentation for datasets sufficient detail must be provided on how the data was collected and organized what kind of information it contains how it should be used ethically and responsibly as well as how it will be made available and maintained for benchmarks best practices on reproducibility should be followed the main contribution of this work is documentation so it certainly has that accessibility and accountability for datasets there should be a convincing hosting licensing and maintenance plan there is a github repo for maintaining and updating datasheets utility and quality of the submission impact originality novelty relevance to the neurips community will all be considered this isnt the most novel or original work since it is completing datasheets for datasets that have been examined by a number of works in the past this piece cites many such related works however no other work is as comprehensive in its survey and no work systematically creates datasheets so concerns of novelty are somewhat minor the discussion on biases could be strengthened at the point where it is discussed it feels like an afterthought but there are significant problems here beyond just implicit biases for example earlier there is discussion of how hispanics are often reported as white this would lead to significant problems with biases and errors in analyses in downstream models especially for things like fairness by accounting for demographics this discussion could be expanded to touch on more of the findings from the authors exploration that might affect models or analyses similarly much of the discussion makes it seem like omitting information is a strictly bad thing for example the authors seem to lament that victim information is not included to conduct analyses with but these datasets require significant care in taking privacy considerations into account more engagement with these privacy considerations would strengthen the paper completeness of the relevant documentation for datasets sufficient detail must be provided on how the data was collected and organized what kind of information it contains how it should be used ethically and responsibly as well as how it will be made available and maintained for benchmarks best practices on reproducibility should be followed some of the datasheets are not particularly detailed if the contribution of the work is the datasheets i would have expected a bit more for example for the cpii datasheet the question what data does each instance consist of has the response as the data is compiled from 27 different sources each source has a different set of variables all sources report on the date time and location of the crime as recorded and the type of the offense i wouldve expected a table or breakdown here to make this information up front similarly on this same page there are some broken references i would suggest a pass over the supplementary datasheets to add more detail to spots like this and fix broken
refs accessibility and accountability for datasets there should be a convincing hosting licensing and maintenance plan if the goal is to raise awareness of these datasheets and the underlying gaps for these datasets i would suggest creating a project webpage to host them in a way thats more prominent than as pdfs in a github repo this will raise the impact make it more accessible and might get others to update the datasheets as new information is identified such as for the questions regarding whether the data is being used already

the authors conduct a survey of criminal justice datasets their main contributions are a discussion of these datasets in the context of the full criminal justice pipeline and a public repository containing datasheets for 15 selected datasets this is a wellwritten paper on an important topic papers like this will become increasingly important in the realm of machine learning datasheets for datasets and specialized repositories are essential for responsible data use the survey is thorough and thoughtfully done i think the paper could benefit from an expanded discussion of related work and misuse of these datasets

in this paper the authors surveyed datasets from the national archive of criminal justice data nacjd and compiled a list of datasets arising from a variety of points in the criminal justice pipeline for 15 of these datasets the authors created publicly available datasheets using a template that they updated from the original in order to better suit criminal justice datasets the authors briefly describe each of the 15 datasets in the paper and create two index tables that summarize for each dataset 1 the type of criminal justice information and demographics covered and 2 the size composition maintenance and license information finally the authors discuss challenges in working with criminal justice datasets and illustrate these points using examples from the 15 surveyed datasets 1 it is in the best interest of the research community among others to broaden which datasets are used when studying criminal justice rather than focusing evaluations on a few datasets eg compas with this paper the authors have initiated this process by shining a light on 15 potential new datasets which are already publicly available 2 the 15 datasets are thoughtfully organized and presented in particular i found mapping the datasets onto the pipeline figure 1 to be a useful tool for getting quickly acquainted with the datasets and it nicely complements tables 1 and 2 3 in creating the index tables and a new datasheet template tailored to criminal justice datasets the authors have initiated the important discussion about what metadata should accompany criminal justice datasets and how these metadata questions might be standardized for describing new criminal justice datasets going forward this is an important discussion for datasets of any field but is particularly challenging and important for criminal justice datasets in which context is often not properly considered 4 the paper is wellwritten and should be easy to understand for a lay machine learning audience major points 1 since the authors are bringing 15 criminal justice datasets to the attention of the ml community it seems important to discuss in the paper why introducing these criminal justice datasets is beneficial to the ml community and to society to this point i feel it is important to answer the following questions how does providing the datasheets improve how these datasets can be used in ml beyond the
official documentation does introducing these datasets help alleviate some of the existing problems with the use of criminal justice datasets in ml or will this just extend current problems to new datasets 2 in the introduction the authors state we give broad context to the datasets draw out potential uses and discuss gaps and limitations while the paper does address the first and third points it does not in my opinion adequately address potential uses for these datasets questions iiia through iiid in the datasheets provide some information on potential uses but for a general machine learning audience it seems important to broadly discuss in the main paper how these 15 datasets should be used by the ml research community should they primarily be used to investigate the criminal justice pipeline as most of the uses in the datasheets seem to indicate should these datasets be used as benchmark datasets for testing out new methods which are not necessarily tailored to criminal justice applications 3 i found myself quite curious about the updates made to the datasheet template as described in section 2 this seems like an important contribution of this paper but it is not highlighted as such what gaps in the original template did these updates fill what unique challenges do criminal justice datasets pose to metadata documentation 4 it remains a bit unclear to me how the authors envision researchers using section 4 which is nearly three pages dedicated to short descriptions of the 15 datasets is the intention that these descriptions provide a quick introduction to a dataset and if interested one should then go to the datasheet for more information the combination of figure 1 and tables 1 and 2 seems quite useful for this purpose in and of itself the benefits of including these short descriptions in the main paper vs in the supplementary material are not clear to me 5 while section 3 provides necessary context in particular for figure 1 i am not wellversed in this area and find it troubling that this section does not have any references minor points 1 in the introduction one of the papers highlighted contributions is stated as reporting on 30 datasets however its not clear to me that this is really the case throughout section 4 other datasets are mentioned in addition to the main 15 but these are not included in the index tables nor in figure 1 and are not provided datasheets while i do see the usefulness in directing readers to other potentially relevant datasets in section 4 i found myself a bit confused in particular moving from sections 1 to 2 as to where the 30 vs 15 datasets were coming into play now after reading the entire paper i dont feel that this is one of its contributions ive noted this as a minor point because there is actually fairly little mention of the 30 datasets and removing mentions of this would seem to affect very little of the paper 2 related to the previous point it is still a little unclear to me how the authors arrived at the 15 datasets for which they created datasheets since there seem to have been 30 datasets which met the mentioned inclusion criteria why these 15 datasets are these 15 datasets useful for an ml audience in particular 3 figure 1 just want to verify that the colors in this figure correspond to the stages as mentioned in table 1 it may be useful to indicate this somewhere on the figure 4 section 7 typo i believe trough should be through 5 table 2 inconsistent capitalization in the geographic resolution and maintained columns

the paper presents a collection
of data sources on the us criminal justice system with the goal of providing researchers with an overview of available data sources datasets stem from a variety of fields from crime reports to jailprison sentencing the authors furthermore provide datasheets for 15 selected datasets which provide a standardized structured resource for accessing relevant data characteristics my major complaint would be the lack of a maintenance schedule or route towards updating datasheets and the collection in general this does not only prevent collaboration with other researchers but also results in the paper being an immutable artefact which in my opinion is not adequate for such a collection of datasets if this complaint were addressed i would recommend acceptance of the paper the paper makes an important contribution to the field of criminal justice datasets by collecting and systematizing a variety of available datasets the survey seems to be exhaustive within its limited domain of us datasets and provides a good overview of relevant data sources the authors provide datasheets for datasets on 15 datasets which provide a comprehensive and structured resource that answers many of the relevant questions wrt those datasets the authors discuss the funnel arising from the sequential decisions made in the court system and provide an overview of procedures that typically lead to this funnel this provides a good understanding of the structure of the different data sources the paper discusses a series of relevant limitations to the collected data raising awareness for potential use cases i think the paper would make an even stronger case if the results were presented in the form of a website or github repository enabling easier navigation this would also allow other investigators to contribute eg via pull requests and allow for updating resources if eg errors are found in the datasets or when new datasets become available i would strongly suggest further work in that direction in their current version the datasheets are an artefact for which maintenance and updates are unclear which i would consider a major drawback of the paper improving in this direction could be done by 1 including a statement regarding maintenance and updates 2 uploading latex sources that enable updating the datasheets 3 uploading a latex template for similar datasheets 4 outlining how people can contribute new datasets to this collection how datasheets can be updated and the criteria for such changes 5 providing a good overview improving the readme in the existing github repository

the authors present an analysis of criminal justice related data sets i think this is an important topic and i appreciate the degree to which they are exploring pros and cons of these data sets overall this seems to be a valuable resource i have some concerns about the completeness of the model however this is a good summary of many diverse data sets collecting information that can be used to make informed choices about how to use this data however i have some concerns my biggest concern is the presumption in their model that a crime was committed the criminal justice system is imperfect and not everyone is convicted nor is everyone who is convicted guilty do no data sets address this issue even if that is the case given that the authors wish to identify gaps in the data sets why is that not represented in figure 1 outside of the word acquittal in addition the model is incomplete for example the list of outcomes of pretrial hearings is incomplete as people may be
released but required to wear an ankle monitor just as an example i am not an expert in this area so there may be other outcomes that are missing too that i am unaware of further the order of operations shown is not accurate to actual criminal justice experiences for example many people have to pay fines even if they are not convicted for example fees for required ankle bracelet surveillance between charging and conviction or paying back a loan for bail money my neighbor was forced to pay a fine simply for appearing in court even though she was not charged in the end another thing that seems to be missing from the model provided is the type of defender public or private for example that participants had access to or even whether when and how such a person was assigned
### Summary:
the paper introduces a set of criminal justice datasets to the machine learning community surveying 30 datasets and creating datasheets for 15 of them reviewers appreciated that the paper raises awareness of these datasets in the ml community and the documentation work that the authors have contributed there were two main concerns inadequate discussion of ethics and lack of detail on how the ml community could work with these datasets the authors have addressed the first concern in a revision and partially addressed the second concern
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper presents an investigation of perplexityefficiency tradeoffs in deploying a qrnn neural language model to mobile devices exploring several kinds of weight pruning for memory and compute savings while their primary effort to evaluate pruning options and compare points along the resulting tradeoff curves doesnt result in a model that would be small and fast enough to serve the authors also introduce a clever method singlerank weight updates that recovers significant perplexity after pruning but there are many other things the authors could have tried that might have given significantly better results or significantly improved the results they did get the topline 40 savings for 17 perplexity increase seems fairly weak to me in particular the qrnn architecture contains two components convolutions alternating with a recurrent pooling operation the fact that the authors report using a pytorch qrnn implementation which runs on the arm architecture but doesnt contain a fused recurrent pooling kernel for any hardware other than nvidia gpus makes me afraid that they used a nonfused opbyop approach for the pooling step which would leave potentially 10 or 20 percentage points of free performance on the table the qrnn architecture is designed for a situation where you already have optimized matrix multiplyconvolution kernels but where youre willing to write a simple kernel for the pooling step yourself at the end of the day pooling represents a tiny fraction of the qrnns flops and does not need to take more than 1 or 2 percent of total runtime on any hardware if you demonstrate that your implementation doesnt spend a significant amount of time on pooling im happy to bump up my rating i think this is a central point thats critical to motivating qrnn use and deployment once pooling is reduced to 2 of runtime improvements in the convolutionmatmul efficiency will have increased effect on overall performance perhaps your pruning mechanisms improved matmul efficiency by 50 but the fact that youre spending more time on pooling than you need to has effectively reduced that to 40 although the engineering effort would be much higher its worth considering blocksparse weight matrices as described in narang et al baidu and gray et al openai while this remains an underexplored area its conceivable that blocksparse kernels which should be efficient on arm neon with block sizes as low as 4x4 or so and blockwise pruning could give more than a 50 speedup in convolutionmatmul efficiency in a realworld application you would probably also want to explore quantization and distillation approaches to see if they have additional efficiency gains overall results of 10x or more wall clock time reduction with 5 loss in accuracy are typical for domains that have seen more optimization for mobile deployment especially mobileoptimized cnns like mobilenet so i think thats entirely possible for your application

this paper proposes to evaluate the accuracyefficiency tradeoff in qrnn language models through pruning the filters using four different methods during evaluation it uses energy consumption on a raspberry pi as an efficiency metric directly dropping filters makes the accuracy of the models worse the paper then proposes a singlerank update sru method that uses a negligible amount of parameters to recover some perplexity i like that this paper focuses on model performance on realworld machines 1 the proposed approaches just work for qrnn but not for many other neural language models such as lstm and vanilla rnn language models the
title could be misleading 2 in the experiment section i think one baseline is needed for comparison the qrnn language model with a smaller number of filters trained from scratch with this baseline we can see if the large number of filters is needed even before pruning

in this paper the authors investigate the accuracyefficiency tradeoff for neural language models in particular they explore how different compression strategies impact the accuracy and flops and more interestingly also how they impact the power use on a raspberry pi the authors consider the qrnns and srus for this purpose and use standard datasets for their analysis i am torn about this paper on one hand i feel that the analysis is interesting thoughtful and detailed the power usage statistics bring a different perspective to the compression community the section on inference time pruning was especially interesting to read on the other hand however there is limited novelty in the setup the authors use standard wellknown compression algorithms on common neural language modeling architectures and datasets and use outofthebox tools for their ultimate analysis further the paper needs additional work before it can be accepted in my opinion i detail my arguments below the authors begin by discussing swiftkey and similar apps but im not sure if its clear that they use neural language modeling as the backend do the authors have a source to validate this claim knowledge distillation is another algorithm that has been found to be quite competitive in compressing models into smaller versions of themselves have the authors experimented with that the r3 is a good metric but i suggest that the authors look at mean reciprocal rank mrr instead this removes the arbitrariness of 3 while ensuring that the metric of interest is the accuracy and not probability of being correct perplexity can you comment on the sensitivity of the results to the rpi frameworks for instance the rpi deployment tools architecture and variance in the predictions along the same line im curious how generalizable the rpi results are for other computing architectures for those of us who are not experts on hardware it would be nice to read about whether similar tradeoffs will exist in other architectures such as mobile phones gpus or cpus could the authors add some metaanalysis about the results if the perplexity goes up as a consequence of compression what kinds of tokens is that localized to is it primarily rare words that the model is less confident about or are the probabilities for most words getting skewed finally i feel that such an exploration will catch on only if the tools are opensourced and made easy to replicate and use if there were a blog or article summarizing the steps needed to replicate the power measurement including sources from where to buy the necessary hardware more people would be inclined to add such an analysis to future neural language modeling work i am willing to revisit my rating as necessary once i read through the rebuttal update after reading the rebuttal i am increasing my score to 6 the authors alleviated some of my concerns but my major concerns about their novelty and the impact of their results remains
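to make the fused pooling point in the first review concrete here is a minimal numpy sketch of the standard qrnn fpooling recurrence this is the textbook recurrence from the qrnn literature rather than code from the paper under review and all names and shapes are illustrative assumptions the recurrence is purely elementwise which is why it should take a negligible share of runtime once fused

```python
import numpy as np

def f_pool(z, f, h0=None):
    """f-pooling recurrence of a QRNN: h_t = f_t * h_{t-1} + (1 - f_t) * z_t.

    z, f: arrays of shape (seq_len, hidden_size), outputs of the
    convolution layer, with f already squashed into (0, 1) by a sigmoid.
    Returns the hidden states, shape (seq_len, hidden_size).
    """
    h = np.zeros(z.shape[1]) if h0 is None else h0
    out = np.empty_like(z)
    for t in range(z.shape[0]):
        # purely elementwise work, O(seq_len * hidden_size) in total,
        # tiny next to the convolutions' matrix multiplies
        h = f[t] * h + (1.0 - f[t]) * z[t]
        out[t] = h
    return out
```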
### Summary:
the area chair agrees with the authors and the reviewers that the topic of this work is relevant and important the area chair however shares the concerns of the reviewers about the setup and the empirical evaluation having one model that can be pruned to varying sizes at runtime is convenient but in practice it is likely to be ok to do the pruning at training time in light of this the empirical results are not so impressive without quantization distillation and fused ops the value of the empirical results seems questionable as these are important and wellknown techniques that are often used in practice a more thorough evaluation that includes these techniques would make the paper much stronger
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper empirically examines the question of whether graph neural networks can learn discontinuous rigid contact dynamics specifically the gns 1 architecture is studied on the following discontinuous contact dynamics tossing a cube to the ground in simulation with different stiffness parameters tossing a cube to the table in the real world and tossing objects with different geometries to the ground in simulation the experimental results show clear contrast to previous conclusions that it is difficult for gnns to learn such rigid contact dynamics instead learned gnn models can perform well on such dynamics even better than tuned analytic physics simulators 1 sanchezgonzalez et al learning to simulate complex physics with graph networks icml 2020 strength the question itself is interesting and the results help shed more light on the answer gnns can indeed learn discontinuous contact dynamics well on the tested tossing tasks the ablation studies are complete and provide a good understanding of the important factors for gnns to perform well weakness my major concern is whether the result is really surprising and makes enough contribution for a corl paper in the original dpi 1 paper the learned gnn model already demonstrates correct behaviour ie non penetration and correct predicted cube pose on modeling the contact interaction between the rigid cube and the water particles the boxbath experiment and in the original gns 2 paper the gnn model also demonstrates correct behaviour ie non penetration and correct predicted particle positions between particles colliding with environment obstacles the waterramps experiment although these are not strictly rigid objects colliding with another rigid object i would not be surprised that the gnn model would be good at handling the nonpenetration part of rigidtorigid collision based on the results from these previous papers with this being said these previous papers did not study if gnns can learn the sensitivity to initial conditions for rigidtorigid collision which is clearly demonstrated in this paper but with this conclusion alone i am not entirely sure it is enough contribution for a corl paper also most of the experiments consider tossing a rigid object into another there are many more rigid contact dynamics other than this eg a quadruped robot doing locomotion on the ground a robotic hand manipulating a rigid object conducting some experiments in these more complicated scenarios would make the result more convincing and stronger another minor question regarding figure 4 how is the sysid done for the physics simulators more specifically how many realworld trajectories are used to tune the simulation parameters i would imagine that with more realworld data used the parameters can be tuned more accurately ie the performance of the simulator in figure 4 should also be a curve instead of a horizontal line more details to clarify this would be appreciated 1 li et al learning particle dynamics for manipulating rigid bodies deformable objects and fluids iclr 2019 2 sanchezgonzalez et al learning to simulate complex physics with graph networks icml 2020

this paper proposes to use a graph neural network gnn similar to the meshbased graph network proposed in 3 to learn rigidbody dynamics with contact in 3d following 3 each mesh vertex of the rigid body is modeled as one node of the gnn and each edge of the mesh as one edge of the gnn the authors argue that using spatialequivariant relative displacement between nodes as in 3 and predicting acceleration rather than
velocity same as 3 in the cloth case are important to achieve high accuracy the method is validated both with simulated datasets 10 and a real dataset 2 and outperforms recent baselines the strengths and weaknesses of the paper are both quite clear strength simple algorithm and network structure without heavy engineering and tuning good results on both simulated and real datasets that outperform baseline learning methods weaknesses lack of novelty not really a weakness in my opinion if gnn works well already out of the box that is worth sharing with the community but i have to say the method and implementation details are both quite similar to 3 lack of insight its hard for me to intuitively understand why gnn works much better than contactnet and naive learning methods using eg grus there is a diagnostic example sec 41 but it does not provide hypotheses or theories about why gnns are better at modeling discontinuities than other neural nets either

the paper presents a learningbased approach to predicting object dynamics using graph neural networks object meshes are encoded using the finite difference of the mesh vertex velocity over a short history and distance to a boundary eg floor and edge features are relative displacements between nodes message passing is used to update node and edge features and the network predicts finite difference accelerations for every node extensive experiments are provided on simulated and real datasets of simple objects being tossed and comparisons are made to both learned baselines and common physics engines used in robotics the results suggest the graph network approach outperforms these baselines in terms of accuracy of predictions with relatively little training data strengths the paper was clearly written and enjoyable to read the representation of meshes in the graph network is intuitive and figure 2 was quite helpful in quickly understanding what was being proposed the experiments are very thorough in that they compare to a variety of relevant baselines on a variety of datasets the expected ablations on the proposed method are performed and provide some insight on the necessity for the different components incorporated in training the gnn the authors identify many of the relevant shortcomings of the method including making predictions for more complex objects and multiobject scenarios weaknesses while the paper is relevant to robotics there is minimal involvement of robots in the paper even the realworld dataset used from the contactnet paper appears to just have a human tossing the cube i find this limiting for robotics because then there is no action modality associated with any of the data meaning it will be hard to learn predictive models for planning and control there is also limited discussion on uncertainty and stochasticity in the dynamics which is particularly important for tossing objects for example tossing the object 10 times in the exact same way will have a different outcome all 10 times ie theyll vary at least a little bit due to slight variations in the execution and minute differences in the interactions with the environment these sorts of dynamic manipulations are very sensitive to these subtle differences and can sometimes produce drastic differences in outcomes particularly for more dynamic manipulations like dropping from higher heights eg an object dropped on a table might bounce and come to rest on the table or fall off onto the floor even when dropped from the same height in the same way these dynamics are not fully predictable
without infinite modeling precision which is of course not possible so it seems important to address matters of uncertainty in the predictions and is worth discussing in more depth in the paper this is also related to the first point about involving robots one can control the actions to be more repeatable using a robot which should make quantifying the uncertainty more feasible on a realworld setup instead of having a human tossing objects for data collection i am also curious how the approach can be applied to partialview models which is a very common situation in robot manipulation the robot may only have a partialview point cloud of the object and not have access to a ground truth mesh

the paper investigates the applicability of generalpurpose graph neural network simulators for modelling discontinuous dynamics similar to existing work the proposed method learns to predict the dynamics of a given mesh using a graph neural network that is trained with a onestep prediction loss the resulting model is evaluated on different simulated and realworld datasets performing on par with or better than commonly used simulators even when trained on relatively few samples this result contradicts the previously held notion that this kind of learned simulator is unable to model discontinuous dynamics and thus opens various avenues for further research strengths the paper shows that generalpurpose graph neural network simulators can be used to model simple discontinuous dynamics more precisely it shows that a slight variation of the previously proposed meshgraphnet 1 architecture can accurately model simple rigid contact dynamics up to a precision that matches and sometimes surpasses that of commonly used solvers this is an interesting finding as it contradicts the previous belief that learned generalpurpose simulators are illsuited to model discontinuous dynamics the meshgraphnet architecture is adapted to rigid body simulation by employing a shapematching projection during the rollout phase while the performance improvement from this additional step is minor in the measured metrics this shapematching is interesting as it ensures that the simulated rigid bodies cannot deform over the course of a rollout the experiments are thorough and both model architecture and chosen training hyperparameters are clearly explained in the text experiments are repeated for 10 seeds and confidence intervals are reported different parts of the proposed method are explored through a series of ablations the paper has a clear story and is well written the figures are clean and complement the text nicely 1 pfaff t fortunato m sanchezgonzalez a battaglia p 2020 september learning meshbased simulation with graph networks in international conference on learning representations weaknesses while the results are interesting and promising the proposed method heavily builds on previous work and thus lacks novelty section 31 essentially reintroduces 1 with some minor changes here and there the experiments are rather limited in scope while they show a clear proof of concept it would be interesting to see how well the method works for more complex shapes very few baselines are considered in the experimental section making it hard to directly compare the approach to existing work this is somewhat alleviated through various ablations and comparisons to classical solvers while most design decisions are ablated over some interesting deviations from the meshgraphnet paper remain unexplained for example the message passing block is
iterated over k times where k=1 in previous work
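the shapematching projection discussed in the last review is not spelled out there so as a hedged illustration the sketch below shows one standard way such a rigidity projection is commonly implemented the kabsch algorithm this is a minimal numpy sketch under that assumption with illustrative names and it may differ from what the paper actually does

```python
import numpy as np

def rigid_projection(rest_verts, pred_verts):
    """Project predicted mesh vertices onto the nearest rigid transform
    of the rest shape (Kabsch algorithm), so a rollout cannot deform
    the body. Both arrays: shape (n_vertices, 3)."""
    mu_rest = rest_verts.mean(axis=0)
    mu_pred = pred_verts.mean(axis=0)
    a = rest_verts - mu_rest             # centered rest shape
    b = pred_verts - mu_pred             # centered prediction
    u, _, vt = np.linalg.svd(a.T @ b)    # SVD of the 3x3 cross-covariance
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T  # guard against reflections
    return a @ r.T + mu_pred             # rotated rest shape at predicted centroid
```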
### Summary:
this paper provides empirical studies to show that a gnn architecture can provide better contact behavior modeling for rigid bodies than other deep learning architectures and sometimes can be better than physicsbased methods all the reviewers agree that the paper poses a powerful and interesting hypothesis and showed convincing evidence that it is true and agree that the work is highly relevant to robotics many of the reviewers praise the clarity of the presentation and thoroughness of the empirical studies several reviewers appreciate the simplicity of the architecture the reviewers also noted some concerns but the authors addressed these concerns well in their rebuttal however the authors are highly encouraged to open source their code for the final paper for the sake of reproducibility and to encourage other researchers to build on their work in summary this work is a welcome addition to corl and will give valuable insights into the potential for gnnbased contact physics modeling in robotics in the future
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposes a method to model original texts and similar texts in a graph structure for language modeling with graph neural networks in the graph the nodes are texts or similar contexts and the edges are connections between the nodes the new model achieves the new stateoftheart on wikitext103 and shows substantial improvements on other language modeling datasets such as the one billion word and enwiki8 datasets strength the authors propose an interesting hypothesis that referring to the training data could be helpful for language modeling and they showed that the method is able to make considerable improvements over the vanilla lm the method achieves the new sota on wiki103 which is impressive weaknesses the paper lacks a discussion part about the actual overhead for retrieval and the time overhead for running the model seems to be significant 820x slower some hyperparameters seem to be chosen quite arbitrarily like l and r maybe the authors could provide more insights as to why they chose such a small number does it affect the efficiency or the performance much the paper proposes a retrieval augmentation to the language models and uses gnns on top of a graph structure of the original input and its neighbors and shows that it consistently improves over the vanilla lm model the empirical results are good and the authors provide examples showing that the retrieved examples indeed help prediction in language models the only concern is that through modeling additional neighboring contexts the method introduces significant overhead in running time

this work builds a novel gnnlm to do language modeling by using global context information the proposed model is novel and quite different from previous lm structures this work in my view draws the connection between the traditional ngram language model and the neural language model the overall performance is quite impressive in all standard lm datasets extensive ablation study is conducted to understand the model i would recommend accepting the paper based on the novel idea to build a heterogeneous graph to do lm and the impressive model performance on ppl this work further extends knnlm to utilize not just next tokens but all neighborhood information to get global context this good combination of gnn and lm can be valuable to the community the improvement on ppl also shows the importance of using global knowledge the heterogeneous gnn is standard i would like to know if the authors have thought about designing a specific graph structure or avoiding intercontext edges when building the graph or considering is and are as the same node when building the graph in fig 1 i would consider knnlm a special version of gnnlm then why can adding knn greatly further improve the model performance overall based on the novel idea of creating a global context graph gnnlm and the significant improvements on all lm datasets i would like to recommend accepting this paper

the paper presents a gnn based language model where neighbor contexts are retrieved encoded via a graph neural network and used to enhance generation evaluation on three benchmarks indicates that the proposed approach can outperform a bunch of baseline models contributions 1 a new retrievalaugmented language model implemented via gnn techniques 2 improvements over stateoftheart models on three benchmarks the work is among the studies of enhancing language generation by context retrieval and the new idea is modeling the retrieved neighbor contexts through a graph neural network my concerns lie in the
following aspects 1 comparison it seems that the proposed model is compared with a retrievalaugmented baseline only on wikitext103 why is the comparison not conducted on the other two benchmarks is it because there are no available results on the two other datasets in this case can you implement the model and make the comparison moreover there have been many retrievalaugmented models though some of them are not applied to lm tasks in the initial papers then can you make adaptations and compare with them as well since the major contribution of the work is the gnn module it would be important to demonstrate how useful the component is 2 clarity i feel confused at several places during the review process 1 eq 5 the text below explains that f(·) is a neural language model but afterwards in table 5 it seems that gnn is a better choice what is the exact setting for this equation 2 the first paragraph of section 3 the authors say we retrieve k=1024 nearest neighbors for each source token among them the top 128 neighbors are used in the graph and all of them are used in computing the knnbased probability then can i understand that the graph only involves 128 tokens if so given that gnn is better than transformer in table 5 how do you calculate f(·) in eq 5 for other tokens 3 how do you train the transformer and the gnn there are some comments above figure 2 but do you train the two components iteratively if yes how do you set up the training procedure eg how many iterations do you do how do you warm up the parameters of the gnn 4 eq 6 is really confusing you only compare w_t with the ith token of w_{t_i} why dont you use a soft similarity eg cosine 1 relatively incremental technical contribution to the community 2 relatively weak comparison with baselines 3 many vague points that impede us from understanding and reproducing the work the authors response answered most of my questions though i still do not feel that the technical contribution is big enough i slightly raise my score accordingly
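for context on the eq 5 and eq 6 discussion above the interpolation the reviewers refer to follows the knnlm recipe a minimal sketch of that standard recipe is given below the function name and the lam value are illustrative assumptions not details taken from the paper

```python
import numpy as np

def knn_lm_interpolate(p_lm, neighbor_tokens, neighbor_dists, vocab_size, lam=0.25):
    """kNN-LM style interpolation: p(w) = lam * p_knn(w) + (1 - lam) * p_lm(w).
    Retrieved neighbors vote for their recorded next token, weighted by a
    softmax over negative L2 distances in representation space."""
    logits = -np.asarray(neighbor_dists, dtype=float)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    for tok, w in zip(neighbor_tokens, weights):
        p_knn[tok] += w   # neighbors sharing a next token pool their mass
    return lam * p_knn + (1.0 - lam) * np.asarray(p_lm, dtype=float)
```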
### Summary:
this paper introduces a new type of language model the gnnlm which uses a graph neural network to allow a language model to reference similar contexts in the training corpus in addition to the input context the empirical results are good and the model sets a new sota on the benchmark wikitext103 corpus as well as improving over strong baselines on two other language modeling datasets enwiki8 and billion word benchmark the main drawback as noted by one reviewer is the computational expense of the method with significant slowdowns compared to the baseline two reviewers voted strong accept with a third raising several concerns the largest concern was the lack of comparison to prior work especially prior retrieval based methods on two datasets the authors responded with an ablation study comparing their method to knnlm and showed their proposed gnnlm performs better other concerns raised by the reviewer were the papers lack of clarity the authors should address the reviewers questions during the next revision and incremental technical contribution another reviewer highlighted the papers novelty and this ac agrees it is sufficient for publication overall the method is an interesting if expensive extension of retrieval based language models and the empirical results support its effectiveness
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposes garnet a spectral approach to defend against adversarial attacks on graph structure garnet consists of three steps low rank approximation of the adjacency matrix adaptive filtering on node signals and label propagation all three steps of garnet achieve low computation complexity and thus can be applied to large graphs extensive experiments on both homophily and heterophily graphs show that garnet achieves significant accuracy improvements compared with baselines when the structural perturbations on graphs are stronger strengths the paper is clearly written all the technical steps of garnet are easy to understand the paper presents extensive empirical evaluation of the proposed framework the datasets cover both homophily and heterophily graphs and range from small to large scales weaknesses although the three main steps of garnet are all reasonable they lack some technical significance for example the low rank approximation step follows a standard algorithm and the description overcomplicates the algorithm although the authors call the reconstructed graph a knn graph it is indeed just obtained by rowwise preserving the largest elements in the rankr approximate adjacency matrix according to thm 1 the similarity score used by knn is exactly the element value of the rankr adj matrix the spectral embedding matrix is also wellknown eg used in spectral clustering and the authors are not really utilizing much property of the spectral embedding except deriving the lowrank approximation the adaptive filtering step is also not novel multiple existing works follow a similar form of such filtering eg sign a theorem 2 does not seem useful since the upper bound is way too loose if we compare the upper bound with the scale of the original perturbation then the derived upper bound is even orders of magnitude larger if the upper bound is achieved then it would tell us that adaptive filtering can enlarge the effect of perturbation in addition the upper bound grows quadratically with filtering depth p indicating that heterophily graphs could be problematic from a theoretical perspective while the experiments show significant accuracy gains further clarification could make the improvements more convincing in tables 2 3 and 4 we can see that garnet achieves higher accuracy even on the clean graph i suppose this is due to the adaptive filtering step however since adaptive filtering is a known design it would be better to add one unvaccinated baseline model following eq 2 and 3 in these tables from fig 2 it is clear that reducing the rank is the primary reason for successful defense if this is the case then it is not clear why the similar rankreductionbased method gcnsvd achieves much lower performance than garnet in tables 2 and 3 more explicitly since in fig 2 gnnguard achieves much lower accuracy than garnet i would imagine gcnsvd to achieve much higher accuracy than gnnguard and slightly lower accuracy than garnet in tables 2 and 3 yet this does not seem to be the case a sign scalable inception graph neural networks 2020 i think the paper is below the acceptance threshold due to the lack of technical significance and questions in the empirical evaluation as detailed above

the paper proposes a reducedrank and sparse approximation method to purify the graph structure for better robustness against adversarial graph attacks the approximation is accelerated through spectral embedding and sparsification moreover an adaptive graph filter and label propagation are considered for further improvement on low homophily graphs
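to make the lowrank step dissected in the first review concrete the sketch below implements the generic recipe described there a rankr reconstruction via truncated svd followed by rowwise retention of the largest entries this is a hedged illustration of the general idea not garnets actual implementation and the rank and k values are arbitrary

```python
import numpy as np
from scipy.sparse.linalg import svds

def low_rank_knn_graph(adj, rank=50, k=20):
    """Rank-r reconstruction of a (possibly perturbed) adjacency matrix,
    then row-wise retention of the k largest entries -- the 'kNN graph'
    described above, where the similarity score is just the entry of the
    rank-r matrix. Dense for clarity; a scalable version would avoid
    materializing a_r."""
    u, s, vt = svds(adj.astype(float), k=rank)   # top singular triplets
    a_r = (u * s) @ vt                           # rank-r approximation
    out = np.zeros_like(a_r)
    for i in range(a_r.shape[0]):
        top = np.argpartition(a_r[i], -k)[-k:]   # k largest scores in row i
        out[i, top] = a_r[i, top]
    return np.maximum(out, out.T)                # symmetrize
```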
strengths 1 the paper introduces an efficient reducedrank approximation method to purify the graph structure that is scalable to large graphs 2 it provides theoretical justification for the matrix sparsification method 3 the paper presents lots of experiments to demonstrate the robustness of the proposed method in different settings such as targeted and untargeted attacks homophilic and heterophilic graphs small and large graphs etc weaknesses 1 the proposed garnet combines several techniques proposed in previous works such as lowrank approximation in gcnsvd and prognn adaptive filter learning in gprgnn as well as correct and smooth in the cs method overall the novelty of the proposed method is quite limited and the contribution of this paper is unclear 2 the comparison with the baseline models doesnt seem to be fair from my understanding although the baseline methods such as prognn use the classic 2layer gcn as the backbone model according to their formulation their ideas can be generally applied to more advanced gnn models such as appnp and gprgnn as well in essence garnet in this paper chooses a more advanced and deeper backbone model that adopts an adaptive filter as in gprgnn and adaptive label propagation as in cs to compare with the baselines which only use the basic gcn as their backbone models this doesnt seem to be a fair comparison since the depth eg number of propagation layers and filter design typically have significant impacts on the robust performance therefore the performance improvement is not convincing 3 as far as i am concerned the lowrank approximation idea is essentially similar to existing works and this paper proposes a more efficient way to achieve it therefore regardless of the efficiency and scalability the robust performance should be similar to existing works if appropriate backbone models are chosen in the comparison it is better to have a discussion on this 4 from the ablation study in section 44 figure 2 only compares garnet and gnnguard however gnnguard is less relevant in this context and its performance is much worse than prognn in this setting as shown in table 2 or table 3 it would be more interesting to see how the lowrank approximation in this work improves over prognn instead of gnnguard overall the contributions of the paper are unclear and the novelty seems to be limited in addition the comparison and ablation study in the experiments are not totally convincing i would like to suggest a rejection

this paper proposes a spectral approach towards robust and scalable graph learning namely garnet there are three main components inspired by handling lowhomophily graphs in garnet low rank and sparse approximation of the adversarial graph a variant of spectral gnn with an adaptive filter and label propagation with special interest in the learned filter garnet is claimed to be nearlylinear so that it can be scaled to very large graphs experiments against both targeted and untargeted attacks under the poison setting over a collection of graph benchmark datasets demonstrate that garnet could increase the adversarial accuracy more than stateoftheart defending methods strengths the paper is easy to follow with clear motivation and is well written the design of garnet is clearly motivated by handling lowhomophily graphs due to the common assumption on the homophily of underlying graphs it is of great importance to go beyond these datasets and propose a universal solution for different types of graphs the acceleration of tsvd is remarkable and the scalability of
the proposed method is clearly a strong contribution to the research on the robustness of gnns weaknesses my main concern is that the contribution towards scalability is a little bit overclaimed among the major advantages of garnet the authors claim that garnet has a nearlylinear runtimespace complexity however the total space complexity of garnet is not analyzed and the adaptive filter learning could result in a dense matrix which is then not nearly linear anymore another concern is that the proposed approach is heuristic and can be easily broken by an adaptive attacker see eg 1 for a detailed discussion an adaptive attacker can easily circumvent such defenses by changing their attack to account for the defense for example it is relatively straightforward to add an additional term in the computation of nettacks scores that discourages adversarial edges that have a large impact on the highrank spectrum furthermore if the attacker knows what proposed defense is used they can specifically target the largest singular values to potentially cause even more damage for experiments the experimental setup should be more specific for better reproducibility since no code is available for examination several questions include what package do the authors use to conduct truncated svd i have experience computing the top eigenpairs of the whole ogb graph with scipy which takes hours to finish thus the runtime that is less than 7 minutes is interesting to me for the training on ogbnarxiv and ogbnproducts do the authors use sampling techniques or just train with full batch the difference in evaluation between small graphs and ogb graphs should also be specified in the details of the settings 1 on adaptive attacks to adversarial example defenses neurips 2020 my concerns are mainly from three aspects the complexity analysis is vague while it is one of the main contributions of this work no adaptive attacker is evaluated the reproducibility of the empirical evaluation is a little weak
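as a companion to the filtering discussion above here is a minimal sketch of the gprgnn style polynomial filter that the reviews liken garnets adaptive filtering to the coefficients would be learned in practice and everything here is an illustrative assumption rather than garnets code

```python
import numpy as np

def adaptive_filter(adj, x, gammas):
    """Polynomial spectral filter z = sum_p gammas[p] * a_norm^p @ x with
    a_norm = D^-1/2 A D^-1/2; in GPR-GNN the gammas are learned, here
    they are fixed inputs for the sketch."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    a_norm = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    z = gammas[0] * x
    h = x
    for g in gammas[1:]:
        h = a_norm @ h        # one more hop of propagation
        z = z + g * h
    return z
```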
### Summary:
the paper proposes a method to change the graph structure for better robustness against adversarial attacks the reviewers commend the authors for a clearly written paper and promising results several reviewers expressed concerns about experimental validation specifically comparison to truncated svd and choice of baselines complexity and novelty the rebuttal and followup discussion alleviated some of the concerns but the reviewers still have outstanding issues therefore the ac does not recommend accepting the paper
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper presents a way to integrate 3d perception in hybrid force velocity controllers hfvc where force control is used to maintain contact points between the robot and an object and velocity control is used to move the object for example via a sliding motion the paper proposes hfvc skills that can propose subgoals and whose parameters are gated by a precondition module that tests when the skill is valid and can be applied skills incorporate input from 3d pointclouds which can be used to get rough object segmentation and their parameters include the desired object subgoal poses and robotobject contact points subgoals are then used in task planning assuming knowledge of a skill transition model and the paper evaluates this capability on a real robot with a limited set of simple rectangular or cylindrical objects the main strengths of this paper are a showing that compliant control can be achieved via feedback even with noisy inaccurate and incomplete object perception and b that one can plan over skills as long as a skill transition model is provided additionally the paper is generally wellwritten and the robot experiments are convincing the main weaknesses are w1 there doesnt seem to be much consideration of forceacceleration limits in fact the paper does not provide much detail about the formulation of the optimization problem used to solve hfvcs given a desired object subgoal pose and robotobject contact points i would have loved to see the mathematical formulation of this problem which is currently placed in the appendix as part of the main paper w2 objects are assumed to be rectangular or cylindrical which is limiting as there are multiple objects that do not fall under these categories eg plates cutlery tools it would be great to mention what changes would be required if a broader family of objects were allowed w3 only a single object is assumed to be movable in the scene this is sufficient for showcasing the controller but does not test for object interaction in cluttered conditions this is acknowledged in the limitations section but it would be useful to describe what would need to change to handle multiple moving objects w4 it is not clear what the simtoreal gap for the training of skill preconditioners in simulation is if friction and contact are modelled poorly in the simulator it seems like the preconditioner would also fail it would be great to have a comparison of preconditioner errors when real vs simulated data is being used it is not clear if fig 10 in the appendix refers to simulated data or real data i suppose the former

this manuscript uses hybrid force and velocity controllers hfvc as the skill representation and introduces a pipeline to plan skill sequences for contactrich manipulation the authors first modify hfvcs to work with inaccurate models and estimate object poses using the robot proprioception for closedloop feedback then the authors propose to learn a pointcloudbased precondition classifier to indicate whether hfvc executions will be successful then those learned preconditions are used in a searchbased task planner to complete contactrich manipulation tasks the authors apply the pipeline to manipulation tasks inside a shelf achieving a task success rate of 732 significantly outperforming a baseline without the learned precondition strengths 1 the paper is well written and easy to follow 2 the paper introduces a framework to enable hybrid forcevelocity controllers to accomplish various manipulation tasks without knowing the accurate model and
object pose feedback weaknesses 1 there are many predefined rules in this work such as the prediction success definition 1 the object moved more than 15cm or 20 after skill execution 2 the final pose is within 7cm and 60 of the subgoal pose its not clear whether these rules are generally helpful in a wider range of manipulation tasks more questions about these rules are in the issue section

this work proposed a framework to solve contactrich manipulation tasks based on learning preconditions of hfvc the method relaxed the requirements for accurate models and feedback of hfvc the major contribution is learning the preconditions of hfvc to predict the outcome based on partial point clouds and skill parameters it can be used for task planning resulting in higher task performance furthermore the learned model and framework can be transferred from simulated to real environments strengths it is a promising and meaningful topic to study the combination of nonprehensile and prehensile contactrich manipulation in constrained environments the paper proposed a great framework to approach it the experiments and analysis are solid and thorough the author compared different ablation studies to demonstrate the effectiveness of each component the paper is well written and organized the video greatly demonstrates the performance in action weaknesses the heuristic is inadmissible for the a* algorithm but used for practical reasons it should be mentioned in the main text line 96 it is a quite strong assumption that we assume object geometries are similar to known geometric primitives line 152 the force control direction which is chosen to be as close as possible to the robotobject contact normal while being as orthogonal as possible to the desired hand velocity direction does it mean it is an average between the two orientations what happens if they disagree

this paper deals with the problem of manipulating objects under very constraining environmental conditions where nonprehensile skills are imperative to change the objects configuration so that prehensile manipulation is possible to achieve this the paper proposes to employ hybrid forcevelocity controllers whose activation and sequencing are respectively driven by a learned precondition model and a classical searchbased task planner the proposed framework is tested to manipulate simplegeometry objects through pushing pivoting sliding and toppling skills strengths 1 the proposed approach works under more relaxed realistic conditions such as partial observations and no closedloop feedback making the proposed framework applicable under a broader range of settings 2 the inclusion of a precondition model which depends on the point cloud information which allows the system to assess if the hfvc skill can be executed under the current environment conditions 3 performance of the proposed approach when compared to the ablation studies the proposed framework is extensively evaluated against different ablation conditions consistently providing better results under different settings weaknesses 1 motivation and position of the paper wrt the state of the art when reading the motivation of this work something that got my attention is the lack of related works aspects as simple as hybrid forcevelocity controllers are not accompanied by a reference more importantly the full motivation of this work does not refer to any related work where nonprehensile manipulation strategies have been addressed in the past this gives the impression that the ideas and
motivation introduced in this paper are 100 new which is very far from being true as an example one of the first works identifying and addressing the challenges of nonprehensile manipulation is 1 a phd thesis from 1996 1 k m lynch nonprehensile robotic manipulation controllability and planning cmu phd thesis 1996 in lines 5960 the paper cites 7 papers on nonprehensile manipulation however my first concern about this is why to cite that many papers without providing any insights about them this in my opinion does not bring anything useful to position this paper wrt this bunch of 7 papers because of this it is hard to see how useful these references are and how this paper positions wrt them the paper very briefly summarizes learningbased techniques in just a couple of lines 7172 placing special emphasis on their lack of considering multistep tasks however this paper does not bring any contribution in this regard compared to different papers handling longhorizon tasks or skills sequencing which was not reviewed in this paper therefore i find this review of learningbased techniques a bit unfair as it is unclear if the review focuses on the learning capabilities of the cited works or the multitask features if it is the latter then the paper fails to cite a large body of work addressing that specific problem 2 method description lack of formalism when introducing the proposed method the paper disregards mathematical rigor which is necessary in some parts of the method description for example it is not obvious how the objectsrobot poses are represented ie do we assume euler angles or rotation matrices or quaternions this detail is relevant because depending on the orientation representation one may need to consider specific constraints when working with different pose representations in a similar line there is no formal introduction of the hybrid forcevelocity controllers and their interplay with the torque control framework assumed in the proposed method therefore the lack of mathematical rigor leaves too many details open and makes the method description hard to understand in depth moreover when introducing the searchbased task planner the definition of a planning state is unclear in lines 129130 the paper describes that robotobject contact points are defined evenly spaced on the surface of the object primitives however it is unclear how the geometric shape of the robot endeffector may play a role here if so in other words it seems that the robotobject contact points definition does not consider the geometry of the robot endeffector if this is the case this should be said explicitly if not then it is unclear how the geometry of the endeffector is considered to define this set of points in lines 147155 the paper provides a very highlevel nonformal description of a quadratic programming problem used to optimize the desired robot velocities in the main text there is no mathematical description of what this problem looks like how it is formulated and how it is solved equally important this optimization problem seems to be the core of the framework when it comes to computing the desired velocity commands but no limitations regarding the qp formulation and solution are discussed in the paper although the main text refers to the appendix to know more details about the qp problem i think an appendix should serve as complementary contents and in my opinion the qp formulation is very relevant to understand part of the proposed pipeline in lines 177182 the paper reads we project velocity controls onto the
plane that contains the interpolated endeffector trajectory preventing the endeffector from traveling too far from the intended motion note this is not the same as projecting onto the interpolated endeffector trajectory which would overly constrain hfvc actions and may conflict with force controller commands i read this several times and i still do not understand the motivation for the velocities projection on the plane of the robots trajectory nor the reasons why the latter approach may not work several method choices are not justified or motivated for example the values used for the precondition success definition are not discussed do they work for any kind of robot manipulator any kind of object geometry etc moreover the paper does not describe how these values were obtained on a different note the design of the model architecture is not discussed in detail for example in lines 210211 the paper describes that the skill parameters are added as additional features to the network but no motivation is provided here in summary several model choices look very arbitrary along similar lines to what i have pointed out above the description of the task planner is very highlevel misses technical details and mathematical rigor no definition of objectrobot poses is given no definition of planning state is provided and no formal introduction of the realtime a* algorithm the one used in the paper is given 3 limitations lines 305306 mention some possible limitations coming from the task planner which is not part of the contributions of this paper in my opinion however the paper does not elaborate under which settings or conditions the task planner is not able to find a solution for tasks where a solution exists on a different note the paper does not discuss problems with the perception module is the most important contribution of this paper really flawless last but not least no discussion about the qp formulation is provided which is the part of the framework that outputs the desired velocity commands
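to make the force velocity split that these reviews keep returning to concrete here is a minimal numpy sketch of choosing a force axis near the contact normal and projecting the commanded velocity onto the orthogonal plane the vectors are illustrative assumptions and this is not the papers actual qp which per the reviews is only specified in the appendix

```python
import numpy as np

# illustrative end-effector quantities, not values from the paper
contact_normal = np.array([0.0, 0.0, 1.0])      # assumed unit contact normal
desired_velocity = np.array([0.2, 0.1, 0.05])   # desired hand velocity (m/s)

# force axis: start from the contact normal and remove its component along
# the desired velocity, mirroring the "close to the normal yet orthogonal
# to the velocity" trade-off the review quotes from line 152
v_dir = desired_velocity / np.linalg.norm(desired_velocity)
f_axis = contact_normal - (contact_normal @ v_dir) * v_dir
f_axis /= np.linalg.norm(f_axis)   # assumes normal and velocity are not parallel

# velocity control acts in the plane orthogonal to the force axis,
# so project the desired velocity onto that plane (rank-2 projector)
P_vel = np.eye(3) - np.outer(f_axis, f_axis)
v_cmd = P_vel @ desired_velocity

print("force axis", np.round(f_axis, 3))
print("velocity command", np.round(v_cmd, 3))
print("orthogonality check", float(f_axis @ v_cmd))  # ~0 by construction
```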
### Summary:
|
thank you for your submission to corl 2022 the reviewers have left comments below with key comments summarized here please address these comments in your rebuttal strengths the paper shows that compliant control can be achieved via noisy inaccurate and incomplete feedback it is a promising and meaningful topic to study the combination of nonprehensile and prehensile contactrich manipulation in constrained environments the paper is generally wellwritten the robot experiments are convincing the proposed framework is extensively evaluated against different ablation conditions consistently providing better results under different settings weaknesses objects are assumed to be rectangular or cylindrical which is limiting as there are multiple objects that do not fall under these categories eg plates cutlery tools please discuss what changes would be required if a broader family of objects were allowed only a single object is assumed to be movable in the scene this is sufficient for showcasing the controller but does not test for object interaction in cluttered conditions what would need to change to handle multiple moving objects it is not clear what the simtoreal gap for the training of skill preconditioners in simulation is there are several predefined rules in this work such as the prediction success definition how much were these tuned for the capabilities of the method vs what is needed for the realworld task more discussion is needed of previous work on nonprehensile manipulation see reviewer comments there needs to be more mathematical rigor in the method description see reviewer comments postrebuttal update the reviewers have discussed the authors response and have concluded that most of the issues in the reviews were adequately addressed they also agreed on the importance of the proposed approach for contactrich tasks
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
existing neural fieldsbased methods fail to reconstruct highquality surfaces for larger and complex scenes with sparse viewpoints in this work the authors attribute the issue to inherent ambiguity in the rgb loss which provides insufficient constraints inspired by the area of monocular geometry prediction this paper proposes monosdf which explores the utility of depth and normal cues predicted by generalpurpose monocular estimators experiments demonstrate the geometric monocular priors significantly improve the performance both for single and multiobject scenes strengths the proposed method which incorporates monocular predictions to ease geometry learning is quite clean and easy to apply to any neural implicit method weaknesses the contribution is weak and relatively incremental the entire paper can be seen as a combination of two different parts 1 adding monocular cues as additional supervision to improve mvs 2 exploring different architecture choices for neural implicit representations for 1 although the idea is clean and easy to understand as stated in strengths it is not new in the scenario of learning neural fields for example several papers have already explored using predicted depth and other semantic features in learning nerf this work applies a very similar idea to surface reconstruction however the methodology did not make any changes to the auxiliary loss terms compared to the case of learning standard nerf i would expect different choices or additional analysis due to the task of surface reconstruction instead of simply applying the loss for example compared to nvs will normal cues be more important how the noise of the prediction affects the geometry quality for 2 it is a bit disconnected from the main story it is always a nice contribution to conduct a systematic exploration of the best architecture for surface reconstruction although the findings from the paper about mlp vs explicit grids are not surprising the comparison itself can be a good topic however such experiments can be done at any settings with different loss functions the paper discussed the limitations and social impacts docsepthis paper proposed an approach to address 3d reconstruction from multiview images the paper is built upon several milestone papers and techniques and the results presented are relatively better it started off from signed distance function sdf and volume rendering of implicit surfaces the proposal is the incorporation of two losses ie depth and normal consistency estimated from individual images the estimation or ground truth for supervision is from a pretrained omnidata model 14 the paper also explored several architectures strength 1 the paper is easy to read and the main idea is clearly delivered in essence the paper took an offtheshelf depth estimator to serve as a strong prior 2 the proposed approach is robust across different numbers of images the method can not only be applied to single objects but also large scale scenes weakness 1 the novelty of this paper is insignificant it pieces together several techniques sdf volsdf omnidata etc and many of them are well established i think the paper can potentially compensate for this weakness by addressing the points below 11 the exploration of the monocular geometric cues is also relatively weak it is curious to see how different depth estimators may affect the improvement when incorporating the additional prior 12 it is also peculiar that this paper chose depth and normal as the only two monocular cues to experiment with the omnidata is able to generate high
quality ground truth for 19 more tasks 2 the color aspect of the reconstruction is never demonstrated or explained the results are mostly focused on the geometric reconstruction but not the color appearance or rendered 2d images 3 the experiments for the architectures are confusing the paper first showed results for comparing different architectures without monocular cues and arrived at the conclusion that the best model is multires feature grids then after adding the proposed monocular cues the paper concluded that mlp is the best model this behavior is not well explained the paper has addressed the limitations of the existing model it used for depth estimation however it does not address whether omnidata is the best prior to use compared to other models it also does not address other monocular cues to explore docsepin this work the authors proposed a novel and powerful geometric representation using neural implicit function previous neural implicit functions are trained purely on rgb reconstruction loss and have difficulty in representing more complicated geometry in this work the authors try to address this problem from two directions first they propose novel depth and normal cues that significantly improve the quality of the reconstruction secondly they explored different representation functions including dense sdf grid simple mlp feature grid mlp and multiresolution feature grid mlp both of these changes significantly improve the quality of reconstructed geometry this is a high quality work the idea of using depth and normal cues is simple and effective the proposed multiresolution feature grid mlp representation is also novel and effectively improves the reconstruction quality over the mlp solution the experimental results are thorough and the paper is well written and easy to follow i dont find any particular negative point no as far as i know docsepthis paper presents a framework to utilize monocular geometric cues to improve multiview 3d reconstruction quality efficiency and scalability for neural implicit surface models a systematic comparison and detailed analysis of design choices of neural implicit surface representations including vanilla mlp and gridbased approaches has been presented among these representations a simple mlp architecture performs quite well which demonstrates that mlps are able to represent complex scenes i really like the idea proposed in this paper to improve the reconstruction quality with sparse input shape priors should be added there are several ways to construct the priors one solution is to construct the parametric model for some special types like face and body this paper explores another way with the help of depth estimation from single image although the estimated depth and normal may contain noise or have wrong scales the proposed method handles these issues well except the question listed above i dont have other concerns about this paper
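as a rough illustration of the monocular cue supervision discussed in these reviews here is a small numpy sketch of a scale and shift invariant depth loss plus an l1 and angular normal consistency loss all tensors are synthetic stand ins and the exact weighting used in monosdf may differ

```python
import numpy as np

rng = np.random.default_rng(0)

# rendered depth from the model and a monocular depth prediction for a ray batch
d_render = rng.uniform(1.0, 3.0, size=128)
d_mono = 0.5 * d_render + 0.1 + 0.01 * rng.normal(size=128)  # unknown scale/shift + noise

# monocular depth is only defined up to scale and shift, so solve least
# squares for (w, q) with w*d_mono + q ~ d_render, then penalise the residual
A = np.stack([d_mono, np.ones_like(d_mono)], axis=1)
(w, q), *_ = np.linalg.lstsq(A, d_render, rcond=None)
depth_loss = np.mean((w * d_mono + q - d_render) ** 2)

# normal consistency: l1 plus one-minus-cosine angular terms between
# rendered normals and monocular normal predictions (both unit vectors)
n_render = rng.normal(size=(128, 3))
n_render /= np.linalg.norm(n_render, axis=1, keepdims=True)
n_mono = n_render + 0.05 * rng.normal(size=(128, 3))
n_mono /= np.linalg.norm(n_mono, axis=1, keepdims=True)
normal_loss = (np.mean(np.abs(n_render - n_mono))
               + np.mean(1.0 - np.sum(n_render * n_mono, axis=1)))

print(f"depth cue loss {depth_loss:.4f}  normal cue loss {normal_loss:.4f}")
```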
### Summary:
|
there was a range of reactions to this paper from borderline reject to strong accept although several of the reviewers highlighted that the contribution could be viewed as incremental it is clearly described and robust across different types of scenes and i concur with the three reviewers that give positive ratings therefore i am accepting this paper
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper presents an anomaly detection technique based on ebm energy based model the proposed technique is supposed to help adapt to new tasks faster with just a few rounds of training the proposed algorithm is a combination of several existing techniques a fundamental contribution is hard to identify the paper is very hard to read and also has numerous typos such as line 11 associates associate line 47 learn learns line 146 pixe pixel line 178 sigshink sigshrink line 245 autoencode autoencoder line 249 251 adapt adapting figure 1 does not really have labeled parts a b it also is not very intuitive or informative in explaining parts of the algorithm eg line 135 refers to fig a in relation to the dictionary d but no additional clarity about d is offered by reference to this figure the construction of taskspecific dictionary d has not been explained very well in general it would help to present the content in sections 31 and 32 in a more intuitive manner it certainly helps that the paper does an ablation study sec 43 and shows the benefits of the various parts of the algorithm update after author rebuttal thanks to the authors for addressing my concerns i am revising my scores na docsepthe paper proposes a framework for building an anomaly detector which can be adapted to new tasks at inference time with a few examples from the new task this is in contrast to standard anomaly detection methods which had to be rebuilt for each new task for example a standard autoencoder can be trained for normal tasks like video surveillance and anything with a large reconstruction error can be labeled as anomalous now if the scene or camera changes then the autoencoder will have to be retrained this paper proposes a three tier framework where the trained anomaly detector can be adapted at inference time strengths i the paper clearly defines the three tasks g1 g2 and g3 which makes it easy to comprehend ii the idea of using intermediate examples during the ld process as anomalies and using them to train an ebm is novel similarly the idea of borrowing ideas from sparse autoencoders and retraining it with a few examples makes the detector adaptive iii the results are quite impressive for example in table 1 the proposed method is even competitive with upperbound where the autoencoders or other methods are trained with many samples from each category weakness i my slight concern is that the whole framework might be too complicated to be actually deployed no potential negative societal impact docsepthis work presents a new algorithm for imagebased anomaly detection the algorithm relies upon the contrastivedivergence method for training energy based models and it presents a differentiable sparsecoding layer which seems to be novel beyond the algorithmic innovations the paper also presents practical techniques for improving the speed of generating samples from an ebm for images this work also presents experiments on fewshot learning with their proposed algorithm across new domains the basic idea is to use sparsecoding as a way of approximating whether a novel sample is similar to indistribution samples or not the innovation lies in implementing sparsecoding in a differentiable manner that lets them train the entire architecture in an endtoend manner this paper presents a new algorithm for image anomaly detection with good empirical performance and some innovative ideas i believe the resultstechniques could be quite useful in other contexts where researchers want to perform nearestneighbor decomposition in a differentiable
manner however the techniques in the paper sometimes seem to be put together a little haphazardly and the paper does not do a good job of clearly explaining the contribution of many of the important parts for example a lot of space is devoted in the paper to episodic training as a way of enabling fewshot learning however it isnt clear from the experiments how important the episodic metalearning is for fewshot learning the biggest help for fewshot learning comes from the sparsecoding layer which doesnt have any learnable parameters for example if instead of breaking up the training into i separate episodes where each episode was one task we simply trained in one episode on all the data what would be the performance loss also the paper sometimes makes claims that are hard to understand or verify for example the paper claims that the proposed method is the first to allow adaptation to new tasks with a single forward pass without any training however even a simple process such as finding the distance to the 1nearest neighbor allows some level of adaptation in a single forward pass after the author response the authors replied with more evidence about the importance of metalearning which seems sufficient updating my rating now yes the authors adequately discuss the limitations and potential negative societal impact of their work
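for readers unfamiliar with the ld process mentioned in the second review here is a toy numpy sketch of langevin dynamics on a quadratic energy keeping the intermediate iterates that the paper reportedly treats as anomalous negatives the energy function step size and chain length are all illustrative assumptions not the papers settings

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # toy quadratic energy with its minimum at the origin;
    # a real ebm would use a trained network here
    return 0.5 * float(np.sum(x ** 2))

def grad_energy(x):
    return x  # gradient of the toy quadratic energy

def langevin_chain(x0, steps=50, step_size=0.05):
    """run langevin dynamics and keep every intermediate iterate; the idea
    discussed in the review is that early/intermediate samples are still
    off-manifold and can serve as negatives (anomalies) for training"""
    xs = [x0]
    x = x0
    for _ in range(steps):
        noise = rng.normal(size=x.shape)
        x = x - 0.5 * step_size * grad_energy(x) + np.sqrt(step_size) * noise
        xs.append(x)
    return np.stack(xs)

chain = langevin_chain(rng.normal(size=4) * 5.0)
print("energy at start / middle / end:",
      energy(chain[0]), energy(chain[len(chain) // 2]), energy(chain[-1]))
```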
### Summary:
|
the final consensus from three reviewers knowledgeable in the field was that the paper makes an interesting contribution in the area of anomaly detection the empirical results were seen as particularly impressive and the treatment of intermediate samples from langevin dynamics as abnormal was also seen as offering some novelty my own assessment is a bit more qualified than the reviewers while the empirical results are certainly nice the approach itself seems a somewhat complex combination of ideas from the literature this complexity was also noted in one of the reviews while we uphold the reviewers verdict we encourage the authors to spend a bit more time perhaps in sec 2 drawing out some higherlevel insights of the proposed approach and whether there might be simpler alternatives that could also work well
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors propose using the structured energy network as a loss to achieve faster and better performance on structured prediction problems they demonstrate that their approach achieves competitive results where a dynamic component of the loss can adapt to specific characteristics of the problem at hand the authors present a novel framework where a trainable loss function is used to train a network to perform the inference through a forward pass the paper is well structured and the main concepts are clearly conveyed to the reader the authors created many experiments and considered the ceteris paribus principle to ensure the results could be attributed to a particular change or aspect of their framework ensuring the quality of research they demonstrated their approach on various tasks and datasets obtaining good results nevertheless we miss code to ensure reproducibility or a statement ensuring such code would be made available upon acceptance we did not find potential negative societal impacts in the present work the authors did not provide details regarding the limitations of the proposed approach docsepenergy based networks have to run gradient based inference gbi which can be quite expensive in this paper the authors propose to use an energy based network to develop a structured prediction loss thereafter or simultaneously for the dynamic approach a feedforward network is learned that tries to match the energy networks output modeling structure and a crossentropy loss wrt ground truth outputs combined the proposed technique gives the inference efficiency of feedforward networks yet is able to model structure similar to an energy network strengths the proposed method is quite simple yet effective the idea is fairly obvious but getting the details right is laudable the paper is decently wellwritten and wellmotivated weaknesses limitations should be better discussed the use of a feedforward network to map the input to the gbi output of the energy based network can be thought of as learning to do gradientbased inference it is unclear to what extent the strategy can be expected to generalize relation with infnet the authors argue infnet is a special case of their proposed method while i understand that there is a traintest mismatch for infnet the followup by tu et al 2020 where two separate networks are used for learning an adversarial sampler and for performing inference gets very close to the proposed method it would be helpful to focus more on the discussion separating these methods currently the discussion makes it seem like the proposed seal method is akin to a slight extension post rebuttal the authors justifiably addressed my concerns and i have accordingly raised my scores it will be valuable to discuss the limitations of using a feedforward network to learn to do gbi docsepthe paper proposes the combined use of structured prediction energy networks lossnet as a loss function that considers relations in the output this loss function is then used to train a feed forward network tasknet to speed up inference at test time they proposed both a static and dynamic approach to train these two networks the paper is well written and easy to follow it also does a good job in placing itself in the literature and as the authors state themselves it is a generalization of previous work tu et al 2020 aimed at reducing inference time for energy networks results show remarkable improvement over the state of the art see questions
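a toy numpy sketch of the general idea these reviews describe a feedforward tasknet trained against a cross entropy term plus an energy term from a structured lossnet is given below both networks are random linear stand ins chosen only for illustration and the actual seal objective and architectures are richer than this

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy multi-label structured output: tasknet produces marginals y_hat from x,
# lossnet assigns an energy E(y) that should be low for well-structured outputs
W_task = rng.normal(size=(3, 5)) * 0.1    # tasknet weights (3 labels, 5 features)
W_energy = rng.normal(size=(3, 3)) * 0.1  # pairwise scores; random placeholder here,
                                          # in seal the lossnet is itself trained

def tasknet(x):
    return sigmoid(W_task @ x)

def energy(y):
    # toy structured energy over label co-occurrences
    return -float(y @ W_energy @ y)

x = rng.normal(size=5)
y_true = np.array([1.0, 1.0, 0.0])
y_hat = tasknet(x)

eps = 1e-9
ce = -np.mean(y_true * np.log(y_hat + eps) + (1 - y_true) * np.log(1 - y_hat + eps))
total = ce + 0.5 * energy(y_hat)  # cross-entropy to labels + energy as extra loss
print(f"cross-entropy {ce:.3f}  energy {energy(y_hat):.3f}  combined {total:.3f}")
```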
### Summary:
|
this work proposes using structured energy networks as loss functions for training feed forward networks to solve structured prediction tasks the reviewers find the paper to be well written and easy to follow the contribution is well positioned with respect to the literature and empirical results are strong during the discussion period the authors addressed the concerns of the most negative reviewer sufficiently for them to increase their score i can therefore recommend accepting this paper
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
highlevel summary this paper introduces a bayesian stackelberg markov game bsmg model that considers a defenders uncertainty over attackers types when implementing defensive strategies it also proposes to use a bayesian strong stackelberg qlearning method to learn defense policies by first simulating an adversary to obtain feedback of an attack and then computing the bayesian strong stackelberg equilibrium for the bsmg with a solver in this way this work relaxes the assumption that the defender knows attackers types in existing gametheoretic models for moving target defense strength this paper proposes a gametheoretic model for mtd that learns adversary types via repeated interactions with a simulated attacker it introduces a bayesian strong stackelberg qlearning method that converges to the bayesian strong stackelberg equilibrium of the bsmg empirical results show that the proposed method has advantages over several baselines such as static policies urs and adaptive policies eg b expq and bnashq weakness the bsmg model itself is incremental since it does not provide any additional interesting insights other than adding the bayesian and stackelberg assumption into a markov game regarding the solution it is unclear to me why vanilla qlearning instead of other advanced rl algorithms say the sample efficient rl variant steve by buckman et al 2018 is appropriate in their solution especially when sample efficiency is important here in the experiments while it is helpful to compare bssq with several existing baselines it is unclear whether the performance of bssq is comparable to the bayesian stackelberg game model when the defender has complete information about the attackers types this comparison will generate insights into the extent to which the knowledge of attackers types influences the effectiveness and efficiency of the defenders defense mechanisms minor figures 2 and 4 also figure 6 in the appendix are barely legible the fonts are too small docsepin this paper the authors model a problem of responding to an attacker in a stackelberg bayesian setting with an mdp the authors provide a qlearninglike solution to the problem a proof of its convergence to a stackelberg equilibrium asymptotically and some experiments to show the performance of the proposed method i think that the pros of this paper are the formal and precise characterization of the analysed problem as an mdp with adversaries but in my opinion this does not constitute enough novelty to be published at iclr moreover i have some doubts also on the significance of the provided experiments the proof of proposition 1 is key in the results provided by the paper i think that its proof should be moved to the main paper and due to the fact that it is not easy to follow to be revised to improve readability moreover i would like you to state explicitly the difference in the proof wrt the one present in 48 the experimental evidence you provided is not statistically significant even if the expected value of the different states in the settings you tackled is larger for the proposed method the confidence intervals do not provide enough evidence that the proposed method is performing better than the baseline this dramatically compromises the strength of the experimental results you provided i think that a strong assumption of the proposed framework is the knowledge of the attackers distributions indeed usually one has only a little information about the behaviour of the attackers do you think it is possible to extend what you proposed also to a
setting in which the attackers distribution is unknown do you think it is possible to evaluate also the loss due to lack of information regret in your setting minor section 5 section 5 eg eg algorithm 1 algorithm 1 cant cannot you should proofread the appendix and check it for errors eg 0 geq gamma 1 after rebuttals the authors put significant effort into improving the submission but i am still not convinced by the experimental results they are presenting for instance in figure 4 there is no way of distinguishing between bss and nashq i suggest you increase the number of repetitions of the experiment to highlight the improvement of your method over the literature ones docsep paper summary this paper proposes the gametheoretic model of bayesian stackelberg markov games bsmgs a generalization of markov games as a formalism for studying moving target defense mtd systems a type of defenderattacker game with applications to cybersecurity an algorithm for finding the stackelberg equilibrium in bsmgs called bayesian strong stackelberg qlearning bssq is proposed and an openai gymstyle environment for testing the derived policies in particular mtd settings is introduced which allows for empirical evaluation of the policies effectiveness the paper then shows experimental results supporting the bssq algorithms success at finding the strong stackelberg equilibrium of bsmgs strong points unifying reinforcement learning with leaderfollower games is an interesting direction for research the introduction of new learning environments for these settings is itself a potentially valuable contribution the inclusion of the parameters used for the experiments in section 4 ie discount rate etc aids reproducibility weak points the lack of code with the submission doesnt allow for independent verification of the experiments or for examining the learning environments that have been introduced the sensitivity of the experimental results to choice of parameters is not included which choices of parameters mattered and why the discussion in paragraph 3 of page 5 about solving the bayesian stackelberg game is unsatisfying it appears that bssq can only tractably find a solution when the domain is relatively small given that the paper claims that a milp formulation allows for this to be solved effectively in the test domains then there should be a more detailed discussion of why the test domains presented here are broadly representative of the types of problems that bssq would be expected to be used for solving the experimental results are not presented with sufficient clarity see the questions for authors below decision recommendation i recommend rejecting the paper because i believe that the contributions are not sufficiently broad as to warrant acceptance in addition the experimental results are not described in sufficient detail to give confidence about their significance questions for authors can the authors clarify what is meant on page 16 line 1 by borrowing the game domain which parts of the cited framework does the system presented in this paper reuse and what has been added what does training of an agent in the system look like is learning taking place it appears from page 6 line 17 that there is a decay of exploration rate but what is the training process isnt the difficult part of the learning process already handled by the bayesian stackelberg game solver the number of trials used in the experiments is inconsistent why are 6 trials used in figure 2 mtd for web applications while 10 trials are used in figure 4 mtd for
ids placement what exactly is a trial here is it a training run a test run with a trained agent etc if a trial corresponds to a training run then 100 episodes seems like far too few for agents to learn at least that is the case in most other rl domains for each algorithm figures 2 and 4 show very similar rewards across the episodes does this mean that no learning is taking place for any of the algorithms if these are test results ie the agents have been trained using these algorithms then what did the training process look like what does the time taken by the agents figure 3 signify the choice of baselines here seems to be too weak as the paper says in paragraph 2 of page 7 the poor performance of baselines is expected since they are not modeling adversaries at all additional feedback style suggestions in figure 1 move the numbers in the graphs on the righthand side to outside the chart when that part of the chart is too small to contain the numbers page 13 line 2 the pdf links for equation 1 and 2 dont appear to be correct also 1 and 2 are overloaded in this proof referring both to the conditions for convergence near the top of page 13 and to equations 1 and 2 this should be clarified for easier reading and for removing ambiguity typographical errors page 2 line 4 bayesian stackelberg games 22 6 is should be bayesian stackelberg games 22 6 are page 3 line 1 extends should be extend page 3 line 24 a word like domains is missing after physical 22 and cybersecurity 6 page 4 line 15 can is better should be is better page 4 lines 5 and 4 the goal is repeated twice one of these should be removed page 7 line 6 throws of nashq should be rephrased page 8 line 31 there should be no hyphen in multiple followers page 13 line 2 note the should be note that page 17 line 9 propose should be proposed postrebuttal update thank you to the authors for your detailed responses and for uploading your code minor point your readme assumes that the gnome desktop environment is being used you may want to make the instructions platformindependent my main concerns about the learning process described in the paper remain the authors indicate in their response that it is difficult to quantify which is the most challenging part of the learning process this makes it much more difficult to reason about whether the learning process is primarily about using the bayesian stackelberg game solver and whether the interaction with the environment given the limited number of trials provides limited benefit docsepsummary this paper studies the problem of learning how to adapt the defense methods in the domain of cybersecurity the paper proposes a new model called bayesian stackelberg markov games bsmg to capture the uncertainty of the attackers types as well as their strategic behaviors the authors design bayesian strong stackelberg qlearning that can converge to the optimal movement policy for bsmg the empirical studies verify and support the theoretical results detailed comments the empirical results give evidence that the proposed method is effective in practice however the reviewer had a hard time understanding the model of bsmgs and how the attackers behave in this model 1 given the definition of theta ie the probability distribution of the attackers it seems that the attacker is freshly drawn for each round according to thetak if the state is sk and the distribution is independent of the game history however the state transition function gamma depends on the attackers type and action which is very confusing if the
attackers type is redrawn in each round then how should the attackers reason about their strategies for the current round are they myopic but this contradicts the description of algorithm 1 in which the qvalues of the attackers are computed which implies that the attackers care about the future could you clarify this 2 is it necessary to assume that the discounting factor for all attackers is the same does the result depend on this assumption
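to illustrate the kind of backup these reviews are asking about here is a toy numpy sketch of a defender committing to an action and averaging over attacker types that best respond restricted to pure strategies for brevity the q tables beliefs and sizes are random stand ins and the real bssq algorithm solves a milp for mixed commitment so this is only a simplified stand in

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_def, n_att, n_types = 3, 2, 2, 2
type_prob = np.array([0.6, 0.4])  # defender's belief over attacker types (assumed)
Q_def = rng.normal(size=(n_types, n_states, n_def, n_att)) * 0.1  # defender values
Q_att = rng.normal(size=(n_types, n_states, n_def, n_att)) * 0.1  # attacker values

def stackelberg_value(s):
    """defender commits to a pure action; each attacker type best-responds;
    the defender picks the action maximising the type-averaged value (a
    simplified pure-strategy stand-in for the milp the reviews mention)"""
    best_a, best_v = 0, -np.inf
    for a in range(n_def):
        v = 0.0
        for t in range(n_types):
            br = int(np.argmax(Q_att[t, s, a]))  # attacker type t best response
            v += type_prob[t] * Q_def[t, s, a, br]
        if v > best_v:
            best_a, best_v = a, v
    return best_a, best_v

a, v = stackelberg_value(0)
print("defender action", a, "expected value", round(v, 3))
# a q-learning backup would then bootstrap from this equilibrium value, e.g.
# Q_def[t, s, a, b] += alpha * (r + gamma * stackelberg_value(s_next)[1]
#                               - Q_def[t, s, a, b])
```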
### Summary:
|
the paper proposes a new gametheoretic model bayesian stackelberg markov game bsmg for designing defense strategies while accounting for the defenders uncertainty over attackers types the paper also proposes a learning approach bayesian strong stackelberg qlearning bssq to learn the optimal policy for bsmgs it is shown that bssq converges to an equilibrium asymptotically experimental results are provided to demonstrate the effectiveness of bssq in the context of web application security overall this is an interesting approach and an important direction of research however the reviewers raised several concerns and there was a clear consensus that the paper is not yet ready for publication the specific reasons for rejection include the following i the experimental results are not presented with sufficient clarity no statistical significance tests are performed and the choice of baselines is weak ii the contributions are not sufficiently broad the learning process described in the paper is unclear and the framework requires a strong assumption of knowing the attackers distributions i want to thank the authors for actively engaging with the reviewers during the discussion phase the reviewers have provided detailed feedback in their reviews and we hope that the authors can incorporate this feedback when preparing future revisions of the paper
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
higher resolution than existing methods which is very important in the clinical setting as some tumors can be very small the authors compare to two other weakly supervised baselines and to one fully supervised upper bound 4 different reported metrics the discussion on the strengths and weaknesses of each module section segmentation properties of the global and local saliency maps is interesting and explains well the design choice to combine the two outputs additional ablation studies in the appendix i think the paper would benefit from mentioning that the current task is a case of multiple instance learning 1 not because its solutions are relevant here its not but because it gives i think an interesting and relevant framing to tackle the task this could be especially relevant in a future extension when trying to increase the resolution further for instance by treating the patches selected after the first stage as positive or negative bags it could be plugged into a regular network utilizing such information for instance by using a prior on how much the positive bags are filled with tumor i think a discussion on the training and inference cost is missing especially as two full networks are involved a higher cost will absolutely not invalidate the authors method but it would give some perspective the section on the aggregation function to train the local module should be clearer and more prominent in the paper as it is i believe the main ingredient to the methods success currently it is scattered across the paper when it should be one dedicated subsection in the mainmatter 1 maron o lozanoperez t 1998 a framework for multipleinstance learning advances in neural information processing systems 570576 docsep the paper is well written and easy to follow the method is evaluated on a large dataset and shows statistically significant improvements in terms of segmentation accuracy compared to cam and gmic experiments are very comprehensive and include several ablation studies evaluating the impact of using multiplescale feature maps in the global module randomly sampling patches from negative examples choice of network number of patches in the local module etc the main weakness is the lack of clear methodological contributions compared to previous works in particular the proposed method is very similar to shen et al 2021 which also includes global local and fusion modules and aims to classify highresolution mammography images results are mixed while glam outperforms cam and gmic in terms of segmentation classification accuracy is worse for segmentation it is unclear whether the performance of the method dice of 0390 and 0335 is sufficient for reallife clinical applications docsep1 the problem of detecting lesions from highresolution images with the imagelevel label is important yet difficult the proposed twostage pipeline is reasonable and achieves better performance on a largescale dataset 2 the authors conduct extensive experiments and ablation studies to show the effectiveness of each design component 1 the authors put some important technical details into supplementary materials which causes some difficulty in fully understanding the technical details of the whole framework 2 since the whole framework has two steps and the second step ie local module training relies on the first step model there may be some difficulties in effectively training the whole framework for example the labels of the extracted patches for local module training may be noisy in this case how do you ensure the robustness of the
local model docsep1 the proposed model can process highresolution medical images in a memoryefficient way 2 the paper is wellwritten and easy to follow 3 the method is evaluated on a dataset containing more than one million mammography images 4 the model outperforms existing baselines in the segmentation of both benign and malignant lesions improving the similarity score relatively by 396 and 20 respectively at the same time it preserves classification accuracy overall this paper is good enough and i only have the following concern the proposed method does not compare with enough weakly supervised segmentation methods and it would be better to make these comparisons in the final version
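as an illustration of the multiple instance learning framing and the aggregation function discussed in the first review here is a small numpy sketch of top k pooling of a patch saliency map into an image level score the saliency maps and the k fraction are synthetic assumptions not the papers actual aggregation

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_aggregate(saliency, k_frac=0.02):
    """aggregate a patch/pixel saliency map into one image-level score by
    averaging the top-k activations, a common weakly supervised choice when
    a lesion occupies only a tiny fraction of a high-resolution mammogram"""
    flat = np.sort(saliency.ravel())[::-1]
    k = max(1, int(k_frac * flat.size))
    return float(np.mean(flat[:k]))

# toy saliency maps: a "negative" image and one with a small bright lesion
neg = rng.uniform(0.0, 0.2, size=(64, 64))
pos = neg.copy()
pos[30:34, 30:34] = 0.95  # tiny lesion, roughly 0.4% of all pixels

print("negative image score:", round(topk_aggregate(neg), 3))
print("positive image score:", round(topk_aggregate(pos), 3))
# global average pooling would nearly erase the small-lesion signal:
print("gap scores:", round(float(neg.mean()), 3), round(float(pos.mean()), 3))
```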
### Summary:
|
1 sa 2 wa 1 wr the wr reviewer found the paper to have low contribution compared to previous work and low segmentation accuracy all reviewers agreed that the paper has merit it has been evaluated on a large dataset and the proposed method is supported with a lot of ablation studies the authors addressed the points raised by the reviewers and clarified their differences with the previous work during the discussion on balance i agree with the reviewers that the paper has merit it deals with a very interesting topic and i think that it will be a good contribution for midl 2021 when preparing the final version the authors should address all the points raised by the reviewers
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this work proposes a bregman gradient policy optimization framework for rl two specific algorithms are proposed which are bgpo and vrbgpo where vrbgpo is an accelerated version of bgpo the authors provide convergence rate results for these two algorithms and show their efficiency through multiple numerical simulations in general the paper is well written with some minor grammar issues the developed bgpo and vrbgpo are simple and easy to implement the derived convergence results match the stateoftheart ones numerical results illustrate the effectiveness of the proposed algorithms however we still have some concerns as follows 1 my understanding is that the proposed algorithm is pretty general why does the work only focus on policy gradient rather than general gradient what is the unique property that has been used here to guarantee theoretical results compared to the traditional stochastic gradient methods 2 why do the compared algorithms shown in table 3 need a large batch size ie epsilon dependent while the proposed one only requires constant batch size 3 i agree that vrbgpo is better than bgpo but bgpo is not simulated in figure 3 some other minor issues pgt is not explicitly defined overall i believe that there are some merits of this work regarding the simplicity of the implementation of the algorithms theoretical justification of the convergence rates and numerical instances of verifying the efficiency of bgpo and vrbgpo docsepthis paper studies the convergence of policy gradient algorithms with constraints they modify the vanilla policy gradient with bregman divergence as a regularizer the authors also propose a new variance reduced policy gradient method based on the storm estimator in nonconvex optimization my concern about this paper mainly lies in its lack of novelty both in the algorithm design and the theoretical analysis first of all mirror policy gradient with bregman divergence regularizer was already proposed in other papers for example the algorithm proposed in the current work is almost the same as that in the following paper yang l zheng g zhang h zhang y zheng q wen j pan g policy optimization with stochastic mirror descent arxiv preprint arxiv190610462 2019 jun 25 the difference between the current paper and the above one lies in the variance reduction techniques yang et al uses the spider estimator while the current paper uses the storm estimator proposed in cutkosky and orabona 2019 moreover even the adaptation of storm from nonconvex optimization to policy optimization was already studied in a recent paper feihu huang shangqian gao jian pei and heng huang momentumbased policy gradient methods 2020 therefore combining these methods with a mirror descent update does not seem to provide enough contribution for the publication of this paper after the authors response i thank the authors for their detailed response on the differences of the submission from closely related work given these comments and discussions i am willing to raise my score to accept this paper this paper has limited novelty in both algorithm design and theoretical analysis it is also highly similar to existing papers from several perspectives docsepthe authors consider the optimization problem of an mdp they designed two policy gradient algorithms based on the mirror descent method which are named bgpo and vrbgpo the bgpo algorithm is a momentum mirror descent method that finds an $\epsilon$-stationary policy with $O(\epsilon^{-4})$ samples the vrbgpo algorithm is a stormtype variance reduced mirror descent method
that finds an $\epsilon$-stationary policy with $O(\epsilon^{-3})$ samples the analysis is nicely organized and the authors also provide a couple of experiments to verify their theoretical findings the paper discusses the momentum and storm versions of the mirror descent pg method which is a relatively new result the analysis and the presentation of the results are clear and wellorganized although the result can be expected since both storm and mirror descent as well as their numerous variants are wellstudied the discovery of this paper does imply the convergence of several important special cases of the pg method including the npg and superadam versions of pg so i think this is a good paper the main comments are provided in the main review part here i will add a few comments 1 the authors have used mirror descend several times in the paper please unify terminology and use mirror descent 2 because the second algorithm proposed by the author is a storm type variance reduced mirror descent pg method the authors should also mention the work by yuhao ding junzi zhang javad lavaei on the global convergence of momentumbased policy gradient the algorithms of this paper also apply the storm technique for the same reason the following work by nhan h pham et al a hybrid stochastic policy gradient algorithm should also be cited 3 regarding equation 4 the author should mention that for the objective function with a horizon h there is a truncation error of $\frac{\gamma^h}{1-\gamma}$ compared to the original infinitehorizon mdp 4 regarding assumption 2 and assumption 4 the author should mention that both of them are satisfied automatically given assumption 1 and the fact that all the rewards rsa are bounded 5 regarding assumption 3 the authors should also mention that the importance weights can be bounded theoretically instead of an uncheckable assumption by using the truncated gradient step instead of a gradient step see the work of junyu zhang et al on the convergence and sample efficiency of variancereduced policy gradient method the importance weights actually have some nice properties for the softmax policy the weight is bounded by $e^{c\|\theta_1-\theta_2\|}$ and for the gaussian policy it is bounded by $e^{c\|\theta_1-\theta_2\|^2}$ so a simple truncated update is enough to control this value 6 there are still a few typos and grammar errors in the paper for example in the first row of contribution a the based the should be based on the the authors should carefully check the spelling and grammar in the revision 7 regarding the experiments the authors compared with several different algorithms however there is a small issue wrt the selected algorithms vrbgpo variance reduction vr mirror descent md vrmdpo vr md mdpo vr only trpo ppo no vr no md from these comparisons im not able to see if the md really works the authors should also compare with algorithms such as proxhspga a hybrid stochastic policy gradient algorithm for reinforcement learning which applies the storm technique while not using mirror descent
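for concreteness here is a toy numpy sketch of the two ingredients these reviews discuss a storm style momentum estimator combined with a bregman mirror descent step using the negative entropy mirror map on the simplex the loss momentum schedule and step size are illustrative assumptions and the gradient here is constant in the parameters which makes the variance reduction trivial unlike a real policy gradient

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([0.3, 1.0, 0.1])  # toy linear loss <c, theta> over the simplex

def grad_sample(theta, noise):
    # the same minibatch noise evaluated at a given point; for this toy loss
    # the gradient does not depend on theta, which keeps the sketch tiny
    return c + 0.1 * noise

def mirror_step(theta, g, eta=0.1):
    """bregman step with the negative-entropy mirror map: a multiplicative
    update that keeps theta on the probability simplex"""
    w = theta * np.exp(-eta * g)
    return w / w.sum()

theta = np.ones(3) / 3
d = grad_sample(theta, rng.normal(size=3))  # storm estimator state
for t in range(1, 300):
    theta_new = mirror_step(theta, d)
    a = min(1.0, 1.0 / t ** (2 / 3))        # illustrative momentum schedule
    noise = rng.normal(size=3)              # one shared sample per step
    # storm: d_t = g_t(x_t) + (1 - a) * (d_{t-1} - g_t(x_{t-1}))
    d = grad_sample(theta_new, noise) + (1 - a) * (d - grad_sample(theta, noise))
    theta = theta_new

print(np.round(theta, 3))  # mass concentrates on the cheapest coordinate (index 2)
```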
### Summary:
|
this paper proposes a policy gradient algorithm based on the bregman divergence and momentum method while one reviewer was initially concerned about the technical novelty of the paper given some existing works after the authors response and paper revision the reviewers are all convinced and have reached a consensus to accept this paper thus i recommend acceptance
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposed a skill transfer method which exploits learned semantic correspondence in order to transfer skills from source domains to novel target domains although the means of executing actions to achieve skills might be very different in terms of low level control the proposed method can transfer the skill from the source domain to the target domain with high performance the method was verified in two simulation environments and with real human video demonstration data the method was compared against sota methods that do not use semantic information and shown to exhibit superior performance especially generalization using real human video demonstrations and automatic extraction of skill labels are very impressive the paper is wellwritten but a heavy read a lot of details were only provided in the appendix which makes reading the paper challenging additionally dropping the notation for the sourcetarget domain reduces readability the assumption of predefined skill annotations is a very strong assumption yet if the demonstrations dataset is from robot motions it is acceptable because these movements are programmed or learned for some purpose which might be used as the label of the corresponding segment additionally the authors convincingly showed the use of action recognition models in automatic labeling what is left is the assumption of having the same set of semantic skills between different source and target domains the authors might discuss what happens if the semantic skill set is not exactly the same but similar related to the previous comment in order to match the semantic skills one way might be to focus on the changes the robot creates in its environment even though the changes are not exactly the same with different embodiments and arrangements still the relative changes created on objects by the end effector of the robot would be similar across different domainsenvironments the paper was shown to have superior performance against baselines that do not use semantic information i am curious how it would perform if such information is included in other skill transfer methods for example in acnmp which is a demorl method that can efficiently transfer skills between different embodiments docsepthis paper presents an approach for crossdomain imitation based on semantic skills the approach can utilize demonstrations across substantially differing domains to accelerate the learning of longhorizon tasks this approach can be used to leverage human video demonstrations for teaching new tasks to the robot while being robust to noise in training and demonstration data the approach is tested across two simulated maze and kitchen environments as well as from real world videos of humans to a simulated robot the results show that the approach can accelerate learning from cross domain demonstrations strengths the paper is very well written and organized the motivation and problem setup is very clear the approach presents a promising way to use large scale video datasets for behavior learning in robotics weakness the following points could be addressed what are the computational challenges of learning cross domain skills eg time for episodes how would it transfer to real robot manipulation from a human video demonstration update the complexity of the algorithm is well addressed docsepthis paper proposes a method to leverage demonstrations collected in a different instance of an environment to accelerate the learning process of the same tasks in a new instance eg two different kitchens they do so
by assuming the existence of different datasets one with demonstrations of the full task at a highlevel in a different and nonaccessible environment and a task agnostic dataset collected on the current environment where the agent can interact with the objects the general method is interesting and effective although there are many moving parts and i wonder if it couldnt be made simpler strengths the paper tackles an interesting problem bootstrapping reinforcement learning with demonstrations of long horizon tasks that come from a different instance of the same type of environment results show that the method can indeed accelerate the learning phase and surpasses the performance of many baselines the experiments section is well designed and shows several ablations that help convince the reader of the benefits of the method using a real world video of a person solving those tasks can indeed show the potential of such methods to scale to larger datasets composed of humans solving tasks this data is plentiful online and could possibly enable exponential growth of the capabilities of the robot weaknesses the method is convoluted and has many moving parts there are several losses that are based on regularising the policy with a prior learned on the existing datasets these parts could possibly be swapped for normal supervised learning making it simpler and possibly without sacrificing performance it is unclear if some baselines fail at a highlevel or lowlevel in line 244 the authors specify that a baseline mostly fails because the low level policies fail but the plan is correct this should be better investigated as learning low level policies should be easier and generally not the main point of the paper the task is relatively linear in execution and therefore as they also show simply learning or emulating the sequence of high level actions can achieve good performance as specified before it is unclear if such benchmarks fail at a highlevel difficult to solve or lowlevel easy to solve figure 7 shows that if some skills are missing from the demonstration these techniques indeed fail however i believe it would still be possible to learn a good planner with a larger dataset docsepthis paper focuses on imitation learning of highlevel semantic skills where the source and target domain are different the paper proposes star a model dealing with crossdomain knowledge of longhorizon skills based on an assumption that the lowlevel skills are already learned from the given dataset in both source and target domains the work outperforms several baseline methods in terms of learning speed result metrics and no need for demonstrations in the target domain pros the motivation and problems to be solved are well explained good presentation of related works and their limitations the assumptions in the problem formulation are properly made and do not violate common sense table 1 in the appendix helps understand different models and their training objectives the experiment results look very interesting and promising especially the last experiment on learning semantic skills from human demonstrations in a video the current model solved some of the drawbacks of the related works the limitation section is well written letting me know that the authors share the same concern as mine ie the existence of predefined skills and corresponding data in all the domains cons the approach and model in section 41 are not well formulated from line 113 to line 139 i could not follow the techniques and causality the authors wanted
to explain the role of the latent representation z in the current model is not clear in line 129 the author gives an example saying z captures the robot hand orientation but z should have some other meaning such as a highlevel execution description of the task it is also unclear why a tuple kz can represent a sequence of h steps the model and math lack details and the explanation is not sufficient to help understand the model eg for eq 1 i do not understand how the authors reached this learning objective and the content from line 135 to line 139 does not help me understand it either nor the reasons for capturing the discrete aspects of manipulation skills the model formulation from section 42 onward becomes easier to understand however the entire model seems very complicated it has a lot of submodules that need rich experience to get trained which decreases my confidence that the result can be reproduced by other researchers and that people beyond the authors group can extend this work
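a deliberately tiny python sketch of the shared skill vocabulary assumption the reviews question is given below all skill names and the dictionary of target domain controllers are hypothetical stand ins for learned policies and this is not star itself only the transfer idea in miniature

```python
# source-domain demonstrations carry per-segment semantic skill labels;
# the target domain has its own low-level controllers for the same skill set
source_demo = ["open_microwave", "move_kettle", "turn_on_burner"]

# hypothetical target-domain low-level skills (stand-ins for learned policies)
target_skills = {
    "open_microwave": lambda state: state | {"microwave": "open"},
    "move_kettle":    lambda state: state | {"kettle": "on_burner"},
    "turn_on_burner": lambda state: state | {"burner": "on"},
}

def transfer_execute(skill_sequence, state):
    """replay the semantic skill sequence from a source demo using the target
    domain's own low-level skills; this only works because both domains share
    one discrete skill vocabulary, exactly the assumption the reviews flag"""
    for name in skill_sequence:
        state = target_skills[name](state)
    return state

final = transfer_execute(source_demo, {"microwave": "closed"})
print(final)
```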
### Summary:
|
the paper is well written and organized motivation and problem setup clearly stated tackling an interesting and fundamental problem of bootstrapping rl with information from a different problem instance and showing promising results especially when learning from human videos however there are several concerns raised that should be considered and clarified mainly regarding the explanation of the complex method impact of assumptions and baseline comparisons see the following proposed action items based on the provided reviews more details in the corresponding reviews proposal of action items clarifying the notion of domain as a different kitchen is still a kitchen domain a different domain would be something like a workshop or factory also eg l91 source and target environments are used and then switching to domains both wordings appear in several section of the paper and should be clarified discussing the assumption that semantic skill sets have to be the same for transfer consider a baseline method that also includes semantic information consider an analysis of time and computation complexity clarifying the failures of the baselines high vs lowlevel clarifying the approach and modelling descriptions discussing the consequence of the complex model design involving many different parts pretraining etc maybe having additional ablation studies to investigate their influence following the consensus recommendations of the reviewers and enhanced by the additional updates and clarifications of the paper during the rebuttal my recommendation is to accept the paper
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
summary derive sentence representations from a bidirectional lstm encoder trained to distinguish real sentences from fake ones fake sentences are derived from real ones by swapping two words or dropping a single word yielding two different models a sketch of these corruptions is given at the end of this review the resulting representations are applied to various sentence classification tasks by using them as input to a logistic regression classifier trained for the task results are generally better than similar experiments performed with skipthought vectors trained on the same toronto bookcorpus this is a reasonable idea and the win over skipthought is quite convincing but the paper is short on substance and parts are confusing or superfluous some problems and questions 1 most of section 2 could be omitted since it doesnt really add insight to the wellestablished idea of pretraining parameters on an auxiliary task 2 section 3 calls the conneau et al 2017 transfer approach supervised it also distinguishes between semisupervised approaches that do taskspecific adaptation using labeled data and unsupervised approaches including the current one that also must do exactly that 3 in 42 does the 3layer mlp have nonlinearities in its hidden layers if so its not equivalent to a single linear layer as claimed regardless of whether a nonlinearity is applied to its output if not there is no point in using 3 layers 4 section 5 gives only minimal descriptions of the tasks often just acronym and type presumably because they are borrowed from conneau et al 2017 2018 more information needs to be provided 5 section 6 should show the best results from the conneau et al papers for calibration 6 were the baseline systems also supplied with glove word embeddings do they have the same number of parameters 7 details of the logistic regression classifier 8 why train your method on only 1m sentences since training is fast wouldnt using more text give better results 9 given the recent very strong results from the elmo paper which you cite the current paper doesnt seem complete without some attempt to replicate this as a baseline eg use a deeper encoder combine state vectors through layers etc these features arent incompatible with your objective which might make for an interesting extensiondocsepsummary the paper proposes a discriminative training formulation for learning sentence representations where a classifier is required to distinguish between real and fake sentences the sentences are encoded with a bilstm and the resulting sentence representations are then used in a number of sentencelevel tasks classification entailment and retrieval the experiments show benefits on most tasks compared to skipthought and fastsent baselines and the information captured by the representations is analyzed with probing tasks showing that they are better at capturing certain kinds of information like the presence or order of words the paper proposes a simple and fairly effective approach for learning sentence encoders the basic idea is appealing and the experimental results are fairly good however at present it seems like more work is required for delivering a comprehensive evaluation and analysis my main concerns with the paper are the insufficient comparison with prior work its lack of clarity and organization in certain places and the limited amount of work please see below detailed comments on these and other points as well as suggestions for how to improve some of these issues major comments 1 better baselines and comparisons the results are compared only with skipthought and the weaker
fastsent however there are far better models by now first already in the skipthought paper there is a version combining naive bayes bigram features which performs much better on some benchmarks for example that version would be better than the papers results on mr 804 moreover there have been many newer papers with better results on many of the tasks 1 2 4 and references therein at the very least mention should be made that there are better published results and ideally there should be some comparison to the more relevant papers 1 and maybe others 2 paper organization and clarity i found section 2 to be unnecessarily lengthy and disorganized it mixes motivation with modeling introduces excessive notation sometimes without clearly defining it what is $l_{aux}$ why is $u$ in eq 2 not defined on first usage and digresses to weakly related discussions the link to gans seems vague and the relation to introspective neural networks is not made clear the last paragraph is largely redundant with the introduction there is also a statement that seems just wrong maximizing the data likelihood $p_{enc}(x; \theta_1, \theta_2, \theta_3)$ the data likelihood is $p(x)$ maximizing the encoding of x can be trivially achieved by simply having a constant encoding whose probability is 1 the entire section 2 can be condensed to one or two paragraphs essentially deriving the discriminative training task in equations 1 and 2 on the paper organization level this lengthy section is followed by the related work and then section 4 on training tasks for encoders there is again redundancy between section 4 and 2 consider merging sections 2 and 4 into one methodology section where the general task is formulated the sentence encoding bilstm with maxpooling and binary classifier the mlp are defined and the fake sentence generation is described this would make a better flow and remove excessive text 3 motivation and advantages of the approach the approach is motivated by shortcomings of sentence encodings based on language modeling such as skipthought which are computationally intensive due to the large output space and the complicated decoding process this is an appealing motivation although there have also been simpler methods for sentence representations that work as well as or better than skipthought 1 2 the second motivation is not clear to me and the claim that the training text collection should include many instances of sentences that have only minor lexical differences but found in completely different contexts needs more support either theoretical or empirical why wouldnt a language model be able to distinguish such differences the advantages of the binary classification task make sense the point about forcing the encoder to track both syntax and semantics is interesting have you tried to analyze whether this indeed happens the probing tasks are a good way to evaluate this but most of them are syntactic except somo and perhaps coordinv and bshift still more analysis of this point would be good one concern with generating fake sentences by swapping words is that it would not apply to languages with free word order have you considered how well your approach would work on other languages 4 relevant related work the fake data generation resembles noise used in denoising autoencoders a recent application is in unsupervised neural machine translation 3 but there is relevant prior work see references in 3 the binary classification task resembles that in 1 where they train a classifier to distinguish between the representation of a correct neighbor
sentences and incorrect sentences 5 ideas for more experiments and analysis the results are fairly good by using only 1m sentences how good would they be with the full corpus whats the effect of training data size on the method table 4 is providing nice examples showing how the fake sentence task generates better sentence representations can this be measured on a larger set of examples in aggregate why is tsne needed for calculating the neighbor rank probing tasks are very interesting but the discussion is limited a more detailed discussion and analysis would be useful consider other techniques for generating fake sentences minor comments related work the skipthought decoder is a unidirectional lstm and not a bidirectional one as mentioned right related work more details on supervised approaches would be useful section 41 how many fake examples are generated from every real example have you experimented with this section 42 mentions 2 hidden layers in the mlp but figure 3 indicates 3 layers is there a reason to use multiple layers without a nonlinearity in the mlp this seems unusual in terms of expressivity this is equivalent to using one larger linear layer although there might be some benefit in optimization table 1 seems unnecessary as there is no discussion of how dataset statistics refer to the results its enough to refer to previous work what are some results missing in table 2 specifically skipthought 1m on coco datasets the paragraph on sentence encoder implementation mentions a validation set accuracy of 89 for word shuffle which validation set is that how is convergence determined for word drop in analyzing sentence lengths figure 2 shows the fake sentence to be similar to skipthought on short sentences in sst do you have any idea why also fake sentence is better than skipthought on all lengths in mr not just longer sentences so im not sure theres any signal there figure 3 what is the test set for wordshuffle the idea to create negative samples focused towards specific phenomena sounds like a good way to go writing grammar etc introduction paragraph 3 last sentence start with the introduction paragraph 4 first sentences discriminative training task fake sentence detection discriminative training task of fake sentence detection motivation an useful a useful we assumes we assume then number the number this much this is much motivation do not differ do not differ much related work skipgram skipgram model training skipthought model training a skipthought model section 4 prior work use prior work usesused section 42 space between multilayer perceptron and mlp this also happens with other acronyms page 6 our models however train are trained table 3 caption is bigram in is bigram is coordination is iscoordination page 7 the analysis also indicates but does not figure 3 caption classificationproving task tasks references fix capitalization in paper titles references 1 logeswaran and lee an efficient framework for learning sentence representations 2 khodak et al a la carte embedding cheap but effective induction of semantic feature vectors 3 artetxe et al unsupervised neural machine translation 4 arora et al a compressed sensing view of unsupervised text embeddings bagofngrams and lstms
examples international conference on learning representations iclr 2018 but no direct comparison is made also infersent which is cited as related work trains sentence encoders on snli httpsarxivorgpdf170502364pdf again a comparison is needed as the encoders learned perform very well on a variety of tasks finally the proposed idea is very similar to ulmfit httpsarxivorgpdf180106146pdf which trains a language model on a lot of unlabeled data and then finetunes it discriminatively finally there should be a comparison against a language model without any extra training in order to assess the benefits of the fake sentence classification part of the model it is unclear why the fake sentence construction method proposed by either swapping words or just removing them produces sentences that are fake andor useful to train on sure it is simple but not necessarily fake a language model would be able to discriminate between them anyway by assigning high probability to the original ones and low probability to the manipulated ones not sure we need to train a classifier on top of that i found the notation in section 2 confusing what kind of distribution is $p_{enc}(x; \theta_1, \theta_2, \theta_3)$ i understand that $p(x; \theta)$ is the probability of the sentence given a model but what is the probability of the encoding it would also be good to see the full derivation to arrive at the expression in the beginning of page 3 an argument in favour of the proposed method is training speed however given that less data is used to train it it should be faster indeed in fact if we consider the amount of time per million sentences the previous method considered in the comparison could be faster 20 hours for 1m sentences is 1280 hours for 64m sentences more than 6 weeks more importantly it is unclear from the description if the same data is used in training both systems or not it is unclear how one can estimate the normalization factor in equation 2 it seems that one needs to enumerate over all fake sentences which is a rather large number due to the number of possible word swaps in the sentence i am not sure the generator proposed generates realistic sentences only chicago landed in john on friday is rather implausible also there is no generation method trained here it is rulebased as far as i can tell there is no model trained to generate a fake sentence it is a bit odd to criticise other methods for using lstms with millions of parameters while the proposed approach also uses them a comparison should calculate the number of parameters used in either case what is the motivation for having multiple layers without nonlinearity instead of a single layer
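as a concrete illustration of the swap and drop corruptions described in this review here is a minimal sketch of the two fake sentence generators this is an illustration of the idea not the authors code and the example sentence matches the implausible one quoted above

```python
# minimal sketch of the two fake-sentence corruptions: swap two words,
# or drop a single word; illustrative only, not the authors' code
import random

def swap_fake(tokens, rng=random):
    fake = list(tokens)
    i, j = rng.sample(range(len(fake)), 2)  # needs at least two tokens
    fake[i], fake[j] = fake[j], fake[i]
    return fake

def drop_fake(tokens, rng=random):
    fake = list(tokens)
    del fake[rng.randrange(len(fake))]
    return fake

sent = "john landed in chicago on friday".split()
print(" ".join(swap_fake(sent)))  # e.g. "chicago landed in john on friday"
print(" ".join(drop_fake(sent)))  # e.g. "john landed in chicago friday"
```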
### Summary:
|
this paper presents a new unsupervised training objective for sentencetovector encoding and shows that it produces representations that often work slightly better than those produced by some prominent earlier work the reviewers have some concerns about presentation but the main issue which all three reviewers pointed to was the lack of strong recent baselines sentencetovector representation learning is a fairly active field with an accepted approach to evaluation and this paper seems to omit conspicuous promising baselines this includes labeleddata pretraining methods which are known to work well for english including results from the cited conneau paper while these may be difficult to generalize beyond english this paper does not attempt such a generalization this also includes more recent unlabeleddata methods like ulmfit or radford et als transformer which could be easily trained on the same sources of data used here the authors argue in the comments that these language models tend to use more parameters but these additional parameters are only used during pretraining so i dont find this objection compelling enough to warrant leaving out baselines of this kind baselines of both kinds have been known for at least a year and come with distributed models and code for close comparison
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors propose graph vrnn the proposed method models the interaction of multiple agents by deploying a vrnn for each agent the interaction among the agents is modeled by the graph interaction update on the hidden states of the vrnns the model predicts the true state eg location of the agent via supervised autoregressive learning the proposed model can improve this estimation from partiallyobserved visual observations in the experiment the authors apply the proposed method to basketball and soccer data to model the positions of the players the paper is clearly written however section 32 needs to be elaborated more because using the graph interaction update in a vrnn is one of the main contributions i see two main weaknesses the first is that the states are learned by supervised learning where obtaining the state label ie the agent locations is very expensive indeed the authors had to develop their own soccer game to obtain these labels the second weakness is the weak inconsistent experiment results it seems not clear whether having the graph structure or stochastic modeling is really helping or not for example for the basketball experiment graphrnn works poorly and for soccer graphvrnn works just as well as graphrnn the authors explained that this is due to the simplicity of the player behavior not much stochastic but the result in table 2 shows good performance for graphvrnn for the future prediction task all these make it difficult to buy the claimed argument it is also a limitation that the model requires knowing and fixing the number of agents as minor comments in table 1 graphrnn works better for soccer t4 but is not indicated in bold having a single rnn baseline will be helpful to compare with graphrnn it is confusing to call $s_t$ a belief state because it is observed not latent in the qualitative results i think it can be compared to the heatmap of the true distribution i think the following papers need to be discussed as related works httpsarxivorgpdf180601242pdf httpsarxivorgpdf180203006pdf docsepsummary the paper proposes a method to predict the future statespaces in a multiagent system by combining the visual and temporal information using a mixed blend of graph networks vae rnn gvrnn formulation the proposed approach is evaluated on two sports datasets 1 basketball sequences 2 soccer sequences the authors show how the overall formulation is better than each of the individual components pros 1 the multiagent setting is interesting very natural and has potential for many applications 2 the formulation encodes information about different aspects agents location temporal activities and each agents relation to the others cons 1 the current evaluation is contrived a the task for future state prediction in the current basketball and soccer sequence is not very clear a gaussian distribution defined with time as standard deviation could give similar results b no comparison with the existing approaches i think the work of felsen et al iccv17 is relevant for the given paper and so it would be ideal to do evaluation on the datasets used in their work and if possible compare the different baselines that they have used c the goal is to predict the future state of an agent in a multiagent setting but it is not clear from the evaluation how the presence of multiple agents influences the behavior of an individual d a better way to demonstrate the future statespaces could be through trajectory of ball or players similar to ones shown by walker et al eccv16 cvpr14 the current qualitative analysis is not sufficient to understand what is
happening in the proposed pipeline e more challenging cases to demonstrate the proposed approach consider any multiperson tracking dataset and use the proposed formulation to predict multiple trajectories and hence statespaces at varying time for the people an amazing result could be shown as how a person changes trajectory as a group of people pass by 2 the running example of location of goalie is ambiguous by design goalie has to be near the goal post even if there is no visual information or any other information one can safely say this thing overall i think the work has the potential to be on something really interesting however i think it needs solid experiments and is not yet ready for publicationdocsep1 summary this paper presents a graph neural network based architecture that is trained to locate and model the interactions of agents in an environment directly from pixels they propose an architecture that is a composition of recurrent neural networks where each models a single object independently and communicate with other for the overall environment modeling the model is trained with a variational recurrent neural network objective that allows for stochasticity in the predictions while at the same time allows to model the current and future steps simultaneously in experiments they show the advantage of using the proposed model for tasks of tracking as well as forecasting of agents locations 2 pros novel recurrent neural network architecture to model structured dynamics of agents in an environment outperforms baseline methods new dataset for partially observable prediction research 3 cons forecasting task the authors argue that a discretization needs to be performed because of the many possible futures given the past and also provide an error measure based on likelihood however if trajectories are actually generated from these distributions i suspect the many possible futures generated will be very shaky can the authors provide trajectories sampled from this if sampling trajectories does not make sense somehow can the authors comment on how we can sample multiple trajectories lack of baselines the authors mention social lstm and social gan in the related work however no comparison is provided from a quick glance the authors of these papers work on trajectories however the social principle in those papers is general since its done from the computed feature vector could it have not been used on top of one of the baselines if not could the authors provide a reason why this is not the case additional comments as the authors mention it would be nice to extend this paper to an unsupervised or semisupervised task here are a couple of papers that may interest you httpsarxivorgabs180404412 httpsarxivorgabs170502193 httpsarxivorgabs180607823 4 conclusion overall the paper is well written easy to understand and seems to be simple enough to quickly reproduce additionally the proposed dataset may be of use for the community if the authors are able to successfully address the issues mentioned i am willing to improve my score
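to make the graph interaction update discussed in the first review concrete here is a rough sketch of per agent recurrent states mixed through an edge network before each recurrent update the layer sizes the fully connected topology and the module names are assumptions for illustration not the papers architecture

```python
# rough sketch of a graph interaction update on per-agent hidden states:
# each agent keeps its own recurrent state, and states are mixed through an
# edge network before the recurrent update; sizes, the fully connected
# topology, and module names are illustrative assumptions
import torch
import torch.nn as nn

n_agents, h_dim, x_dim = 5, 32, 16
edge_net = nn.Sequential(nn.Linear(2 * h_dim, h_dim), nn.ReLU())
cell = nn.GRUCell(x_dim + h_dim, h_dim)

def interaction_step(h, x):
    # h: (n_agents, h_dim) hidden states, x: (n_agents, x_dim) observations
    msgs = []
    for i in range(n_agents):
        pair = [edge_net(torch.cat([h[i], h[j]])) for j in range(n_agents) if j != i]
        msgs.append(torch.stack(pair).mean(0))  # aggregate incoming messages
    return cell(torch.cat([x, torch.stack(msgs)], dim=-1), h)

h = torch.zeros(n_agents, h_dim)
h = interaction_step(h, torch.randn(n_agents, x_dim))
```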
### Summary:
|
this paper proposes a unified approach for performing state estimation and future forecasting for agents interacting within a multiagent system the method relies on a graphstructured recurrent neural network trained on temporal and visual pixel information the paper is wellwritten with a convincing motivation and a set of novel ideas the reviewers pointed to a few caveats in the methodology such as quality of trajectories anonreviewer2 and expensive learning of states anonreviewer3 however these issues do not discount much of the papers quality besides the authors have rebutted satisfactorily some of those comments more importantly all three reviewers were not convinced by the experimental evaluation anonreviewer1 believes that the idea has a lot of potential but is hindered by the insufficient exposition of the experiments anonreviewer3 similarly asks for more consistency in the experiments overall all reviewers agree on a score marginally above the threshold while this is not a particularly strong score the ac weighed all opinions which despite some caveats indicate that the developed model and considered application fit nicely in a coherent and convincing story the authors are strongly advised to work further on the experimental section which they already started doing as is evident from the rebuttal to further improve their paper
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper presents a framework clear to generate counterfactual explanations on graphs this framework can be helpful for promoting explainability in graphbased prediction models specifically the authors use a graph variational autoencoder to generate the counterfactual graph and employ independent component analysis ica to find causal relations the generated counterfactual graphs have generalization ability and causality experiments show that clear achieves promising performance in various evaluation metrics originality the paper employs two existing technologies ie graph variational autoencoder and ica to generate the counterfactual graph although the two components are not new the topic is novel one of the contributions optimization is not original many works have employed various models to generate graphs by gradientbased optimization instead of enumeration such as gnnexplainer 1 and 2 existing works have proposed many methods such as information flow 1 and markov blanket 2 to identify causality among latent variables whats the merit of ica over these existing methods quality technically sound with wellsupported claims however the following issues are suggested to be further considered 1 the evaluation metrics are not sufficient one important metric for explanations is human interpretability requiring that the generated explanations be compact the authors should investigate human interpretability 2 in addition counterfactual explanations may be used to defend against attackers the authors are encouraged to conduct experiments on graphs generated by adversarial attacks to verify the effectiveness of the generated counterfactual explanations 3 the work lacks an evaluation of generalization on unseen graphs clarity overall the paper is well written and easy to understand however the following issues need to be clarified 1 the motivation behind employing ica to find causality is not clear 2 why is team culture not the cause of grant application in the example in the preliminaries section 3 the grant application example for describing causality in counterfactual explanations is confusing it is advisable to describe it in mathematical form significance the method proposed in this paper would be helpful to give counterfactual explanations and promote explainability in graphbased data 1 lin wanyu et al orphicx a causalityinspired latent variable model for interpreting graph neural networks proceedings of the ieeecvf conference on computer vision and pattern recognition 2022 2 yang shuai et al learning causal representations for robust domain adaptation ieee transactions on knowledge and data engineering 2021 there are no potential negative societal impacts docsepin this work the authors study the problem of generating counterfactual explanations for graphs using graph neural networks gnn in contrast to some existing studies the work lists three unaddressed counterfactual properties ie i discrete optimization of graphs ii generalization of counterfactuals on unseen graphs and iii ensuring causality in the generated counterfactuals without prior knowledge of the causal model the work leverages a graph variational autoencoderbased framework to propose clear a generative counterfactual explanation generator for graphs that ensures the optimization and generalization of discrete counterfactual explanations for graphs and enforces causality by using an auxiliary variable for estimating the underlying causal model better strengths 1 the paper nicely enumerates three important desiderata for
counterfactual explanation generators for graphs and describes their utilities with respect to counterfactual explanations 2 the formulation of the objective for the clear framework is clearly explained with proper derivations and descriptions of the individual components 3 extensive experiments with both synthetic and realworld graph datasets highlight the effectiveness of the clear framework as compared to the baselines and show the utility of its components weaknesses and questions 1 one of the major drawbacks of the work is that they dont detail the perturbation mechanism used for generating counterfactuals they mention that clear aims to generate counterfactuals with slight perturbations on the explainee graph to elicit a desired predicted label but does not describe the perturbation process this is crucial for understanding the framework as one of the main challenges in evaluating the reliability of generated counterfactual explanations is efficiently perturbing the input data the perturbation process for generating similar graphs using very small perturbation is unclear ie how to generate perturbed instances that are not outofdistribution samples 2 it is unclear from section 47 and appendix c2 how the generated counterfactuals using clear promote model explainability the observation that the framework makes correct perturbation to achieve the target label by visualizing the degrees across the decision boundary is intuitive and shown in multiple previous works 3 the proposed clear framework uses a generative backbone clearvae and claims that it can generate counterfactual explanations for unseen graphs however it should still require graphs to belong to the same distribution as the training data 4 the paper details the problem of optimizing counterfactual explanation on graphs due to its discrete nature but then follows the optimization trick used by previous works for generating a counterfactual adjacency matrix 5 the use of gnnexplainer as a baseline is unclear the author assigns the given graph label to all the nodes inside the graph it would be great if the authors explain this a bit more are they assigning the same label to all nodes and then generating explanations for a given node or are they aggregating all node explanations to generate a graphlevel explanation 6 it would be great if the authors can motivate using causality metric for comparison it feels that the metric is biased towards the proposed framework as the auxiliary variable can provide additional information to identify the exogenous variables in the structural causal model which is captured by the clearvae training process yes docsepthe authors present a counterfactual generation method for graphs the proposed method clear aims to perturb a given graph in a way that is still close to the original while changing a classifier prediction and respecting the causality in the data generating process this is achieved by training a graph vae conditioned on the target label and an auxiliary variable s used by the original data generating process an scm which helps to identify the causal model the overall loss encourages counterfactuals to be close wrt a distance metric to the original graph while achieving the desired label y and inferring the correct latent structure by keeping the latent codes z close to a distribution that is conditioned on s and y in experiments they show that the proposed method achieves a higher number of valid counterfactuals that are more proximal while keeping causal relations than previous state 
of the art they also provide an ablation study on the different parts of their model overall review overall this paper tackles an important problem the method is sound and the results are encouraging some points could be improved such as motivating the need for graph counterfactuals or checking whether the model is repeating the same counterfactual three times therefore i recommend weak accept and i would be happy to raise the score based on the rebuttal and the other reviews strengths the problem of understanding the behavior of ml systems is an important one the proposed method is sound and it performs better compared to previous state of the art the authors provide the code ablations and information for reproducibility weaknesses this work focuses strongly on the method and comparison with previous state of the art but it lacks a bit of perspective on what is the final purpose of this research some more motivation in the introduction and some qualitative results showing interesting findings on a real graph would make this work more appealing i like the example in section 2 so you can focus more on motivating the problem in general rather than giving more examples in the introduction in the ablation study there is no clear difference between vae and clear the difference is however clearer in figure 4 since the proposed method predicts 3 counterfactuals it could just predict three times the same or very similar graph this would improve the validity metric while not adding any additional value to the user several recent works focus on providing a set of diverse 28 a b and nontrivial explanations c unexpected failure modes of the model that are more useful for the end user in fact in appendix d the authors acknowledge that optimizing for diversity is a current limitation but it would be interesting to study whether the vae is repeating the same graph or producing different graphs 28 mothilal ramaravind k amit sharma and chenhao tan explaining machine learning classifiers through diverse counterfactual explanations proceedings of the 2020 conference on fairness accountability and transparency 2020 a smyth barry and mark t keane a few good counterfactuals generating interpretable plausible and diverse counterfactual explanations arxiv preprint arxiv210109056 2021 b hvilshøj frederik alexandros iosifidis and ira assent on quantitative evaluations of counterfactuals arxiv preprint arxiv211100177 2021 c rodríguez pau et al beyond trivial counterfactual explanations with diverse valuable explanations proceedings of the ieeecvf international conference on computer vision 2021 detailed comments originality the proposed framework and particularly the focus on causality preservation when generating counterfactuals are novel to the best of my knowledge quality the technical and written quality are good assumptions are clearly stated and proofs are provided in the appendix clarity overall the text is wellwritten and easy to follow in section 47 i believe that references to figure 8 should be to figure 4 in section 48 it is not clear why you do not test 01 many times you talk about preserving causality and without any context it is not clear if you mean that you are reconstructing a causal graph with directed edges and you want to preserve edge directions or if you want the generated graphs to be compatible with the original scm after some time reading it becomes clear but it would be better if you made it more clear since the beginning significance given the increasing impact of machine learning
applications in our lives it is important to better understand how models make their predictions however the text does not fully transmit the significance of this work reproducibility the authors provide code equations and the necessary assumptions to reproduce their results yes the authors include a section in the appendix
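to ground the discussion of the relaxed discrete optimization and the proximity versus validity tradeoff raised across these reviews here is a minimal sketch with assumed names continuous logits stand in for the discrete adjacency matrix and are thresholded only at inference this illustrates the generic relaxation trick the reviewers refer to not the papers exact objective

```python
# minimal sketch (assumed names, not the paper's exact objective) of a relaxed
# counterfactual search on a graph: continuous logits stand in for the
# discrete adjacency, trading off label flipping against graph proximity
import torch
import torch.nn.functional as F

def counterfactual_loss(adj_logits, adj_orig, classifier, x, y_target, lam=0.5):
    adj_soft = torch.sigmoid(adj_logits)           # relaxed adjacency in [0, 1]
    logits = classifier(x, adj_soft)               # assumed: (batch, n_classes)
    validity = F.cross_entropy(logits, y_target)   # push toward the target label
    proximity = (adj_soft - adj_orig).abs().sum()  # stay close to the original graph
    return validity + lam * proximity

# at inference the relaxation is discretised, e.g.
# adj_cf = (torch.sigmoid(adj_logits) > 0.5).float()
```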
### Summary:
|
this paper proposes a new method for producing counterfactuals on graphs this is performed using a vae on graphs with auxiliary variables to identify independent components and promote causality while this work is mainly a combination of existing ideas the resulting method is not trivial the engaged discussion clarified most of the concerns except a remaining concern around the diversity of the explanations the reviewer encouraged measuring or optimizing for the diversity of explanations that is explanations that are significantly different eg orthogonal from each other in latent space this is not a ground for rejection but it could improve this work and we encourage the authors to add this feature i recommend acceptance of this paper
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper presents lasso a latent subspaces orientation to tackle the domain generalization problem the paper involves both theoretical and empirical results theorem 1 says the target domain loss is bounded by terms involving source domain losses label shift and the latent data shift which reminds me of the theorems from the seminal work by bendavid et al for domain adaptation theorem 2 extends theorem 1 but with latent subspaces theorem 3 says that for a given subspace indicator gamma the mutual information between the subspace and label is lower bounded by the sum of negative source domain losses the proposed method essentially introduces subspaceindicating binary variables in the neural network this seems to work similarly to dropout but with more structure and with learned and datadependent drop rates inference is also done by generating multiple subspaceindicating binary masks and ensembling them experimental results are provided on standard dg benchmarks the trend seems clear that for shallow networks eg alexnet resnet18 the performance is not necessarily better than previous methods but with deeper networks eg resnet50 the improvement is somewhat significant strength the proposed method is solid and novel the empirical results show solid improvement over baselines with a large network backbone weakness theoretical arguments are not particularly strong for example theorems 1 and 2 are rather for domain adaptation when one has access to the target domain so it is unclear how they are relevant to the domain generalization context while formulating a bound with latent space might be new it does not seem to add any better understanding for domain generalization gamma is optimized based on equation 7 and it is unclear what the authors mean by our principle is to encourage $\gamma_d$ to become more independent are there any parts of the optimization problem that promote independence might be good to add a baseline without updating gamma ie adding a dropout layer misc it is unclear what it means by but the gains shrink with resnet50 since larger resnet backbones are known to generalize better in section 413 the trend observed in the paper on the performance wrt the network size seems interesting might be good to add more experimental results with deeper networks to see if the trend holds true while there is a question on the significance of the theoretical results overall the paper proposed a new domain generalization algorithm that is both technically and empirically solid docsep authors point out that the assumption of existence of fixed domaininvariant features and common hypotheses learned from a set of training domains could be overly strict and suboptimal authors propose a new method which doesnt have a single hypothesis shared among domains and give theoretical analysis of the proposed method authors also give results on benchmark datasets 1 authors correctly point out an overly strict and suboptimal assumption that is popular in the literature currently 2 authors claims are backed both by theory and experimental evidence 3 some benchmark papers in the literature which dont have an assumption of domaininvariant features are not considered 12 both these papers dont have an assumption of domaininvariant features and are similar in spirit to the current paper 4 theoretical analysis in the given paper helps to come up with a loss but not actually analyze the method like in 12 can authors comment on the learning theoretic study how does the risk bound change with the number of available domains and number
of examples in the training 5 as we have a large number of domains and a large number of training examples then one should be able to have optimal error or loss i could not draw this conclusion from the theory that the authors provided can the authors comment more on this 6 how were hyperparameters tuned for the baseline method 1 blanchard gilles gyemin lee and clayton scott generalizing from several related classification tasks to a new unlabeled sample advances in neural information processing systems 24 2011 21782186 2 blanchard gilles aniket anand deshmukh rn dogan gyemin lee and clayton scott domain generalization by marginal transfer learning arxiv preprint arxiv171107910 2017 j mach learn res jmlr 22 2021 21 the paper has a rigorous set of experiments and is backed by clearly explained motivation addition of the suggested literature could further improve the paper docsepthis paper critically reexamines the rationality of domaininvariant based methods for dg generalizing to unseen domains without retraining it highlights that tackling all source domains equally without taking the underlying relationship between them to learn domaininvariant features can lead to a suboptimal solution to deal with this it introduces a latent space decouplingbased method called latent subspace orientation lasso to eliminate some irrelevant representations for prediction by this the relationship between latent space and label space can be strengthened thus the model can generalize better the results show that lasso especially with threshold can commonly lead to a marginal performance boost for popular dg benchmarks in the abstract the author claimed that exploring diverse latent subspaces for individual hypotheses learning is superior compared with learning a single hypothesis shared among domains however it is not clear to me what is the significance of doing so the authors try to give an explanation by delivering an upper bound on the target general loss which to me is quite questionable since the target domain is not available during the training phase simply using the mixture of source domains may not reflect the true distribution of the target one besides there are some other weaknesses as follows 1 is theorem 2 strictly lower than theorem 1 i can only find the authors explanation rather than a strict proof this part is essential for this paper besides this is closely related to my given score 2 in section 342 the author proposes to use sampling for model training i wonder about the performance of doing this in a deterministic manner just like using threshold at the testing phase 3 experiments are somewhat lacking 1 challenging datasets eg medical imaging for cv reinforcement learning in mldg should be considered since the proposed method is more related with general machine learning 2 only rsc is considered as the latest sota baseline which is not desired minor 4 for the experimental part why not keep the same manner to select the validation set it would be better to keep this unchanged though i am not an expert in this area i have discussed my comments with my colleagues with rich experience in this area thus i am still confident in my comments docsepthis paper proposes a latent subspace orientation algorithm for domain generalization which is built on diverse latent subspaces and individual hypotheses with labelinformative features extensive theory and experiments show the effectiveness of the proposed method with two variants instead of using a single consistent latent space the authors propose to employ multiple latent
spaces by capturing the labelinformative features from the available source domains some theoretical analyses are given to support their learning scheme and a very simple latent subspace orientation algorithm is proposed to tackle the domain generalization challenges experiments validate the usefulness of the proposed method on different datasets some concerns should be considered for rebuttal 1 generally the proposed method alternately optimizes an indicator function and subspace learning although there are three theorems to support your claim the reviewer still has concerns about the standpoint and update of the subspace indicator apart from these claimed theoretic analyses on the subspace indicator delta the authors should give a more intuitive explanation of why it is useful which can help the readers understand your work rather than lots of equations moreover the reviewer is wondering if we fix these indicator updates after some iterations what will happen to the performance because the reviewer suspects such an update is meaningless for this task in practical systems 2 why is maximizing the mutual information useful in your framework from the reviewers view using other adversarial learning or metric learning can also achieve your objective 3 the presented performance is unsatisfactory when compared to other works published in 2021 although this paper is very simple and easy to implement 4 the authors should clarify why the performance of using threshold outperforms that of using ensemble intuitively using an ensembling strategy has a tendency to have better performance moreover how could you set the value of $\tau$ is it a hyperparameter is there any adaptive learning strategy to determine it or is it set empirically 5 many grammar mistakes are shown in the work and the authors should pay attention to their presentation this paper proposes a simple latent subspace orientation algorithm for domain generalization based on some theoretical and experimental observations the technical parts with some complex theoretical analyses are satisfactory yet the model construction and experimental parts could be improved docsepthis paper introduces a new method for domain generalizable classification given a feature representation of the input the proposed method first samples binary masks from a bernoulli distribution whose parameters are determined by the feature projects the input feature to multiple subspaces by using the masks and applies classifiers associated with each of the subspaces the parameters of the bernoulli distribution are predicted by a module that is learned together with the feature representation and classifiers a sketch of this masking scheme is given at the end of this review although the paper tries to provide mathematical backgrounds and underlying theories of the proposed method unfortunately this reviewer finds no clear justification for learning and using the subspaces also the improvement by the method seems marginal or it even underperforms sometimes when coupled with small backbone networks although the paper presents lots of theorems this reviewer could not find a clear justification
theoretically proven moreover in the third paragraph the authors claim that modeling and utilizing diverse latent representations are the key to the success but the connection between the diversity of multiple latent representations and the latent data shift issue is totally missing further considering the technical details in sec 34 there is no guarantee of the diversity in the proposed method 2 how the subspace hypothesis could reduce the latent data shift in page 5 the authors argue that the compactness of the subspace can reduce the latent domain shift which is however not proven theoretically either 3 how the mutual information maximization is linked to reducing the latent data shift theorem 3 derives a lower bound of the mutual information between groundtruth label and data projected to the subspace ie masked feature representation and accordingly the masking module is learned to maximize the mutual information in eq 6 however there is no justification for this approach nor the connection between the mutual information and the latent data shift or domain generalization in general 4 how the independence of the masking weights leads to better generalization the argument in page 6 about the advantages of such independence is not rigorous the reason following the argument seems indecipherable due to these this reviewer feels the theorems derived in this paper are not very useful to justify the proposed method and cannot find clear reasons for its advantages other comments do not understand why z is a 2d tensor and what each dimension stands for would recommend drawing a figure to better illustrate the overall pipeline the first theorem resembles the well known theorem in literature of domain adaptation it would be useful if their differences are well explained in the paper this reviewer feels the theorems derived in this paper are not very useful to justify the proposed method and cannot find clear reasons for its advantages also the practical value of the method seems limited as its performance is not impressive or even inferior to previous work when it is incorporated with small networks postrebuttal i greatly appreciate the kind responses and the update of the manuscript and would like to upgrade my rating accordingly but since my major concern has not been resolved yet i am still leaning towards rejection
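as a concrete sketch of the masked subspace mechanism described in the last review bernoulli rates are predicted from the feature masks carve out subspaces and test time either ensembles sampled masks or thresholds the rates the dimensions and module names are illustrative assumptions not the papers implementation

```python
# small sketch of the masked-subspace idea: bernoulli mask rates are predicted
# from the feature, masks carve out subspaces, and test time either ensembles
# sampled masks or thresholds the rates; names and sizes are assumptions
import torch
import torch.nn as nn

feat_dim, n_classes = 128, 10
rate_net = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
clf = nn.Linear(feat_dim, n_classes)

def predict(z, n_samples=8, tau=None):
    rates = rate_net(z)                    # data-dependent bernoulli parameters
    if tau is not None:                    # the "threshold" variant
        return clf(z * (rates > tau).float())
    samples = [clf(z * torch.bernoulli(rates)) for _ in range(n_samples)]
    return torch.stack(samples).mean(0)    # the "ensemble" variant

z = torch.randn(4, feat_dim)
print(predict(z, tau=0.5).shape)  # torch.Size([4, 10])
```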
### Summary:
|
this paper proposes a novel method for improving domain generalization based on the idea of learning different subspaces for each domain authors provide theoretical analysis related to their proposal and further evaluate their proposed method on a subset of domainbed benchmark strong points the paper is wellwritten the proposed method is novel authors provide theoretical analysis in support of their proposal the theoretical results seem to be correct empirical evaluation shows that the proposed method improves over baselines on a subset of datasets included in the domainbed benchmark weak points the complexity of the theoretical results makes it very difficult for the reader to get any intuition about the underlying mechanisms at play the theoretical analysis is disconnected from the proposed algorithm it is hard to see how one could end up proposing such an algorithm following the theoretical results i suggest that authors would consider reorganizing the paper with less emphasis on the theoretical part perhaps simplifying the theoretical results and pushing the rest to appendix the empirical evaluation can be improved significantly domain generalization is a very wellestablished area at this point wilds is a carefully designed and wellknown benchmark and showing improvement in that benchmark would be very convincing but unfortunately authors do not discuss or even refer to it they instead report their results on a subset of datasets used in domainbed benchmark the domainbed benchmark is less challenging than wilds but even following domainbed closely and reporting the 3 evaluation metrics on all 7 datasets would have been satisfying however authors only report the results on 3 datasets reporting the results on a diverse group of datasets is particularly important in the case of domain generalization because we know that many methods are able to show improvements on a few datasets but it is challenging to beat the baselines on a significant majority of datasets final decision rationale this is a borderline paper on one hand the proposed method is interesting and novel on the other hand the theoretical contributions are very limited and the empirical evaluation is not strong enough for acceptance given that all weak points mentioned above can be addressed i recommend rejection and i sincerely hope that authors would strengthen their paper by addressing them before resubmitting their work
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors propose the intrinsic sliced wasserstein isw distances for collections of probability measures the isw is a variant of sliced wasserstein sw which exploits the closedform solution of optimal transport on 1dimensional space the authors propose to use the eigenfunctions to project supports into 1dimensional space and use the corresponding eigenvalues as weights isw is hilbert embeddable which allows one to reformulate testing problems over collections of probability measures to traditional ones in hilbert space the authors demonstrate it for hypothesis testing via a resampling based test and testing by pvalue combination from coordinatewise tests strengths the authors propose a novel intrinsic sliced wasserstein isw for collections of probability measures by leveraging eigenfunctions for 1dimensional projection and eigenvalues as weights the paper is easy to follow the authors derive hypothesis testing in the hilbert space for collections of probability measures problems under isw weaknesses the ideas to use eigenfunctions and eigenvalues for sliced wasserstein are interesting however the authors have not demonstrated the advantages of the proposed approach over other approaches eg slicedwasserstein general slicedwasserstein treeslicedwasserstein or the recent sobolev transport a variant of optimal transport on graphs note that all these approaches are hilbert embeddable as the proposed method in theory andor experiments the authors should discuss the complexity of the proposed intrinsic sliced wasserstein and report time consumption in experiments the authors have not discussed the limitations of their works clearly in the text as mentioned in the checklist the authors have not discussed the potential negative societal impacts of their work as in the checklist docsepit is wellknown that the wasserstein distance between two 1d measures can be computed easily hence a common approach in a higher dimensional euclidean space $\mathbb{R}^d$ consists in slicing the space formally projecting on 1d subspaces compute the distances on the slices and integrating to retrieve a proper distance between the input measures this work proposes to adapt this idea in the context of measures supported on a compact manifold $\mathcal{M}$ the key idea is to rely on the laplacebeltrami operator on the manifold which provides a collection of maps $\phi_\ell : \mathcal{M} \to \mathbb{R}$, $\ell \in \mathbb{N}$ eigenvectors of the operator which can be used to pushforward the measures we want to compare to $\mathbb{R}$ namely they introduce $\mathrm{ISW}_2(\mu, \nu) = \sum_\ell \alpha(\lambda_\ell)\, W_2(\phi_{\ell\#}\mu, \phi_{\ell\#}\nu)$ where $\lambda_\ell$ is the eigenvalue of the laplacian operator $\alpha$ is some weighting function and $\phi_\#\mu$ denotes the pushforward of $\mu$ by $\phi$ the authors then prove that $\mathrm{ISW}_2$ defines a metric under mild assumptions the key aspect being that it induces a universal kernel which yields the separation property $\mathrm{ISW}_2(\mu, \nu) = 0 \Rightarrow \mu = \nu$ other properties are mostly straightforward once this distance is introduced the authors provide some approximation results for practical computations eg we do not have access to all the $(\phi_\ell)_\ell$ and similarly just a sampling of the projected measures afterward as this metric is hilbertian meaning that the metric space $(\mathcal{P}(X), \mathrm{ISW}_2)$ isometrically embeds in a hilbert space they design a twosample test between distributions supported on $\mathcal{P}(X)$ whenever $X$ is a compact manifold they showcase the relevance of their approach in different numerical experiments originality using the laplacebeltrami operator as a way
to build a canonical way to slice measures supported on a manifold feels pretty satisfying and is a novel idea to the best of my knowledge on the other hand one could argue that once the idea of using the laplacebeltrami operator to slice the measures has been introduced most of the work aside from the experimental section is essentially an adaptation of standard techniques in twosampletests literature clarity the paper was overall wellwritten it motivates the need to consider that our measures are supported on manifold in a convincing manner eg on a circle for daily event records and aside from few paragraphs see below it is fairly easy to read significance i think this paper provides a new interesting approach that may motivate further practical developments of optimal transport for measures supported on nonflat domain quality i think this is a competent paper which introduces a new idea that may be of interest for the optimal transport community few limitations of the work should be discussed further i did not identify ethical concern specific to this work docsepconsider the set of probability measures on some manifold the authors propose a new distance on this set that behaves in some way like the sliced wasserstein distance this distance is defined as the weighted average of the wasserstein distance between the pushforwards of the two measures by the eigenfunctions of the laplacebeltrami operator on the manifold some properties of this distance are given and the distance is then used in the twosample testing problem where each sample is a sample of iid probability measures on a given manifold some numerical illustrations of the twosample tests are then given sliced wasserstein sw distances have been originally proposed as computationally cheap versions of the wasserstein distance on the euclidean space mathbbrd the idea of the original sw is the following instead of computing the wasserstein distance between two measures in high dimension let us look at a certain number of projections of the measures on lines in mathbbrd then one can compute efficiently a distance between the two measures by averaging the distance between the projections this paper proposes a very reasonable generalization of this idea called the intrinsic sw isw when the support of the measure is a manifold or a graph as lines are no longer defined the distance is defined as an average of projections along the eigenvalues of the laplacebeltrami operator that should describe the geometry of the manifold and therefore replace the lines in mathbbrd the paper focuses on the notion of hilbert embeddability to justify the use of this distance this means that their distance iswmunu is actually equal to etamuetanuh for some feature map eta and some hilbert space h however the authors seemed to have miss a key link with kernel methods indeed the feature map eta defines a kernel on the space px of probability measures on x through kmunu etamuetanuh and the method they propose for the twosample tests is the kernel twosample test method from gretton al the authors seemed to have understand partially the connection as their proofs are adaptations of the ones from gretton al but the authors have missed that they could actually directly used the results from gretton al without any adaptation there are other concepts defined in the paper that can be rephrased in the more standard language of kernels a hilbert centroid def 1 is a kernel mean embedding tpq eq 21 is actually the mmd associated with the kernel on px given by the 
intrinsic sliced wasserstein distance most proofs corresponding to section 41 can be removed it suffices to check that the assumptions given in gretton al are satisfied for the kernel given by the intrinsic sw distance other comments the intrinsic sw distance depends on the choice of a weight function describing the importance given to each eigenvector having to tune such an important hyperparameter is clearly a drawback of this distance compared to the sliced wasserstein distance notation in prop9 in the appendix are extra confusing when defining the eigenvectors of some operator that are defined on some hilbert space h the variable x is used whereas x was used before to note an element of the underlying manifold as we also consider eigenvectors on the manifold this created a lot of confusion at first on my grade i put 4 so far as i believe the link with kernel methods is not made very clear in the paper whereas it is central to the approach proposed i could improve my grade if this link is made more explicit eg say explicitly that the test is a kernel twosample test and that tpq is a mmd and if proofs are significantly shortened by using gretton als theorems directly please tell me if i am missing something and if this test is not a kernel twosample test yes
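For concreteness, here is a minimal numerical sketch of the ISW construction discussed above, assuming the eigenfunctions are available as vectorized callables and that both empirical measures have the same number of uniformly weighted atoms; the exponential weight function is an illustrative assumption, not necessarily the paper's choice:

```python
import numpy as np

def wasserstein_1d(x, y, p=2):
    # Closed-form p-Wasserstein distance between two 1-D empirical
    # measures with the same number of equally weighted atoms:
    # sort both samples and compare order statistics.
    xs, ys = np.sort(x), np.sort(y)
    return np.mean(np.abs(xs - ys) ** p) ** (1.0 / p)

def intrinsic_sliced_wasserstein(supp_mu, supp_nu, eigfuns, eigvals,
                                 weight=lambda lam: np.exp(-lam)):
    # supp_mu, supp_nu: arrays of sample points on the manifold
    # eigfuns: callables phi_l mapping manifold points to R
    # eigvals: eigenvalues lambda_l of the Laplace-Beltrami operator
    total = 0.0
    for phi, lam in zip(eigfuns, eigvals):
        w2 = wasserstein_1d(phi(supp_mu), phi(supp_nu)) ** 2
        total += weight(lam) * w2
    return np.sqrt(total)
```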
### Summary:
In this paper, the authors propose the intrinsic sliced Wasserstein distances and a hypothesis testing framework for the proposed measure. The idea of using eigenfunctions and eigenvalues for sliced Wasserstein is interesting. The authors addressed a part of the concerns raised by the reviewers; however, the advantage over the existing methods (sliced Wasserstein distances and MMDs) is unclear. Thus, the paper needs a major revision and cannot be accepted in its current version. I encourage the authors to revise the paper based on the reviewers' comments and resubmit it to a future venue.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
Regarding the proof of optimality of the fairness-constrained algorithms (finite or infinite), I will mostly cover Q5. I am not sure why it is called "fair": surely fairness can be defined differently for different tasks and different problems. In the RMAB setting, the authors define that it is fair to visit arms that have not been visited for a long time. This is very similar to job scheduling based on priority scheduling combined with longest-job-first, with the exception that we have k actions to be activated at once.

The introduction (first page, right column, last paragraph) tries to motivate the RMAB fairness constraint with applications for public health, yet I really could not understand the two long sentences. What do you mean by "never talking to public health workers" and "moving to bad states"? While it is alluded that arms are patients, pregnant mothers, etc., there is no concrete idea/example to grasp the setting. While reading the paper I kept thinking about an arm as a treatment; maybe this is why I kept being confused about the fairness. Still, the abstract/introduction should be improved.

The problem description is a bit frustrating: arms and states are somewhat coupled but not presented clearly. What do we mean by arms "evolving"? An arm is a patient, but an arm is also the state of the patient. This also confuses me when omega is both a belief state and an expected reward in Eq. 4.

How should one evaluate the novelty of Sections 4.3 and 4.4? Further, Sections 4.3 and 4.4 are both about the uncertainty/unknown transition matrix, where 4.3 is about modeling the transition and 4.4 is about a model-free approach; the section titles could be changed accordingly. I wonder whether there are ways to incorporate fairness constraints naturally into the reward, like an upper-confidence-bound-style idea where there is increased reward for an unvisited individual; it may be hard to incorporate a community-level constraint into the reward structure.

---

Strengths:
1. This paper is the first to consider the fairness constraint within the RMAB problem and proposes a novel algorithm to find the optimal policy.
2. This work considers both finite- and infinite-horizon cases with theoretical guarantees. In addition, experimental results also show that the FaWT algorithm outperforms the other baseline algorithms.

Weaknesses:
1. The requirements for Theorem 1 and Theorem 5 seem too strict. The constraint may be satisfied for the simple simulation instance in the experiments; however, it is not clear whether this assumption still holds for some difficult simulation instances or some real-world applications. If not, the results of Theorem 1 and Theorem 5 cannot provide any guarantee on the performance. It would be better if the authors could give more explanation for those requirements.
2. In the FaWT algorithm (line 3), arm i will be added to the action set if its constraint is not satisfied at the end of a window of length L. When the FaWT algorithm starts to determine the arms at an early step, the fairness constraint is always satisfied, since the number of previous steps is smaller than L; later, however, the number of arms i whose constraint is not satisfied at the end of a window of length L may be larger than k. In this case the algorithm will fail whatever action is chosen, and it is not clear how it works in these situations. Following this weakness, it would be better if the authors could give more detail about the algorithm's performance and show how to avoid this situation. (A sketch of the contested selection step is given after this review.)

---

The problem setting is novel and the proposed solutions are convincing. The writing is strong, the technical contributions appear sound, and the experimental results illustrate the performance and limits of the proposed algorithms.

If I understand correctly, all three algorithms described here assume that arms are functionally independent, yet the examples cited at the beginning of the manuscript (e.g., healthcare applications) typically violate this assumption. For example, treatment options may interact or overlap in ways that allow results from one arm to inform another.

The manuscript is strong, combining novel theoretical and empirical results on an interesting problem that I believe will be of interest to researchers in this area; I therefore recommend acceptance. My only substantive critique is that the text makes no mention of the possibility that arms may be dependent and that exploiting such dependencies could lead to better results (e.g., faster convergence, lower cumulative regret, etc.). This has been the focus of the growing literature on causal reinforcement learning, which began with the MAB setting (Lattimore et al., 2016; Lee & Bareinboim, 2018). Causal approaches to fair policy optimization have also been ascendant in recent years (Zhang & Bareinboim, 2018; Nabi et al., 2019; Creager et al., 2020; Huang et al., 2021). While I appreciate that such considerations may fall outside the scope of this manuscript, which is effectively introducing a new method for fair RMABs, some acknowledgment of the problem would be welcome, as it could pose an interesting direction for future research in RMABs.

Minor notes: on p. 4, "monotonous" should be "monotonic". In Figs. 6-7, the colors for "random" and "oracle" are much too similar; please distinguish these more clearly.
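To make the contested selection rule concrete, here is a hypothetical sketch of one fairness-constrained decision step in the spirit of the reviews above; it is not the authors' actual FaWT pseudocode, and the tie-breaking branch marks exactly the unspecified failure case raised in the second review:

```python
def select_arms(whittle_indices, last_pulled, t, L, k):
    """Pick k arms: any arm not pulled within the last L steps is
    activated first; remaining budget goes to the highest-index arms."""
    n = len(whittle_indices)
    forced = [i for i in range(n) if t - last_pulled[i] >= L]
    if len(forced) > k:
        # Failure mode raised in the review: more arms violate the
        # fairness constraint than the budget allows. Keeping the k
        # longest-starved arms is an arbitrary illustrative choice.
        forced = sorted(forced, key=lambda i: last_pulled[i])[:k]
    rest = sorted((i for i in range(n) if i not in forced),
                  key=lambda i: whittle_indices[i], reverse=True)
    return forced + rest[: k - len(forced)]
```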
### Summary:
Meta-review: the reviewers are in agreement on accepting the paper. There are some concerns around the exposition, especially around the use of the term "fairness"; we strongly encourage the authors to consider this in the final version.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
This paper presents an approach for image-to-image translation that introduces extra layers into the generator which can be trained in an unsupervised way. The paper is generally easy to follow. I have the following concerns: the novelty is quite marginal, since the backbone network and the training process were well developed before, and the technique of employing more layers in the generator seems like simply extending the network. Please consider using different notations in Section 3 to denote images from different domains; also, please use the same subscript style for $G_{AB}$ and $G_{\mathcal{BA}}$.

---

This paper proposes an unpaired image-to-image translation method which applies a pretrained autoencoder and a latent feature transformer (a single block) to perform iterative image transformation. A progressive training and warm-up strategy is used to settle the numerical exponentiation effects caused by powers of layers. In the testing phase, the discriminator is also used to adjust the inference time.

Pros:
1. Compared with the vanilla CycleGAN, the proposed PoL has significantly fewer parameters and similar performance.
2. The flexibility offered by the common embedding space allows the modulation of the transformation strength, or the composition of several transformations.
3. The discriminator is used to adjust the inference time and find the optimal number of iterations in the testing phase.

Cons:
1. CycleGAN is the only competitor in most comparative experiments, which is not sufficient. Besides, CycleGAN is not a good competitor for image restoration tasks (deblurring, denoising, etc.), so the potential of the proposed PoL is questionable; additional comparison results generated by other general image restoration methods [1, 2] should be reported.
2. Is CycleGAN also pretrained on the same dataset as PoL, for a fair comparison?
3. Although progressive training contributes to more natural intermediate outputs, the final output is not satisfactory; for example, note the unnatural patterns on the zebras in Fig. 4.
4. More effective transformation modulation is a major advantage of the proposed PoL, but the provided experiments do not demonstrate this well. I think it would be more appropriate to put the results obtained along the iterations of the recurrent block in the main text instead of in the appendix.
5. Why is the proposed embedding transformer said to be similar to the feed-forward network used in Transformers [3]? It seems that it is just a simple residual convolution module with expansion factor k to adjust the model's capacity, which is not related to Transformers or self-attention.
6. What are the significant advantages of PoL compared with traditional RNNs (LSTM, GRU)? Is it possible to directly replace PoL with RNNs and achieve close results?

[1] Neural sparse representation for image restoration. NeurIPS 2020.
[2] Learning invariant representation for unsupervised image restoration. CVPR 2020.
[3] Attention is all you need. NeurIPS 2017.

---

1. Summary: the submitted paper proposes to use a recurrent residual block for the task of unpaired img2img translation, together with a number of strategies for deciding how many times to apply this block.

2. Decision: I do really like the direction of weight sharing in img2img models, and this work is a pretty nice case: this approach helps to decrease the number of weights and, if done properly, does not harm quality. The results on deblurring/denoising look rather interesting.

Cons: however, the downside of the presented recurrent block is the increased computation time at the inference step; as far as I can judge, this is especially crucial when the discriminator is involved as the stopping criterion. Could you provide a comparison of the inference speed (FPS, FLOPs, or any other measure) between CycleGAN/NICE-GAN and PoL? Second, I believe this approach could also be put into the context of the adaptive computation time research field [1, 2, 3, 4]; this may help to determine the number of layers to apply. Third, the proposed block may be straightforwardly generalized to multi-domain img2img translation and showcased on more interesting and recent datasets against stronger baselines like MUNIT [5], FUNIT [6], etc.; this could make the img2img part of your experiments more solid, I suppose. All in all, to my mind there is great room for improvement for your submission to demonstrate the real power of PoL; therefore, for now I tend to rate the submission a bit below the threshold.

[1] https://openreview.net/forum?id=r1w1oxaf
[2] https://openreview.net/forum?id=skzq3vydf
[3] https://openaccess.thecvf.com/content_cvpr_2017/html/Figurnov_Spatially_Adaptive_Computation_CVPR_2017_paper.html
[4] https://openreview.net/forum?id=hyzdrir9y7
[5] https://link.springer.com/chapter/10.1007/978-3-030-01219-9_11
[6] https://openaccess.thecvf.com/content_ICCV_2019/html/Liu_Few-Shot_Unsupervised_Image-To-Image_Translation_ICCV_2019_paper.html

---

The paper proposes a method for unsupervised image translation between unpaired domains of images. The main idea is an iterative transformation module that operates in the embedding space. Overall, I have the following concerns about the paper.

The motivation for this architecture is unclear. The introduction motivates the model with fractals and iterated function systems (IFS), but that seems to have nothing to do with the types of applications shown here: what do IFSs have to do with denoising and pictures of zebras? This iterated refinement strategy seems more similar to iterative refinement/projection algorithms like conjugate gradient and Richardson-Lucy deconvolution, which are relevant to low-level signal processing operations like denoising/deblurring, but not zebra synthesis. The paper includes as motivation the idea that different levels of transformation can be achieved by choosing different numbers of iterations, but the application of this is shown only for denoising (Table 1).

No comparison is provided to the state of the art in unpaired image translation: Contrastive Learning for Unpaired Image-to-Image Translation, Taesung Park, Alexei Efros, Richard Zhang, Jun-Yan Zhu, ECCV 2020.

Visually, the results are not convincing: not many results are shown, and most do not look better than those from CycleGAN. The results may be cherry-picked, since no statement is given as to how these results were chosen. There is simply not enough visual evidence that the method has advantages over previous work. Additionally, the quantitative comparisons do not give a compelling outcome, though I would put more weight on visual comparisons anyway.

For the task of denoising, it is unclear why one would want to use a general-purpose unpaired translation method: supervised methods ought to be much more effective here, and there is an enormous literature of related work that is not cited or compared with. If one is to use denoising as a motivating application, rather than a toy example, then much more rigor is required.
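For concreteness, the recurrent-residual-block idea discussed in these reviews can be sketched as follows; this is an illustrative reconstruction, not the authors' actual PoL implementation, and the module shapes and expansion factor k are assumptions:

```python
import torch
import torch.nn as nn

class IterativeLatentTranslator(nn.Module):
    """Minimal sketch of the reviewed idea: a pretrained autoencoder
    plus a single residual block applied repeatedly in latent space."""
    def __init__(self, encoder, decoder, channels, k=4):
        super().__init__()
        self.encoder, self.decoder = encoder, decoder
        self.block = nn.Sequential(
            nn.Conv2d(channels, k * channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(k * channels, channels, 3, padding=1),
        )

    def forward(self, x, n_steps):
        z = self.encoder(x)
        for _ in range(n_steps):      # weight sharing across iterations
            z = z + self.block(z)     # residual update in latent space
        return self.decoder(z)
```

In practice, n_steps is the quantity a stopping criterion (such as the discriminator-based one the third reviewer asks about) would have to choose at inference time.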
### Summary:
All the reviewers shared concerns about the novelty and the quality of the results: comparisons with some state-of-the-art methods are missing, and the inclusion of the deblurring/denoising tasks is not convincing. The authors carefully addressed these issues in the rebuttal, but the reviewers did not change their minds afterwards. After carefully examining the results in the paper, the AC agrees with the reviewers that the improvement in image quality, if any, seems to be too small to warrant a publication.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
The paper considers risk estimation for predictors used by planners for robots. Robustness of plans is achieved by pushing the risk estimation into prediction, rather than computing it at planning time. This has two advantages: (a) planners do not need to be updated when considering new predictors and risk measures, and (b) it simplifies planning by using expectations over biased, risk-aware distributions. The paper demonstrates the approach on didactic and real-world datasets.

Strengths: the paper is clear and well written, with good explanations and arguments for all design decisions and theory. The literature review and framing are good.

Weaknesses: an explicit contributions statement at the end of the introduction would strengthen the paper. The paper could highlight the significance and benefits of the simplification more, e.g., what is the impact on runtime performance at deployment?

---

This paper proposes to learn a risk-aware prediction model. The key idea is to learn an extra encoder model of the latent random distribution, minimizing a cost function that biases the latent random variable to generate risk-biased trajectories. The article makes two main contributions: (1) a risk-aware prediction model, and (2) a novel training objective to learn a risk-biased distribution of the latent random variable.

First, this paper tackles an extremely challenging and important problem for the robotics community. Second, the proposed idea is interesting and the approach novel. Overall, the paper is well written, the method is described clearly, and the presented results support the claimed contributions. However, this paper has the following main limitations: (1) it only considers simple scenarios with a maximum of 2 agents; (2) the risk-aware predictions do not depend on the ego's plan.

Therefore, in future work (and in a journal version of this work), I recommend the authors: (1) consider complex scenarios with a high number of agents; (2) evaluate and discuss how this approach would scale to a higher number of agents; (3) discuss whether over-conservative prediction will lead to the freezing-robot problem, and how to address this issue.

The presented results could also be strengthened: (1) present more qualitative results, in the form of a video, to clearly show how the predictions vary with the risk value; (2) how computationally demanding is it to train such a model, i.e., what is the training time and how long does one training iteration take? (3) how does this model generalize to new, unseen scenarios?

Finally, I recommend the authors revise their citations. It is good that they did such a broad literature review, but there are several sentences where the citation does not support the written sentence.

---

There are several recent works that aim to enhance the robustness of planning against various sources of uncertainty; this is typically accomplished by designing risk-aware planning methods. The issue with these methods is that risk is computed approximately, which may lead to unsafe behaviors. This paper follows a different approach: instead of robustifying planning, it robustifies prediction by making it risk-aware. In particular, a new prediction objective is proposed to learn a risk-biased distribution over the trajectories of agents that exist in the environment. This approach reduces sample complexity during online planning, which facilitates real-time implementation. The proposed method is supported by simulation results.

Strengths: the idea of proposing a risk-biased trajectory forecasting framework is quite interesting and novel. This pessimistic forecasting allows the planner to be distributionally robust, i.e., robust to unmodeled sources of uncertainty. The proposed method does not require modification of the planner, and in fact it seems that it can be integrated with various planners (e.g., risk-aware planners and risk-neutral planners). A biased distribution is constructed from which samples are drawn to forecast the future states of agents; this biased distribution allows accurate prediction of future states with only a few samples.

Weaknesses: the evaluation is somewhat weak; it should be improved to demonstrate the distributional robustness of the proposed method more clearly.

---

This paper tackles risk-sensitive planning for vehicle prediction. Existing approaches often fail to make diverse predictions suitable for risk-aware planning; additionally, adapting an existing risk-neutral planning pipeline into a risk-aware one may be costly. Instead of targeting downstream risk-sensitive control, this paper encodes risk into the upstream prediction components. The authors' key insight is that a risk-sensitive measure of the cost is the same as the risk-neutral expectation of the cost under a distribution of trajectories with the risky trajectories upweighted. This means they can use VAEs to learn an importance sampler that biases towards risky parts of the trajectory space, implicitly learning a risk-sensitive measure via a proxy. Experiments are given on a small pedestrian proof-of-concept scenario as well as a larger experiment on the Waymo Open dataset; they demonstrate the usefulness of their method as compared to an unbiased CVAE.

Strengths: the paper is easy to follow, with a good mathematical layout and comprehensive existence proofs accompanying each of the measure formulae. The contribution of the paper itself is a novel synthesis of variational autoencoder methods, cross-entropy importance sampling, and risk-sensitive measures. This combination provides a valuable tool for adding robustness to the predictive layers of perception/control pipelines. The results and figures given, in particular the histograms of sampling for the biased vs. unbiased experiments, do a good job of showing the intuitive operation of the algorithm and how it effectively biases predictive distributions towards risky parts of the space.

Weaknesses: there is an odd statement towards the end of Section 4 about proving that the solution to their optimization problem is unique. This is under the assumption that the decoder is volume-preserving, and they even provide a citation for a method for implementing volume-preserving neural networks; however, they then state that they do not attempt to meet this assumption, yet still present it as evidence that their problem is well posed. I'm not sure I follow this; for the statement to be of any value, I would at least like to understand intuitively what it would take to bring their method in line with the volume-preservation assumption. In Section 5.2 there are a few user-defined constants (s, alpha, beta); it would be nice to see some analysis of how altering these values affects the end result. The work is well situated within most of the related literature it deals with; however, the methods used also draw heavily on importance sampling and rare-event validation, and I would have liked to see more engagement with this work. For instance, how does the work fit alongside approaches like https://ieeexplore.ieee.org/abstract/document/7933977 or https://proceedings.neurips.cc/paper/2018/hash/653c579e3f9ba5c03f2f2f8cf4512b39-Abstract.html? Additionally, how do their methods relate to concepts like the proven-ideal optimal zero-variance sampler from the importance sampling literature? The authors mention in their limitations section that while their method works for risk-aware planning at the 95% level, it may not be suitable for much rarer events. I agree, but I think this point is understated: for the AV domain in which their experiments take place, extremely rare collision events are in fact the norm, with safety requirements several magnitudes stronger than the ones given in the paper.
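The key identity highlighted in the last review (a risk measure under the true distribution equals a risk-neutral expectation under a risk-upweighted distribution) can be illustrated with a small self-normalized importance-sampling estimate of CVaR; this sketch is hypothetical and is not the paper's actual estimator:

```python
import numpy as np

def cvar_importance(costs, log_p, log_q, alpha=0.95):
    """Monte Carlo CVaR_alpha from numpy arrays of costs sampled from a
    risk-biased proposal q, reweighted back to the true distribution p."""
    w = np.exp(log_p - log_q)        # importance weights p/q
    order = np.argsort(costs)
    c, w = costs[order], w[order]
    cum = np.cumsum(w) / np.sum(w)   # weighted CDF of the cost
    tail = cum >= alpha              # the risky alpha-tail under p
    return np.sum(c[tail] * w[tail]) / np.sum(w[tail])
```

A proposal that upweights risky trajectories concentrates samples in the tail, which is exactly why few samples suffice at planning time.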
### Summary:
Summary: this work builds a risk-aware prediction model to construct risk-biased trajectories. It learns an additional encoder model of the latent random distribution by minimizing a cost function biased by the latent random variable. This avoids the need to update planners when new predictors and risk measurements are evaluated, and it simplifies planning by utilizing expectations over biased, risk-aware distributions. The paper uses didactic and real-world datasets to explain the approach. The novel prediction objective learns risk-biased agent trajectories; this strategy decreases online sample complexity for real-time implementation. Experiments on a pedestrian proof-of-concept scenario and the Waymo Open dataset are presented and show improved performance over an unbiased CVAE.

Strengths:
1. The concept of developing a risk-biased trajectory forecasting framework is intriguing and novel; this forecasting technique enables the planner to be distributionally robust, that is, resilient to unmodeled sources of uncertainty.
2. The proposed method does not necessitate any changes to the planner, and it appears that it can be integrated with a variety of planners (e.g., risk-aware planners and risk-neutral planners).
3. To forecast the future states of agents, a biased distribution is constructed from which samples are drawn; with only a few samples, this biased distribution allows accurate prediction of future states.
4. The paper is simple to read, with strong mathematical design and extensive existence proofs accompanying each of the measurement formulae.
5. The paper's contribution is a unique synthesis of variational autoencoder methods, cross-entropy importance sampling, and risk-sensitive measures; this combination is a valuable tool for enhancing the predictive layers of perception/control pipelines.
6. The results and figures provided do a good job of demonstrating the algorithm's intuitive operation and how it effectively biases predictive distributions towards risky parts of the space.

Weaknesses:
1. A limited number of agents is considered; more sophisticated multi-agent scenarios should be considered, and a discussion of how this will scale to more agents should be in the paper.
2. It is not clear how to handle robots failing due to the simplified prediction approach.
3. A discussion of how generalizable the method is was not provided.
4. The citations need to be reviewed and updated, due to the numerous sections in the paper requiring references.
5. It is not clear what the usefulness of the volume-preserving decoder assumption is.
6. Reviewers have listed some related work that has not been discussed.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
This work incorporates symmetries into a convolutional neural network to improve generalization performance and prediction accuracy. It incorporates various symmetries by designing equivariant neural networks and demonstrates their superior performance on 2-D time series prediction, both theoretically and experimentally.

This work studies an interesting and important question in deep learning. However, the reviewer feels that the paper in its current form is difficult to follow, with many places unclear. The overall result is based on a combination of previous work on equivariant convolutional neural networks; the reviewer finds it hard to extract a general methodology or systematic approach for dealing with symmetry from the work, and there is no high-level message conveyed. Below are some more detailed comments:

1. In Section 3.1 (equivariant convolutions), the result in (1) is quite unclear, with little background provided. For the equivariant ResNet and U-Net: what is the implication of the claim that skip connections have no effect on network equivariance? This is not clear.
2. For Sections 3.2-3.5, the reviewer finds it very hard to follow the proposed approaches for dealing with each symmetry; perhaps this is because the reviewer has limited background in equivariant CNNs. In general, the reviewer finds it hard to follow and interpret the proposed methods for dealing with each symmetry, and there is very little background provided.
3. It would be better if the authors could provide more numerical results on real data, beyond simulated data. In Table 3, it seems that some of the methods with symmetry considerations are even worse than the vanilla ResNet and U-Net, which is a bit confusing.

Overall, the reviewer feels that the study is interesting, with improved ways of dealing with symmetry for learning complex physical dynamics, but the work needs significant improvement in presentation.

---

This paper studies improving the modeling of physical dynamics with equivariant neural networks. In particular, it focuses on a new type of data governed by physical models. Several special symmetry groups are considered to better characterize the system, including uniform motion equivariance, resolution-independent scaling, resolution-dependent scaling, etc. Simulation results show that the proposed equivariant models yield better accuracy and physical consistency than the non-equivariant models, even with data augmentation when the type of distributional shift is known. Results on real-world data show that some of the equivariant models can generalize better than the non-equivariant models.

Pros: the idea of using equivariant networks for physical dynamics seems well motivated. In cases where global alignment is difficult and the distributional shift is unknown, improving generalization by incorporating known symmetries seems to be a natural idea. Although the idea of equivariant networks has been proposed before, the proposed treatments tailored to modeling physical dynamics are new.

Cons: it is claimed that the data is governed by a differential equation which has several symmetry properties; however, how the ResNet and U-Net networks are used to solve the dynamics prediction problem is missing from the main text. Perhaps for the same reason, the connections to the differential equations are unclear. This paper is not quite self-contained, and the content is targeted at a narrow audience.

Questions: is data augmentation available as a baseline for the experiments in Table 3? It seems that different kinds of symmetries are incorporated separately; I am not sure if this is a limitation. If a system is known to satisfy multiple symmetries, is it possible to incorporate all of them together in one network?

---

Physical dynamics have symmetry properties which can be leveraged by neural networks for better accuracy and generalization. This paper takes the 2-D Navier-Stokes (NS) equation as an example and redesigns the convolutions in the networks. Simulations and experiments on real data are conducted to verify that the new models are effective.

The overall logic of the work is reasonable; however, the novelty and technical contribution are limited: (1) the symmetries of the NS equations are well studied and borrowed from [37]; (2) the design of the equivariant networks (i.e., equivariant ResNet and U-Net) is incremental compared to [9, 10] and [47].

Some further questions: (1) are the proposed redesign for uniform motion equivariance in Eqn. 2 and that for scale equivariance in Eqn. 5 compatible? Can we design networks that simultaneously have multiple kinds of symmetry, potentially even learning the symmetry of an unknown system automatically? That would be more attractive to me than the current approach. (2) What is the detailed implementation of Augm in Table 1: how many augmentations are used per instance on average, and will the performance increase with the number of augmentations? Is it possible to do Augm for the ocean dynamics? More experiments would firmly validate the superiority of the proposed method over the data augmentation approach. (3) Should we keep convolutions, which were originally designed for images, in physical dynamics?

Sections 2.1-2.3 are hard to follow; fortunately, I can get the main idea while almost skipping this part. Maybe this part does not need to be that rigorous; otherwise, please move some theoretical results into the body to highlight the technical contribution. Overall, this paper proposes methods to make CNNs symmetric in the sense of the NS equation dynamics; the technical contribution needs to be highlighted.

---

Summary: the paper demonstrates that incorporating equivariance (i.e., symmetries) into a model for predicting fluid dynamics improves its performance, especially when the test distribution is transformed by those symmetry groups. Leveraging the recent literature on equivariant CNNs, the paper proposes CNN models that are equivariant with respect to known symmetries of the Navier-Stokes equations: time/space translation, rotation, uniform motion, and scaling. This approach is validated on two fluid dynamics datasets: a synthetic dataset on Rayleigh-Benard convection and a real-world ocean dynamics dataset. On the synthetic dataset, the proposed models demonstrate better performance under distribution shift; on the real-world dataset, the models yield predictions that are more accurate and physically consistent.

Strengths:
1. The problem setting and datasets are realistic and could have high impact: given the growing importance of machine learning in physical dynamics and climate science, a better understanding of how to incorporate prior knowledge (symmetries) could lead to better modeling.
2. The claim of generalization under distribution shift is well validated: the paper identifies 5 types of symmetries of the Navier-Stokes equations and, in the synthetic dataset, perturbs the test set by applying transforms from these symmetry groups. The experimental results show that models incorporating those symmetries perform better on the perturbed dataset, though it is unclear how a model incorporating one symmetry (e.g., scale) performs on a test set transformed by another symmetry (e.g., rotation).

Weaknesses:
1. The claim of model equivariance is not directly validated: even though the main claim is that incorporating equivariance leads to better generalization, model equivariance is never measured directly. There could be components of the model that are not equivariant, such as pooling or convolution with stride 2 (at least the U-Net architecture uses stride 2, so it might not be equivariant). It would be interesting to measure the degree of equivariance and see how it relates to the generalization ability; a sketch of such a measurement is given after this review.
2. The sample complexity claim is not validated: in the abstract, the proposed models are claimed to enjoy favorable sample complexity, but there is no experiment validating this claim. For example, how do the models perform with varying amounts of data?
3. The need for separate models for each of the different symmetries: for each of the rotation, uniform motion, and scale symmetries, there is a model that incorporates that symmetry, but it is unclear if these could be combined. The conclusion claims that "all of our equivariant models can be combined", but it also states that there does not exist a single model with equivariance to the full symmetry group of the Navier-Stokes equation.
4. (Minor) Limited novelty: it is not clear how the proposed models differ from equivariant CNNs in the literature [9, 10, 11, 47]. For symmetries not covered in existing work (e.g., uniform motion), the proposed equivariant model seems ad hoc. However, I recognize that making all existing architectures work for this realistic problem is nontrivial, so this is only a minor weakness.

Overall, I am on the fence about accepting/rejecting: the proposed models are shown to generalize well under specific distribution shifts, but the main claim about the role of equivariance is not yet validated directly; addressing this concern would make the paper stronger. Additional feedback and questions: the visualization in Figure 1 is great. The intro mentions Noether's theorem about the correspondence between symmetries and conserved quantities; is there any evidence that the proposed models conserve some quantity?

After rebuttal: thank you for addressing my concerns. I believe the claim that equivariance leads to improved generalization (weakness 1 above) has now been validated more directly. Regarding the validation of the sample complexity claim, a more direct investigation of performance against training set size for equivariant and non-equivariant models would be more convincing; I understand that augmenting data does increase the training set size by a factor of 3, but the training set is then not i.i.d., so it is not clear what the effective size of the training set is. In any case, I do believe that equivariance reduces sample complexity. Overall, I vote for accepting.
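The direct equivariance measurement requested in the last review could look like the following sketch, where the group actions and the rotation example are illustrative assumptions rather than the paper's protocol:

```python
import torch

def equivariance_error(model, x, g_in, g_out):
    """Relative equivariance gap ||f(g.x) - g.f(x)|| / ||f(x)||.
    g_in / g_out apply the group action on the input and output
    spaces, e.g. a 90-degree rotation of a 2-D field."""
    with torch.no_grad():
        lhs = model(g_in(x))
        rhs = g_out(model(x))
    return ((lhs - rhs).norm() / rhs.norm()).item()

# Example usage: scalar-field rotation via torch.rot90 (note that a
# velocity field would additionally need its vector components rotated).
# rot = lambda t: torch.rot90(t, 1, dims=(-2, -1))
# err = equivariance_error(model, batch, rot, rot)
```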
### Summary:
Symmetries play an important role in physics, and more and more papers show that they also play an important role in statistical machine learning. In particular, employing symmetries might be the key to improving the training and predictive performance of machine learning models. In this context, the present paper shows how prior physical knowledge can be leveraged to improve neural network performance, in particular within deep dynamics models; to this end, the authors show how to incorporate equivariance into ResNets and U-Nets for dynamical systems. On a technical level, as pointed out by the reviews and also clearly acknowledged by the authors, the basic building blocks are well known in the literature. However, dynamical systems also raise their own challenges (resp. laws) when it comes to modelling symmetries: as the authors argue in the paper, and also clarified in the rebuttal, it pays off to adapt the techniques known from the literature to deal better with scale, magnitude, and uniform motion equivariance. This is a solid contribution and will help many others who want to apply DNNs to dynamic and physical models.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
The authors have implemented a semi-supervised model for inferring beehive characteristics (frames, disease severity, and disease type) based on audio sensor and environmental sensor data.

Contributions:
1. Sensor and environmental data collection from beehives, together with sparse inspection data. This may be an important contribution to the domain, but not a direct contribution to ICLR.
2. Creating and training a semi-supervised model to infer beehive characteristics based on a low quantity of data in general and sparse inspection data in particular. The authors propose to use an embedding module that is trained for reconstruction. The approach may be applicable to other domains with low quantities of audio data.

I am not an expert in beehive monitoring, so it is hard for me to assess the novelty of the work as specific to this domain. Looking from a general point of view, the approach seems reasonable given the low amount of data available and the even lower amount of inspection data. Although I understand why only low amounts of data are available, the lack of data still raises questions: whether the model is accurate enough and useful without larger amounts of data, and whether, if more data were available, the model architecture should change significantly to achieve better accuracy.

Specific questions and concerns:
1. Is there a reason why hive placement is discussed in the inspections section? I think that placement is independent of inspections.
2. I found the figures to be illustrative but not very informative. For example, Figure 3 does not tell much, except that some predictions are close to the actuals and some are quite off. However, this is not really informative: since the paper is handling a vertical domain, in my opinion there should be discussion of precision vs. recall and possibly the cost of false positives vs. false negatives. If you have a false positive for disease, the cost would be an additional human inspection, while if you have a false negative, the cost might be hive loss. I acknowledge that there might be difficulties in addressing this, since the authors have very few inspection observations.
3. Related to point 2: in Table 1, the performance for disease severity and frames is given as the Huber loss. It is not clear how that relates to real-world metrics that someone monitoring hives would consider; how does the Huber loss translate into accuracy, or into accounting for false positives and false negatives?
4. Figure 4 is not referenced and not discussed in the text.
5. Figure 5 is also illustrative for eyeballing the similarities/differences, but it would be nice to have a quantitative evaluation of similarity, possibly across frequencies.
6. Figure 6 (right-hand side) is again nice for eyeballing. It seems that the "latent 2" output is the biggest differentiator between healthy and unhealthy hives; perhaps this should be investigated and mentioned. The audio outputs seem to be less useful. I would be concerned that eyeballing may mislead one into thinking that the problem can be solved by just looking at the picture, and that the picture contains enough information for classification.
7. What happens to the prediction if audio is excluded and only environmental sensors are used for prediction? Is the accuracy very bad?

In summary, this is a nice proof-of-concept work in a specific vertical domain. It may have some lessons for other domains, but I am not sure it is strong enough for an accept.

---

The paper presents a semi-supervised model to predict the vitality of beehives. The inputs to the model are data from sensors: audio on the one hand, and environmental data on the other (such as temperature and humidity). The objective is to predict three values of interest simultaneously: the frames (state of the beehives), the potential diseases, and their severity. The architecture is composed of two modules: the first is an autoencoder in charge of embedding the audio spectrogram in a low-dimensional latent space, and the second is an MLP that predicts the outputs from the latent spectrogram and the environmental data (a sketch of this design is given after this review). The paper presents results of the proposed architecture on a small dataset, an ablation study to show the benefits of the autoencoder module and the role of the environmental data, and a latent-space analysis to understand the ability of the model to capture relevant audio information linked to the diseases.

In my opinion, the paper is off-topic. The presented model is very simple, with nothing innovative except the application to beehives. The ML aspect of the paper is quite small (just one reference is ML-oriented; all other references are from the biology field), and the choice of the model is not really explained: why not something sequential? The experimental data are very small, and it is hard to judge the relevance of the approach: only ablation studies are presented, and no other approaches/baselines are considered. I do not see how the approach can be generalized to other similar applications with multimodal data (audio combined with environmental time series). I am sure that the subject is interesting and important, but I do not think that ICLR is the right conference for this paper.

In addition, the experimental part is hard to follow: the outputs of the model and the human labeling information are not clearly stated. What are the different disease types? It is the same with disease status. The disease severity seems to be discrete (bottom of page 3), but a continuous Huber loss is used. The results are also analyzed very quickly: Fig. 3 gives the impression that disease severity is always underestimated when the severity is high, but no analysis is provided. In summary, I cannot state whether the problem that the authors address is a hard one, or whether simpler models could achieve the same results.

Minor remark: add color to Figures 1-4 (not black vs. gray) for more visibility.

---

The paper proposes audio-driven multi-sensory modeling for predicting beehive strengths. The presentation is very clear: it describes what the task is, the data sources, their limitations, etc., and the proposed model based on GPN makes sense, although it seems the GPN may not outperform a baseline MLP (from Table 1). Overall, this paper is a demonstration of utilizing machine learning methods and applying them in a plug-in-and-test fashion to problems in natural science. Keeping the nature of the venue in mind, my main concern is the relevance and appropriateness of the technical content of this paper to ICLR, i.e., the authors chose the wrong venue.
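As a concrete reading of the two-module design described in the second review (an audio autoencoder plus an MLP head on the latent code and environmental features), here is a minimal sketch; all layer sizes and the three-dimensional output head are hypothetical simplifications:

```python
import torch
import torch.nn as nn

class BeehivePredictor(nn.Module):
    """Sketch: an autoencoder embeds the audio spectrogram, and an MLP
    maps the latent code plus the environmental sensor vector to the
    three targets (frames, disease severity, disease type)."""
    def __init__(self, spec_dim, env_dim, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(spec_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                 nn.Linear(256, spec_dim))
        self.head = nn.Sequential(nn.Linear(latent + env_dim, 64),
                                  nn.ReLU(), nn.Linear(64, 3))

    def forward(self, spec, env):
        z = self.enc(spec)
        recon = self.dec(z)   # reconstruction loss uses unlabeled data
        preds = self.head(torch.cat([z, env], dim=-1))
        return recon, preds
```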
### Summary:
All reviewers and myself believe that ICLR may not be the right venue for this paper; hence my recommendation is to reject it. As a brief summary, I highlight below some pros and cons that arose during the review and meta-review processes.

Pros: important domain, but out of scope for ICLR; collection of sensor and environmental data, which may be potentially hard to collect.

Cons: not the appropriate venue; lack of machine learning novelty; potential lack of generalization of the proposed approach; experimental part is hard to follow; not very informative figures.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
1. The paper introduces the idea of some existing handcrafted features into the deep learning framework, which is a smart way of building lightweight convolutional neural networks.
2. I have noticed that the binary patterns used in the paper are trainable, which means that these binary patterns can be seen as learned convolution filters with extremely low space and computational complexity; thus, the proposed method can also be recognized as a kind of binary network.
3. The baseline BCNN has a different architecture from the network using the proposed method; thus, the comparisons shown in Table 3 and Table 4 are somewhat unfair.
4. The capability of the proposed method is only verified on character recognition datasets. Can it be easily used for other tasks, such as face recognition or object detection, on some relatively large datasets?

---

In this work, a neural network that uses local binary patterns instead of kernel convolutions is introduced. Using binary patterns has two advantages: (a) it reduces the network definition to a set of binary patterns, which requires much less storage than the floating-point descriptions of the kernel weights used in CNNs, and (b) it allows for fast implementations relying only on logical operations, which are particularly fast on dedicated hardware.

This work is mostly descriptive of a proposed technique, with no particular theoretical performance guarantees, so its value hinges mostly on its practical performance on real data. In that sense, its evaluation is relatively limited, since only figures for MNIST and SVHN are provided. A list of additional datasets is given in Table 5, but only the performance metric is listed, which is meaningless if it is not accompanied by figures for size, latency, and speedup. The only takeaway from the additional datasets is that the proposed LBPNet can match or outperform a weak CNN baseline, but we do not know if the latter achieves state-of-the-art performance (previous figures for the baseline CNN suggest it does not), and we do not know if there is a significant gain in speed or size.

Regarding MNIST and SVHN, which are tested in some more detail: again, we are interested in the performance/speed (or size) trade-off, and it is unclear that the current proposal is superior. The baseline CNN does not achieve state-of-the-art performance, particularly on SVHN, for which the state of the art is 1.7% while the baseline CNN achieves 6.8%. For SVHN, BCNN has a much better performance/speed trade-off than the baseline, since it is both faster and higher performing; the proposed method, LBPNet, has much higher speed but lower performance than BCNN. It is unclear how the speeds of LBPNet and BCNN would compare if we were to match their performance; for this reason, it is unclear to me that LBPNet is superior to BCNN on SVHN. Also, the numbers in boldface are confusing: aren't they just incorrect for both the latency and error on MNIST, and likewise for the latency on SVHN?

The description of the approach is reasonably clear, and clarifying diagrams are provided. The backpropagation section seems a bit superficial and could be improved: for instance, backpropagation is computed with respect to the binary sampling points as if these were continuous, but they were defined as discrete before. The appendix contains a bit more detail, where it seems that backpropagation is alternated with rounding; it is not justified why this is a valid gradient descent algorithm. Also, how the scaling k of the tanh is set is not explained clearly: do you mean that with more sampling points, k should be larger to keep the outputs of the approximate comparison operator close to 0 and 1?

Minor: what exactly in this method makes it specific to character recognition? Since you are trying to capture both high-level and low-level frequencies, it seems you would be capturing all the relevant information. SVHN data are color images with objects (digits) in them; what is the reason that makes other objects not detectable with this approach? English errors are pervasive throughout the paper; a non-exhaustive list: in Fig. 4b, "x2" should be "y2"; "particuarly"; "to a binary digits"; "in most case"; "0.5 possibility"; "please refer to sec forward propatation".

---

This paper proposes LBPNet for character recognition, which introduces LBP feature extraction into deep learning. Personally, I think this idea is interesting for improving the efficiency of CNNs, as LBP has traditionally demonstrated good performance and efficiency in some vision tasks, such as face recognition and pedestrian detection. However, I have the following concerns about the paper:

1. Calculation/implementation of Eq. 4: I do not quite understand how it is derived and how Eq. 3 is used in the calculation. I suggest the authors explain more details, as this is the key to the implementation of the LBP layers.
2. Experiments on the effects of several factors on performance are missing: (1) the random projection map in Fig. 5, (2) k in Eqs. 2 and 3, and (3) the order of images for computing the RHS of Eq. 3. To better demonstrate LBPNet, I suggest adding such experiments, plus a training/testing behavior comparison of the different networks.
3. Does this network still work when much deeper?
4. Data: the datasets used in the experiments are all well aligned. This makes me feel that the RHS of Eq. 3 makes sense because it captures the spatial difference among data, like temporal differences in videos. How will the network behave on a dataset that is not well aligned, like the affNIST dataset?
5. How will this network behave for applications such as face recognition or pedestrian detection, where LBP is traditionally applied?
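To make the discussion of the approximate comparison operator concrete, here is a minimal sketch of a soft local binary pattern with the hard step replaced by a scaled tanh, as the second review describes; the sampling scheme and the scaling constant k are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def soft_lbp(patch, center, offsets, k=10.0):
    """Soft LBP code around a center pixel: each sampling offset is
    compared against the center, with the hard step replaced by a
    scaled tanh so that gradients can flow to the sampling points."""
    cy, cx = center
    code = 0.0
    for i, (dy, dx) in enumerate(offsets):
        diff = patch[cy + dy, cx + dx] - patch[cy, cx]
        soft_bit = 0.5 * (np.tanh(k * diff) + 1.0)  # ~1 if neighbor > center
        code += soft_bit * (2 ** i)                  # binary positional weight
    return code
```

The larger k is, the closer the soft bits are to 0 and 1, which is exactly the trade-off the reviewer asks the authors to explain.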
### Summary:
This paper proposes LBPNet for character recognition, which introduces LBP feature extraction into deep learning. The reviewers are confused by the implementation and are not convinced by the experiments. Even the only reviewer giving a score of 6 is concerned: the paper is empirically weak, the practical advantage with respect to the literature is unclear, and evaluating only on MNIST/SVHN, etc., is not convincing enough to demonstrate the effectiveness of the proposed method.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
This paper presents an approach to infer shape programs given 3-D models. The programs include placing and arranging predefined primitives in layouts and can be written as programs over a domain-specific language (DSL). The architecture consists of a recurrent network that encodes a 3-D shape, represented as a voxel grid, and outputs the instructions using an LSTM decoder. The generation is two-step: the first step predicts a program id, and the second step predicts instructions within the program id (this aspect wasn't completely clear to me; see the questions below). A second module that renders the program to 3-D is also implemented as a neural network, in order to optimize the model parameters in an end-to-end manner by minimizing a reconstruction loss. The method is evaluated on 3-D shape reconstruction tasks for the chairs and tables categories of the ShapeNet dataset. The approach compares favorably to Tulsiani et al., which considers a shape to be composed of a fixed number of cuboids.

The paper is well written and investigates an important problem, but it is hard to tease out the contributions and the relative importance of the various steps in the paper:

1. Structure search vs. prediction: how does the model perform relative to a search-based approach for program generation, which would be slower but perhaps more accurate? The prediction model can be thought of as an amortized inference procedure for search problems; what advantages does the approach offer?
2. Choice of the DSL: compared to the CSG modeling instructions of Sharma et al., the proposed DSL is more targeted to the shape categories. While this restricts the space of programs (e.g., no intersection/subtraction operations are used), leading to better generation of chairs and tables, it also limits the range and generalization of the learned models to new categories. Some discussion of and comparison with the choice of DSL would be useful.
3. Is the neural renderer necessary? Wouldn't it be easier to simply use automatic differentiation to compute gradients of the rendering engine?
4. It is also not clear to me how having a differentiable renderer allows training in an end-to-end manner, since the output space is discrete and of variable length. In CSGNet (Sharma et al.), policy-gradient techniques were used to optimize the LSTM parameters. The details of the guided adaptation were unclear to me (Section 4.3).
5. Is the neural renderer reliable? It is not clear whether the neural renderer can provide accurate gradients when the generated programs are incorrect, since the model is trained on clean samples. In practice this means that the encoder has to be initialized well. Since the renderer is also learned, would it generalize to new programs within the same DSL but with a different distribution over primitives, e.g., a set of tables that have many more legs? Some visualizations of the generated shapes from execution traces could be added, sampling programs from within and outside the program distributions used for training.
6. All the above points give the impression that the choice of DSL and careful initialization are important to get the model to work. Some discussion of how robust the model is to these choices would be useful; in other words, how meaningful is the generalization from the supervised training set of template chairs and tables?
7. Missing baselines: the model is trained on 100,000 chairs and tables with full supervision. What is the performance of a nearest-neighbor prediction algorithm? This is an important baseline that is missing. A comparison with a simplified CSGNet with shape primitives and union operations is also important. Tulsiani et al. consider unions but constrain all instances to have the same number of primitives, which can lead to poor reconstruction results; furthermore, the training sets are likely different, making the evaluations unclear. I suggest training the following decoders on the same training set used in this approach: (1) a fixed set of cuboids (e.g., Tulsiani et al.), (2) a recurrent decoder with cuboids, (3) CSGNet (different primitives and operations), and (4) a nearest-neighbor predictor with the Hamming or Chamfer distance metric.

---

This paper introduces a high-level semantic description for 3-D shapes, given by a so-called shape program. Each shape program consists of several program statements; a statement can be either draw, which describes a shape primitive as well as its geometric and semantic attributes, or for, which contains a subprogram and parameters specifying how the subprogram should be repeatedly executed. The shape program is connected with an input through two networks: a program generator (encoder) and a neural program executor (decoder), both implemented using LSTMs. The key ML contribution is on the decoder, which leverages a parametrization that makes the decoder differentiable. The major advantage of the proposed technique is that it does not need the shape program to be specified in advance: in the same spirit as training an autoencoder, it can be learned in a semi-supervised manner. In practice, however, one has to start with a reasonably good initial program; in the paper, this initial program was learned from synthetic data.

The paper presents many experimental results, including evaluation on synthetic datasets, guided adaptation on ShapeNet, analysis of stability, connectivity measurement and generalization, and an application to shape completion. The presented evaluation, from the perspective of the proposed experiments, is satisfactory. On the downside, this paper does not present any baseline evaluation, partly due to the fact that the proposed problem is new; in fact, existing inverse procedural modeling techniques require the users to specify the program. However, the proposed approach could be even more convincing if it evaluated the performance of semantic understanding; for example, would it be possible to evaluate the performance on shape segmentation?

Additional comments:
1. How important is the initial program?
2. The interactions among shape parts usually form a graph, not necessarily a hierarchy; this should be discussed.
3. What is the difference between 3-D shapes and 3-D scenes? Does this approach require a front/upright orientation?
4. It would be interesting to visualize/analyze the intermediate representations of the neural shape generator; does it encode meaningful distributions among shape parts?

Overall, it is a good paper and I would like to see it at ICLR 2019.

---

This paper presents a methodology to infer shape programs that can describe 3-D objects. The key intuition of the shape programs is to integrate bottom-up, low-level feature recognition with symbolic, high-level program structure, which allows the shape programs to capture both the high-level structure and the low-level geometry of the shapes. The paper proposes a domain-specific language for 3-D shapes that consists of for loops for capturing high-level regularity, and associates objects with both their geometric and semantic attributes. It then proposes an end-to-end differentiable architecture to learn such 3-D programs from shapes using an interesting self-supervised mechanism: the neural program generator proposes a program in the DSL that is executed by a neural program execution module to render the corresponding output shape, which is then compared with the original shape, and the difference (loss) is backpropagated to improve the program distribution. The technique is evaluated on both synthetic and ShapeNet tasks and leads to significant improvements compared to Tulsiani et al., which embeds a prior structure for learning shape representations as a composition of primitive abstractions. In addition, the technique is also paired with MarrNet to allow for better 3-D reconstruction from 2-D images.

Overall, this paper presents an elegant idea to describe 3-D shapes as DSL programs that capture both geometric and spatial abstractions and, at the same time, capture regularities using loops. CSGNet (Sharma et al., 2018) also uses programs to describe 2-D and 3-D shapes, but the DSL used here is richer, as it captures more high-level regularities using loops as well as semantic relationships such as top, support, etc. The idea of training a neural program executor and using it for self-supervised training is quite elegant. I also liked the idea of guided adaptation to make the program generator generalize beyond the synthetic template programs. Finally, the results show impressive improvements and generalization capability of the model.

Can the authors comment on some notion of completeness of the proposed DSL? In other words, is this the only set of operators, shapes, and semantics needed to represent all ShapeNet objects? It might also be interesting to comment more on how this particular DSL was derived. Some of the semantic operators, such as support, locker, etc., look overly specific to chairs and tables; is there a way to possibly learn such abstractions automatically? What is the total search space of programs in this DSL, and how would a naive random search perform in this synthesis task?

I also particularly liked the decomposition of programs into draw and compound statements and the corresponding two-step decomposition of the program generator (Block LSTM and Step LSTM). At inference time, does the model use some form of beam search to sample block programs, or do the results correspond to the top-1 prediction? Would it be possible to compare the results to the technique presented in CSGNet (Sharma et al., 2018)? There are some key differences in terms of using lower-level DSL primitives and using REINFORCE for training the program generator, but it would be good to measure how much having higher-level primitives improves the results.

I presume the neural program executor module was trained using a manually written shape program interpreter: how difficult is it to write such an interpreter? Also, how easy/difficult is it to extend the DSL with a new semantic operator and then write the corresponding interpreter extension?

Minor typos: page 3, "consists a variable" should be "consists of a variable"; page 5, "we executes" should be "we execute"; page 6, "synthetica dataset" should be "synthetic dataset".
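For concreteness, a shape program in the style of the DSL described in these reviews might look like the following sketch; the exact syntax, primitive names, and semantic attributes are illustrative guesses, not the paper's actual DSL:

```python
def Draw(primitive, position, size, semantics, relation=None):
    # Stub standing in for the DSL's draw statement; a real executor
    # would instantiate the primitive at `position` with `size`.
    print(primitive, position, size, semantics, relation)

W, D, h, t = 0.8, 0.6, 0.7, 0.05  # hypothetical table dimensions
Draw("cuboid", position=(0, 0, h), size=(W, D, t), semantics="top")
for i in range(2):                 # the DSL's `for` statement: repeat
    for j in range(2):             # the leg subprogram four times
        Draw("cuboid",
             position=(i * W - W / 2, j * D - D / 2, 0),
             size=(t, t, h),
             semantics="leg", relation="support(top)")
```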
### Summary:
this paper presents a method whereby a model learns to describe 3d shapes as programs which generate said shapes beyond introducing some new techniques in neural program synthesis through the use of loops this method also produces disentangled representations of the shapes by deconstructing them into the program that produced them thereby introducing an interesting and useful level of abstraction that could be exploited by models agents and other learning algorithms despite some slightly aggressive anonymous comments by a third party the reviewers agree that this paper is solid and publishable and i have no qualms in recommending it for inclusion in the proceedings
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper learns binary basis masks for a few tasks of the task sequence later linear combinations of the basis masks can be used as a mask for the new task the parameters learned for combining the basis masks are significantly fewer than those needed to learn a novel mask for each new task sequence the paper also proposes homogeneous mask learning for most of the datasets we dont have the flexibility to learn an enormous basis mask and model performance highly depends on the number of learned basis tasks the proposed model is evaluated over the mnist cifar100 and splitimagenet datasets with smaller architectures comment 1 the proposed approach is parameter efficient and requires very few parameters for each novel task assuming that basis masks are learned 2 the paper follows the idea of supsup and the difference is how to combine the previously learned masks for the novel task also the task prediction idea is the same as supsup another contribution is learning homogeneous masks which may be helpful in many scenarios but the basic idea is the same overall it seems that the paper has limited contribution compared to supsup 3 the models are evaluated over a smaller architecture ie lenet and its challenging to evaluate the proposed model using only the given experiment for the larger architecture only parameter efficiency is reported table 2 i would suggest the authors please report the results for the resnet18 architecture over the cifar100 and miniimagenet datasets 4 the results in fig 3 are confusing in the supsup paper fig 3 the result for permutedmnist is 94 gn scenario while this paper reports a supsup result of 80 how did the authors get this result compared to supsup the performance of the proposed model seems much lower a similar pattern is observed for the cifar100 dataset in fig 4 performance is much poorer and for the gn scenario the model only performs well for a very small task sequence 5 from fig 4 right and fig 5 what architecture is used for the imagenet dataset experiment 6 the performance of the proposed model highly depends on the number of learned basis tasks sometimes we have only a 510 task sequence and if for the small task sequence we learn a 50 or 100 task sequence it is much more costly also we can observe that the homogeneous setting model does not perform well therefore for a limited task sequence the model may not be helpful 7 the results are only compared with the supsup paper i would suggest the authors please consider the recent expansionbased models abc for the comparison a efficient feature transformations for discriminative and generative continual learning cvpr21 b ternary feature masks zeroforgetting for taskincremental learning cvprw21 c continual learning using a bayesian nonparametric dictionary of weight factors aistats21 the paper is lagging in experimental results with recent baselines also the contribution is limited compared to supsup docsepthe paper studies continual learning and in that context the important and prominent problem of catastrophic forgetting similar to related work the paper achieves this through learning taskdependent masks on the neural network representation effectively generating different subnetworks for different tasks the main contribution wrt related work is that the masks are constrained to a subspace which is spanned by a basis of masks the experiments show some reduction in the number of required parameters over a single baseline method the paper is in general wellwritten and structured the proposed method follows the popular approach of
alleviating forgetting through masking neurons for different tasks as mentioned by the authors this has been explored in prior work such as packnet and piggyback supsup related work like 1 and 2 is not discussed the main contribution over prior work is that this paper introduces a subspace of masks that is spanned by a mask basis which is referred to as impression my concern is that the contribution wrt prior works is rather slim in the experimental evaluation the paper only compares to a single baseline supsup and ignores the results of all other standard baseline methods mentioned in the related work including replay and regularizationbased methods which achieve significantly higher performance on the evaluated benchmarks eg fromp 3 achieves 90 on pmnist the contribution wrt supsup remains unclear beyond a reduction of the number of parameters which comes at the cost of a moderate loss in performance i would recommend the authors to motivate more clearly why the reduction in parameters is a problem of critical importance 1 sangwon jung hongjoon ahn sungmin cha and taesup moon continual learning with nodeimportance based adaptive group sparse regularization 2020 2 chungkuk yoo bumsoo kang and minsik cho snow subscribing to knowledge via channel pooling for transfer lifelong learning of convolutional neural networks 2019 3 pan p swaroop s immer a eschenhagen r turner r e and khan m e continual deep learning by functional regularisation of memorable past 2021 overall the contribution over related work seems limited to me but i am open to feedback from the authors if my initial perception is not correct docsepin the current paper the authors propose a novel way to adapt an existing continual learning algorithm leveraging principles from transfer learning more concretely they learn an initial set of masks or impressions from a small number of basis tasks and they afterwards use linear combinations of these masks in the learning process of new tasks therefore their approach is able to generalize to new tasks allowing for scalable and parameterefficient continual learning with much lower parameter overhead than existing methods positive aspects the proposed approach is scalable to a large number of tasks with a smaller parameter overhead than existing methods the paper is in general clearly written the related work section covers most of the relevant papers in the field negative aspects the scientific novelty of the paper is limited it is an incremental work relying heavily on wortsman et al 2020 the experimental validation is limited and not convincing the paper is not compared with any other stateoftheart methods please find below some of my concerns 1 first of all please state clearly in the introduction in which aspects your paper is different from wortsman et al 2020 2 in page 3 you mention for the first time the term gnsetting what is the meaning of gn 3 please clarify the following statement section 3 it then generates the mask m by setting the top s fraction of scores in s to 1 and the rest to 0 how is this fraction chosen heuristically or do you use any criteria 4 regarding the experimental settings state clearly for each dataset how do you define the tasks how many basis tasks do you have for each dataset how come for permutedmnist you have 250 learned tasks when the maximum could be 784 table 1 what is the relationship between the number of masks and the number of basis tasks do you have one mask per basis task 5 plots 2 3 and 4 are not clear 6 figures 3 and 4 minjmj 25 basis tasks how did
you choose 25 why do you use this criterion for basis tasks 7 please use a standard evaluation methodology in order to be able to interpret your results 8 compare your approach against at least the following related methods packnet piggyback wortsman et al 2020 ternary feature masks tfm masana et al 2021 m masana t tuytelaars and j van de weijer ternary feature masks zeroforgetting for taskincremental learning proc of cvprw 2021 workshop on continual learning docsepthe paper describes an approach to continual learning with randomly initialized deep networks the main idea is to extend supsup wortsman 2020 and apply linear combinations of binary masks for efficiency the paper also considers learning from multiple instances of the same task the evaluation results indicate reasonable performance for the reduced parameters on common benchmarks strengths the paper clearly describes the main technical proposal of applying linear combinations of masks to randomly initialized networks the approach seems to efficiently scale to a larger number of tasks in the given problem setup together with the new homogeneous impression approach the paper seems to describe unique and novel technical ideas to improve on supsup wortsman 2020 weaknesses although the approach seems to be novel there are several weaknesses in the current paper the first is the performance degradation compared to the supsup baseline in the cifar and imagenet benchmarks although sec 5 discusses the limitation when there is a small number of tasks fig 4 seems to indicate a serious performance disadvantage compared to supsup which does not look reasonable considering that mnist variants are artificial benchmarks and not practical in any application the performance degradation on cifar and imagenet limits the significance of the work another missing comparison is the hopfield network in supsup as discussed in sec 2 i wonder how strong the catastrophic forgetting is if we apply one of the discussed approaches in sec 2 ie regularizationbased replay or parameter isolation methods as this paper only concerns the comparison to supsup fairly incremental i feel it is unclear how impactful this work is in the broader continual learning context regarding the task setup which also applies to wortsman 2020 i feel it is impractical to only consider fixed lway classification from the same or similar domain at least the setup should consider a variable number of categories per task other what are the solid and dashed lines within the boxes in figs 2 3 4 and 5 median and mean while the proposed idea seems novel and the results seem to indicate good efficiency in terms of model footprint the results show performance degradation relative to the baseline and i feel the submission has room for improvement for acceptance due to the narrow focus on the comparison to supsup as pointed out in the main review overall i do not think the paper reaches the acceptance threshold
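a minimal sketch of the basismask idea these reviews discuss a new task learns only the mixing coefficients over frozen basis score maps all shapes names and the topfraction binarization rule here are assumptions for illustration and training through the binarization would additionally need a straightthrough estimator as in edgepopupstyle methods

```python
import torch

torch.manual_seed(0)

d_in, d_out, n_basis = 100, 50, 10
sparsity = 0.5  # fraction of weights kept active (assumed hyperparameter)

W = torch.randn(d_out, d_in)                       # frozen random backbone weights
basis_scores = torch.randn(n_basis, d_out, d_in)   # scores learned on basis tasks
alpha = torch.randn(n_basis, requires_grad=True)   # only n_basis params per new task

def task_mask(alpha):
    # combine the basis score maps, then keep the top-s fraction as a binary mask
    scores = torch.einsum('b,bij->ij', alpha, basis_scores)
    k = int(sparsity * scores.numel())
    thresh = scores.flatten().kthvalue(scores.numel() - k).values
    return (scores > thresh).float()

x = torch.randn(32, d_in)
out = x @ (W * task_mask(alpha)).t()  # masked forward pass for the new task
```

the point of the sketch is the parameter count only the n_basis entries of alpha are stored per new task instead of a full score map per task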
### Summary:
this paper presents an approach impresslearn to continual learning using the idea of taskspecific masks the idea builds upon another idea supsup wortsman 2020 which uses a backbone network shared by all the tasks and binary taskspecific masks however the number of parameters for an approach like supsup can become excessively large when the number of tasks is very large this paper presents a solution by having a small number of basis masks and learning a weighted combination of these basis masks to use as the taskspecific mask for each task the experimental results show that impresslearn yields significant parameter savings as compared to supsup there were several concerns shared by all the reviewers such as 1 limited novelty as compared to supsup and 2 limited experimental evaluation and not having enough baselines from my own reading of the paper i largely agree with the assessment of the other reviewers the authors responded to the original reviews and acknowledged some of the concerns raised by the reviewers the reviewers read the authors response but their assessment has remained unchanged the basic motivation and the idea is nice but offers limited novelty especially as compared to supsup if the authors could improve the experimental evaluation more baselines larger datasets and networks etc it would be a much stronger paper however in its current shape i as well as the other reviewers do not think that the paper is ready for publication
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper discusses a model for learning the architecture of convolutional networks starting from a fully connected graph the idea is to learn the adjacency graph of the model together with the weights of the networks strengths the idea of thinking outofthebox by imagining new architectures is very attractive and interesting weaknesses the actual advantages of the model do not look apparent given the results in the evaluation section the theoretical foundation of this work is unclear for example it is unclear how the proposed solution will work in practice in terms of backpropagation the performance results show that the proposed method is characterized by performance close to that of existing methods in general i really welcome this type of work nonconventional experimental and quite radical in terms of approach however unfortunately the authors do not provide a convincing description of their approach and it does not appear that the method is developed on a sufficiently strong theoretical basis for example it is unclear how backpropagation works in these circumstances when you dont have a stacked architecture the choice of the thresholds and the actual learning of the adjacency matrix are not described in sufficient detail the actual computational complexity and the tradeoffs in terms of computational complexity versus accuracy are unclear in the experimental results the actual performance of the method appears very similar to the other methods in some cases they might be the same since the confidence intervals are overlapping questions what is the theoretical foundation of the method given the fully connected nature of the graph how does backpropagation work in this case in a sense in fact you have a dag how do you deal with cycles when do you stop the backpropagation if you do not have a stacked architecture could you please provide the confidence intervals for all the results you presented in fact it seems that the values related to your approach are better than existing techniques in some cases but they look very close is the computational complexity justified also note you probably need a larger number of samples to learn the additional adjacency graph what is the tradeoff given the gain in terms of performance the actual additional complexity might not be completely justified docsepthis work proposes a novel method called dynamic graph network dgnet for optimizing the architecture of a neural network building on the previous work introduced by xie et al 2019 the authors propose to consider the network as a complete directed acyclic graph dag then the edge weights of the dag are generated dynamically for each input of the network at each node of the network the authors introduce an extra module called a router to estimate the edge weights as a function of the input features the proposed method addresses the problem of optimizing the connectivity of neural networks in an interesting way where the architecture is not fixed but depends on the input instances moreover i think that a strong advantage of the proposed technique is that the optimization of the architecture comes with a negligible extra cost both in terms of parameters and computational complexity overall the paper is well written and easy to follow my only serious concern is the degree of novelty with respect to yuan et al 2020 which was published at eccv 2020 the main difference seems to be that in the proposed method the graph is dynamic ie it depends on the input instances instead in yuan et al 2020 the graph is learned
but fixed for all the input samples in the experimental results i would have expected a deeper ablation study on the importance of the dynamic graph since this is the main contribution of the paper instead there is only one experiment in the appendix table 6 therefore the impact of the dynamic graph on the performance of the proposed method is not clear and it is difficult to evaluate the importance of this contribution other comments in sec 31 the authors say that the resnet architecture can be represented with a dag where the set of edges is defined as e = {(i, j) : j in {i-1, i-2}} this is not true if you unroll the definition of the resnet architecture as done in eq 46 in 1 and compare it with what you obtain using eq 1 it is easy to see that the two resulting functions are different the definition of the convolutional block is not clear is it a reluconvbn triplet as in xie et al 2019 the use of a dag with edge weights for representing the architecture is not novel it was already introduced in xie et al 2019 in sec 43 table 5 shows a comparison with stateoftheart nasbased methods dgnet is implemented using regnetx and regnety as the basic architecture however in table 5 the performance of the basic architectures without the dynamic graph optimization is not reported this would be useful to evaluate the gain provided by the optimization of the architecture 1 veit et al residual networks behave like ensembles of relatively shallow networks nips 2016 after the discussion period i thank the authors for their responses and for updating the paper the authors have added a deeper analysis on the impact of the dynamic graph however i still believe that the novelty of the paper is a bit limited i have slightly increased my score to 6 docseppros interesting extension of randwire to learn better architectures good experimental results cons the idea could be seen as a minor modification of randwire doubts about memory requirements the paper proposes an improvement over the idea of randomly wired architectures 1 by exploiting a complete graph where edges are weighted by dynamically computed weights dynamically computing edge weights allows the network to optimize its topology while this idea could be seen as a relatively small modification of 1 it is still very interesting and the results prove its effectiveness in commonly used tasks it would have been interesting to show an analysis of what the method is actually learning eg if a particular topology emerges from the adjacency matrix in order to provide more insights on why the method is effective i also have a concern about memory requirements that i would like the authors to address it is true that dgnet does not require significantly more flops or trainable parameters however you use a complete graph which means that it requires storing in memory o(l^2) activation tensors ie the activation of each convolutional layer weighted by each edge weight this could become a limitation for larger networks note that this is also a problem for 1 but in that case the sparse connectivity mitigates the issue 1 saining xie alexander kirillov ross b girshick and kaiming he exploring randomly wired neural networks for image recognition docsepthis paper presents a novel approach dgnet to generate a dynamic structure for the neural network by learning to predict and select the edges between computational nodes in an endtoend manner the method is based on a gating mechanism applied on top of a fully connected graph similar to the connectivity in a densenet designed to control the quantity of information
received from each previous layer the experiments show consistent improvement in image classification imagenet and object detection coco pros i enjoy the interpretation of the weighted edges as a dynamic architecture able not only to address a more general class of models but also to adapt to each input accordingly using a secondorder approach the method is simple yet effective with the results on both tasks showing improvement at a low computational cost the paper is clearly written and easy to follow and understand cons in my opinion two approaches are very related to this paper the highway network 1 and the ablations performed in the resnet paper 2 in both cases the intuition is a little bit different improve the expressiveness by replacing the residual connection with something more powerful an inputdependent gate while i agree that this paper offers a more general framework working on a densely connected graph instead of a limited subset of residual edges plus some other minor technical differences map gates vs scalar gates the connection to those papers should be clearly discussed in the related work also it is interesting to notice that in 2 this kind of gating decreases the performance compared to the simpler summation contrary to what we see in dgnet this idea suggests an additional ablation study constant gating instead of dynamic ones by setting the alpha scalar equal to 1 this is related to the static experiment in the supplementary material but more closely to the densenet architecture and it would clarify that the improvements come indeed from the learned structure and not from the ability to combine features from different levels it is not clear from the paper how the resnet architecture is adapted to the dynamic setup i guess the graph is fully connected on each stage as long as the spatial dimension is preserved but since the paragraph about the multistage architectures comes only in section 31 that speaks about classical architectures not the dgnet it is not very clear what densely connected means in the experiments is any particular initialisation scheme used for the routers could it be important for the optimization to start with some parameters that lead to the original version of the model initialize the parameters such that the residual connections start with alpha = 1 and nonexistent connections are close to 0 for example in 3 all the skip connections are initialised such that the extra module is ignored in the first iterations minors and observations even if the focus of this work is the dynamics of the edges it would be interesting to observe if the architecture learns to use all the computational nodes and only connect them differently than what we used to do or if it rather prefers to drop nodes thus using a lighter model in the end maybe a regularization term that encourages that would be useful for computational efficiency a statistical analysis in this direction could reveal interesting ideas the formatting of table 1 is hard to follow at least a vertical line between baselines and dgnet would improve the readability 1 highway networks srivastava et al 2 identity mappings in deep residual networks he et al eccv 2016 3 nonlocal neural networks wang et al cvpr 2018 my main concern regards the connection to previous works that use gating to aggregate information from previous layers which i see as very related to the current work however the proposed method is more general and seeing these gates as a routing mechanism that allows the model to learn its own structure in
a differentiable way is interesting and could lead to further more advanced ways of doing this so with a clearer discussion of those previous methods and how the current approach differs from them i lean towards acceptance update i thank the authors for their response and for updating the manuscript according to our questions i agree with the other reviewers that the novelty of this paper is quite limited however the idea of using a dynamic learned graph from a general large search space of models is interesting provides good empirical results on both image classification and object detection and the authors provide ablation for the new components that motivate the paper i maintain my initial score 6
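to illustrate the routerbased dynamic wiring discussed in these reviews here is a minimal pytorch sketch of one node that aggregates its predecessors with inputdependent edge weights the router design global pooling plus a linear layer with a sigmoid is an assumption for illustration not necessarily the authors exact module

```python
import torch
import torch.nn as nn

class Router(nn.Module):
    # tiny gating module: produces one scalar weight per incoming edge
    # from a globally pooled view of the aggregated input (an assumption)
    def __init__(self, channels, n_edges):
        super().__init__()
        self.fc = nn.Linear(channels, n_edges)

    def forward(self, x):                      # x: (B, C, H, W)
        pooled = x.mean(dim=(2, 3))            # global average pooling
        return torch.sigmoid(self.fc(pooled))  # edge weights in [0, 1]

class DynamicNode(nn.Module):
    # aggregates features from all predecessor nodes with input-dependent
    # edge weights, then applies a conv block (relu-conv-bn as in randwire)
    def __init__(self, channels, n_preds):
        super().__init__()
        self.router = Router(channels, n_preds)
        self.block = nn.Sequential(
            nn.ReLU(), nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels))

    def forward(self, preds):                  # list of (B, C, H, W) tensors
        stacked = torch.stack(preds, dim=1)    # (B, E, C, H, W)
        ctx = stacked.sum(dim=1)               # context seen by the router
        alpha = self.router(ctx)               # (B, E) dynamic edge weights
        agg = (alpha[:, :, None, None, None] * stacked).sum(dim=1)
        return self.block(agg)

node = DynamicNode(channels=16, n_preds=3)
feats = [torch.randn(2, 16, 8, 8) for _ in range(3)]
out = node(feats)                              # (2, 16, 8, 8)
```

the extra cost is one small linear layer per node which matches the reviews point that the topology optimization adds a negligible number of parameters while the o(l^2) stored activations remain the memory concern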
### Summary:
the idea presented in the paper is interesting and has caught the attention of the reviewers however there seems to be only tepid support for acceptance with a reviewer championing rejection there is little novelty in the approach but the empirical validation shows results that consistently improve over selected baselines i am afraid that more evaluations would be needed at this stage to consider this work for acceptance
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper analyzes the storage and recall of factual associations in gpt and shows evidence that these associations correspond to localized directlyeditable computations it utilizes a causal intervention to identify decisive neuron activations in a models factual predictions and finds that the middle mlp layers play an important role strengths this paper is clearly motivated and wellstructured in terms of writing the causal tracing analysis tool is powerful and has some interesting findings eg the essential role of mlp module computation at the middle layers when recalling a fact this paper proposes rankone model editing rome and the counterfact dataset to evaluate edits in language models on a standard benchmark and the proposed counterfactual one rome can achieve competitive results on standard benchmarks and better results on the specificity fluency and consistency metrics weaknesses the finding that mlp module computation at the middle layers plays an important role when recalling a fact is not surprising to me after all selfattention only linearly combines inputs and does not conduct a transformation while the mlp performs a nonlinear transformation in a transformer block during the pretraining stage hence its kind of intuitive to me to have these results rome seems to edit one fact at a time this is not scalable as usually we want to inject the knowledge of a corpus rather than a single fact further rome cannot take unstructured knowledge ie plain text which widely exists and is used during pretraining results on gptj are less useful than the standard finetuning baseline which casts doubts on how useful rome is when the model size is larger eg gpt3 175b the lack of human evaluation on generated text is a concern as an automatic evaluation metric may not correlate well with human judgments docsepthis paper studies how factual knowledge eg a tuple of subject relation and object is stored in autoregressive transformerbased lms more specifically the authors identify specific weights in the mlp that are responsible for the correct output by comparing forward computation results ie hidden states with clean and corrupted inputs causal tracing to verify this observation the authors propose a knowledge editing approach rome that modifies an mlp weight matrix by a rankone update causal tracing this method identifies specific hidden states that are more influential on the predicted object the authors use the gap between the probability of the object with the original and corrupted inputs as a proxy of the importance of hidden states based on this analysis mid layers at the last subject token have high values contributing more to the probability of the object rome rome is designed to change the model output ie the object given the subject and relation to do so this approach modifies the weight matrix in the mlp with a rankone update so that the modified network outputs the desired modified object this is done by choosing k a vector before the matrix and v a target vector where v requires optimization based on the desired object once an instancespecific k and v are computed the weight matrix is updated using a closed formula evaluation rome is evaluated on an existing benchmark zsre and a newly constructed dataset counterfact and compared against other knowledge editing approaches such as knowledgeeditor and mend in terms of efficacy is the object successfully edited generalization does it work on paraphrases as well and specificity are similar subjects kept unchanged strengths this paper is tackling a wellmotivated and very
important problem how lms store factual knowledge and how we can control it rome can be used for impactful applications such as resolving the temporal mismatch between lms and realworld knowledge the authors analyze their hypothesis thoroughly and propose an efficient way to modify model parameters the authors keep rome simple but show its effectiveness this paper is very wellwritten each part of this paper is clearly explained despite the complexity of the problem weaknesses my only concern about rome is scalability to find v for the modified object o this approach requires optimization for each test instance this might be a speed bottleneck when the data size is very large although modifying a weight matrix is a rankone update im not sure if this approach is considerably cheaper than other methods like mend mend uses a hypernetwork but once its trained inference is just forward computation the authors mentioned the major limitation of this study that it only focuses on relational knowledge eg kb triples studying other types of knowledge eg more complex knowledge about entities would be an interesting future direction docsepthis paper provides evidence that knowledge recall in autoregressive language models corresponds to localized computations within feedforward mlp layers at a range of middle layers and hence can be edited with a simple method that they name rome they compare rome to other existing modelediting approaches both 1 on a zeroshot relation extraction benchmark and 2 with respect to generalization and specificity and demonstrate strong results this paper offers very detailed empirical analysis and ablations that provide useful insights and it is written very clearly the work done here is both significant and original as the factual storage capabilities of autoregressive language models are not wellunderstood to the best of my knowledge i do not see a major weakness that would prevent acceptance the authors adequately addressed the limitations and potential negative societal impact of their work
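the closedform rankone edit the reviews describe can be sketched in a few lines here the key covariance c is a standin identity matrix and k and v are random placeholders in rome c is estimated from a large text corpus and v is found by a perfact optimization so this is an illustrative assumption not the papers exact procedure

```python
import torch

torch.manual_seed(0)
d_in, d_out = 64, 32

W = torch.randn(d_out, d_in)      # mlp projection weight to be edited
C = torch.eye(d_in)               # stand-in for the key covariance E[k k^T]

k = torch.randn(d_in)             # key vector for the edited subject
v = torch.randn(d_out)            # target value (found by optimization in rome)

# rank-one update so that the edited matrix maps k to v while minimally
# perturbing the response to other (covariance-typical) inputs
Ck = torch.linalg.solve(C, k)
W_new = W + torch.outer(v - W @ k, Ck) / (k @ Ck)

assert torch.allclose(W_new @ k, v, atol=1e-5)  # the edit maps k exactly to v
```

this makes the scalability concern above concrete the matrix update itself is cheap the perinstance cost lies in the optimization that produces v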
### Summary:
the paper proposes a method rome to analyze the storage and recall of factual knowledge in a largescale autoregressive language model and finds that such knowledge can be controlled by changing weights in the mlp layer the reviewers all agree the paper is wellmotivated and scientifically sound the area chair is also impressed with the thorough experimentation and the quality of the writing the main issue reviewer 1iypgugp pointed out is that the method is not scalable for practical knowledge editing as the method can only work on a perfact basis but the authors confirmed that the goal of this study is not to provide a practical tool but more to understand the inner workings of lms which is valuable on its own the authors have comprehensively addressed the reviewers points eg clarifying misunderstandings adding a human evaluation study and extra results on smaller models and comparing its strengths and limitations to other methods i would vote for acceptance
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper provides a very nice analysis of the nonsmooth l1 dictionary learning minimization in the case of orthogonal complete dictionaries and linearly sparse signals they utilize a subgradient method and prove a nontrivial convergence result the theory provided is solid and expands on the earlier works of sun et al for the nonsmooth case also interesting is the use of a covering number argument with the de metric a big plus of the method presented is that unlike previous methods the subgradientdescentbased scheme is independent of the initialization despite the solid theory developed the lack of numerical experiments reduces the quality of the paper additional experiments with random data to illustrate the theory would be beneficial and it would also be nice to find applications with real data in addition as mentioned in the abstract the authors suggest that the methods used in the paper may also aid in the analysis of shallow nonsmooth neural networks but they need to continue and elaborate with more explicit connections minor typos near the end of the paper and perhaps a few missing definitions and notations are also a small concern the paper is a very nice work and still seems significant nonetheless fixing the above will elevate the quality of the paper docsepthis paper studies the dictionary learning problem via a nonconvex constrained l1 minimization solved with a subgradient descent algorithm with random initialization they provide a nontrivial global convergence analysis for this problem the result is interesting in that it does not depend on the complicated initializations used in other methods the paper could be better if the authors could provide more details and results on numerical experiments this could be used to confirm the proved theoretical properties in practical algorithms docsepthis paper studies nonsmooth and nonconvex optimization and provides a global analysis for orthogonal dictionary learning the analysis is highly nontrivial compared with existing work also for dictionary learning nonconvex ell1 minimization is very important due to its robustness properties i am wondering how extendable this approach is to overcomplete dictionary learning it seems that an overcomplete dictionary would break the key observation that the rows of x are the sparsest vectors in the subspace is it possible to circumvent the difficulty of nonsmoothness using randomized smoothing and then apply the existing theory to the transformed objective my knowledge is limited but this seems to be a more natural thing to try first could the authors compare this naive approach with the one proposed in the paper another minor question is about the connection with training deep neural networks it seems that in practical training algorithms we often ignore the fact that relu is nonsmooth since it only has one nonsmooth point it affects the dynamics of sgd only with diminishing probability which makes subgradient descent seemingly unnecessary could the authors elaborate more on this connection docsepthis paper is a direct followup on the sunquwright nonconvex optimization view on the spielmanwangwright complete dictionary learning approach in the latter paper the idea is to simply realize that with y = ax x being an n x m sparse matrix and a an n x n rotation one has the property that for m large enough the rows of x will be the sparsest elements of the subspace in r^m generated by the rows of y this leads to a natural nonconvex optimization problem whose local optima are hopefully the rows of x this was proved in sww for very sparse x and then later improved in
sqw to the linear sparsity scenario the present paper refines this approach and obtains slightly better sample complexity by studying the most natural nonconvex problem ell1 regularization on the sphere i am not an expert on sqw so it is hard to evaluate how difficult it was to extend their approach to the nonsmooth case which seems to be the main issue with ell1 regularization compared to the surrogate loss of sqw overall i think this is a solid theoretical contribution at least from the point of view of nonsmooth nonconvex optimization i have some concerns about the model itself indeed complete dictionary learning seemed like an important first step in 2012 towards more general and realistic scenarios it is unclear to this reviewer whether the insights gained for this complete scenario are actually useful more generally docsepthe paper proposes a subgradient descent method to learn orthogonal square complete n x n dictionaries under l1 norm regularization the problem is interesting and relevant and the paper or at least the first part is clear the most interesting property is that the solution does not depend on the dictionary initialization unlike many other competing methods the experiments section is disappointingly short could the authors play with real data how does sparsity affect the results how does it change with different sample complexities also it would be nice to have a final conclusion section i think the paper contains interesting material but overall it gives the impression that the authors rushed to submit the paper before the deadline
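a minimal numpy sketch of the riemannian subgradient method the reviews describe minimizing q -> (1/m)||q^T Y||_1 over the unit sphere with a diminishing step size the problem sizes sparsity level step schedule and random initialization are illustrative assumptions

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, theta = 30, 3000, 0.1            # dictionary size, samples, sparsity

A = np.linalg.qr(rng.standard_normal((n, n)))[0]        # orthogonal dictionary
X = rng.standard_normal((n, m)) * (rng.random((n, m)) < theta)
Y = A @ X                                               # observed data

# recover one row of X by minimizing q -> (1/m)||q^T Y||_1 on the unit sphere
q = rng.standard_normal(n)
q /= np.linalg.norm(q)
for t in range(500):
    g = Y @ np.sign(Y.T @ q) / m                        # subgradient of the loss
    g_tan = g - (g @ q) * q                             # project to tangent space
    q -= (0.1 / np.sqrt(t + 1)) * g_tan                 # diminishing step size
    q /= np.linalg.norm(q)                              # retract to the sphere

# q should align with one column of A, recovering one row of X as q^T Y
print(np.max(np.abs(A.T @ q)))                          # close to 1 on success
```

note the random initialization of q this is exactly the initializationindependence property the reviewers highlight as the main practical appeal of the analysis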
### Summary:
this paper studies nonsmooth and nonconvex optimization and provides a global analysis for orthogonal dictionary learning the referees indicate that the analysis is highly nontrivial compared with existing work the experiments fall a bit short and the relation to the loss landscape of neural networks could be described more clearly the reviewers pointed out that the experiments section was too short the revision included a few more experiments the paper has a theoretical focus and scores high ratings there the confidence levels of the reviewers are relatively moderate with only one confident reviewer however all five reviewers regard this paper positively in particular the confident reviewer
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper proposes a new privacypreserving algorithm and proves convergence guarantees for the existing differentially private normalized sgd algorithm and the variant proposed in the paper the new variant sensitivityreduced normalized sgd achieves better convergence guarantees on smooth nonconvex problems than previous algorithms the improvement comes from using a treeaggregation technique that reduces the amount of noise needed to make partial sums differentially private strengths 1 the paper uses the treeaggregation technique to propose a new method that achieves a better error guarantee on smooth nonconvex objectives than prior works although the dependence on the dimension d and epsilon is worse 2 the paper presents a new analysis of differentially private normalized sgd which achieves the same error rates as previous papers without needing any privacy amplification techniques or full batch gradient descent which were needed in previous papers weaknesses 1 although the error dependence of algorithm 2 on n is better the dependence on d and epsilon is worse for the new algorithm 2 the paper admits that the runtime is worse for the new algorithm it is proportional to n^{11/3} d^{2/3} which even in the regime when n is proportional to d is at least n^3 also the number of steps needed by algorithm 1 is proportional to n^2 both of these numbers are large but the n^3 runtime of the new algorithm is particularly problematic 3 the paper admits that it is not obvious that privacy amplification by shuffling can be directly applied to their treeaggregationbased method 4 the paper does not have any experiments showing the convergence properties of the new algorithm i understand that this is a theoretical paper but some smallscale experiments would be nice to better gauge the algorithm docsepthe paper proposes a few new algorithms for private erm the key idea of the new algorithms is using tree aggregation in the momentum normalized sgd from cutkosky and mehta 2020 the proposed algorithms can achieve better convergence rates than the existing method strengths 1 the idea of using tree aggregation for momentum seems to be quite novel in the field 2 the proposed algorithms can achieve better convergence rates weaknesses 1 the proposed algorithm seems to require larger memory to store some tree aggregation nodes limitations are welldiscussed docsepthis paper proposes the use of momentum techniques in conjunction with the tree aggregation framework in differential privacy to obtain better rates for differentially private stationary points of the empirical loss this problem has received recent attention in the literature and all existing rates are slow o(d^{1/4}/sqrt(n epsilon)) and require high running time this paper focuses on both aspects first providing an algorithm with the above rate but with better running time which is primarily based on noisy sgd with momentum and crucially makes use of tree aggregation and second providing a sharper rate of o(d^{1/3}/(n epsilon)^{2/3}) for this problem the gradient complexity of the method is o(n^{7/3} epsilon^{4/3} d^{2/3}) but the running time is actually at least cubic in the sample size this is achieved by the use of a recursive gradient estimator whose terms have low sensitivity due to the small displacement of iterates together with smoothness of the loss strengths 1 this is a definite improvement upon the state of the art 2 the problem is of key relevance in differential privacy and machine learning 3 at the technical level the algorithms do not require privacy amplification techniques although they use multiple random
shuffles of the data which is somewhat baffling regarding this last point i wonder whether the shuffling is necessary at all i suspect this can be circumvented using analyses of incremental sgd such as nedic bertsekas incremental subgradient methods for nondifferentiable optimization siopt 2001 weaknesses 1 the improvements seem to be marginal compared to the existing literature 2 regarding algorithmic performance it is arguably more interesting to understand the outofsample performance of the algorithm 3 the technique of recursive gradient estimation used particularly in the second method is already well known see eg the bgm21 paper cited and also bassily guzman nandi 2021 noneuclidean differentially private stochastic optimization the paper does not reflect this fact in its presentation the main difference between the current submission and past work is that in the current paper this is analyzed for multiple passes on the data which adds another layer of complexity to the analysis 4 the paper contains typos and some small inconsistencies which make reading difficult yes limitations on running time and possible additional improvements are discussed it is not discussed though how far from tight the obtained upper bound is which i think is an important consideration
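to make the treeaggregation idea in these reviews concrete here is a toy numpy sketch of noisy dyadic prefix sums each gradient contributes to at most log2(t) + 1 tree nodes so far less noise is needed per released partial sum than with naive perstep noising the step count noise scale and scalar gradient stream are illustrative assumptions not calibrated to a specific epsilon delta budget

```python
import numpy as np

rng = np.random.default_rng(0)

T = 8                    # number of steps (power of two for simplicity)
sigma = 1.0              # per-node gaussian noise scale (set by the dp budget)
grads = rng.standard_normal(T)     # toy scalar "gradient" stream

# binary-tree aggregation: each node stores the noisy sum of its interval,
# so any prefix sum is a sum of at most log2(T) noisy nodes instead of T
tree = {}
def build(lo, hi):
    tree[(lo, hi)] = grads[lo:hi].sum() + sigma * rng.standard_normal()
    if hi - lo > 1:
        mid = (lo + hi) // 2
        build(lo, mid)
        build(mid, hi)
build(0, T)

def private_prefix(t):
    # decompose [0, t) into O(log T) dyadic intervals and add their noisy sums
    total, lo = 0.0, 0
    for size in [2**j for j in range(T.bit_length(), -1, -1)]:
        if lo + size <= t and lo % size == 0:
            total += tree[(lo, lo + size)]
            lo += size
    return total

# each grads[i] appears in log2(T) + 1 nodes, so the privacy analysis only
# has to compose over that many releases rather than over all T steps
print(private_prefix(5), grads[:5].sum())   # noisy vs exact prefix sum
```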
### Summary:
the authors obtain improved guarantees for differentially private smooth nonconvex empirical risk minimization following the discussion stage there appears to be a consensus in favor of accepting this paper due largely to the clear theoretical improvement and the fundamental nature of the problem however reviewer nfxu who remains unenthusiastic about the paper is still concerned about a lack of clarity regarding a number of algorithm design choices the necessity of making the entire optimization trajectory dp whether or not the magnitude of noise changes between steps and whether shuffling the data is truly necessary while these points do not cause direct concern over the correctness of the results they are nevertheless important to clarify and i urge the authors to do so thoroughly in the cameraready revision