Below is a review of a research paper from a conference or journal. Please write a summary of the review.
### Review:
summary the paper proposes a parametric form for a matrix representation of a graph to be used as a building block within graph neural networks gnns in essence people use different normalized versions of the adjacency and laplacian matrices within gnns the authors in turn propose a generic parametrized version that encompasses those normalizations and that can be learned from data reasons for score my overall evaluation is slightly positive although the methodological contribution is minor my positive recommendation is based on the wide applicability of the simple idea proposed the parametric graph shift operator can be readily used in existing applications of gnns possibly boosting performance pros 1 simple contribution explained in a straightforward manner this can help adoption by practitioners 2 numerical experiments rightly illustrate the value of the contribution cons 1 there is no clear justification for the specific parametric form in 1 other than it can recover existing choices of the gso this of course can be obtained with other conceivable parametrizations the authors try to justify this by saying that 1 is the most general affine form of the adjacency matrix but this does not seem to be a rigorous statement most general in what sense what is the space of all affine forms of matrices 2 the value of the theoretical analysis is unclear the paragraph after theorem 1 is supposed to highlight the value of having real eigenvalues the authors mention without having to worry about complex values worrying in what sense why would it be difficult to implement the examples at the end of page 4 if the eigenvalues would be complex the authors then mention the difficulty of doing spectral clustering with complex eigenvalues but the connection with the ongoing discussion is quite loose 3 similarly the value of theorem 2 is unclear to me why are these bounds useful the discussion after theorem 2 again tries to enlighten in that direction but it falls short some typos 1 in remark 1 absence of an edge should be existence of an edge edit after author response we thank the authors for their response to my concerns and those of other reviewers i have updated my score from 6 to 7 based on their changesdocsepthe authors consider the problem of learning a parametrized graph shift operator or message passing operator in the context of graph neural networks they consider a family of gso that they name pgso based on seven scalar parameters and show that it includes most commonly used operators such as the adjacency matrix or the laplacian the spectral properties of the pgso are analyzed finally some empirical results are provided demonstrating the pgso as a dropin replacement for standard gso in gnn architectures graph neural networks have gathered a large amount of interest in the past years and found widespread use in adapted problems however recent analyses have shown that typical architectures may have limited expressiveness eg bounded by the wl test the paper is overall wellwritten and proposes a method which could be of interest to practitioners on the other hand the paper has some minor weaknesses in that the proposed parametrization is redundant with some existing techniques eg m3 corresponds to residual connections which are widely used and given the simplicity of the proposed method a more comprehensive empirical evaluation could be beneficial additionally the proposed operator is still restricted to 1neighborhoods and hence cannot by itself solve the expressiveness problems encountered by graph 
neural networks this lack of expressiveness in conjunction with the mixed empirical results which show overall good performance improvement but with overlapping error bars potentially lower the impact of this contribution given the potential interest this method could hold for practitioners i believe that the paper could be substantially improved by making the empirical evaluation more comprehensive in particular a more comprehensive analysis of the training dynamics as in figure 1 as well as the effect of initialization would give a clearer picture of the effect of the pgso in practice specifically one might wonder about the importance of initialization for the m2 parameter given that it controls the influence of the edge information and for example whether it is possible for it to change sign during training perhaps an informative experiment could be to initialize the pgso at different commonly used gso parameters eg adjacency laplacian normalized laplacian and observe how such initialization affects performance edit after author response we thank the authors for their response and providing some more empirical information overall i feel that this paper presents a neat idea that could be of interest to some people in the community and i have modified my score from 6 to 7 it would be great for the authors to discuss the importance of initialization as in particular it seems to me that the sign of m2 can never change from its initial value indicating perhaps that practitioners should try initialization at either and select the better performing model docsepthis paper describes a parameterized family of graph shift operators defined for any graph on n vertices as an n x n matrix where the ijth entry is 0 whenever edge ij does not appear in the graph the paper studies some spectral properties of the parameterized family and experiments with using them as components of graph neural networks this paper appears to me to be a hammer in search of a nail it was not clear what problem is meant to be solved here nor does it seem that the topic is fundamental scientific investigation the paper is generally poorly written and was difficult to follow i suspect much of that relates to the lack of clarity in the problem however the writing is also unfocused at the paragraph level in short this paper needs substantial work before its ready for publication docsepthe paper proposes a parameterized graph shift operator pgso as a replacement of fixed handpicked gsos for application to gnn architectures and the experiments illustrate that pgsos automatically adapt to the regularized gsos for different settings the contribution is of interest to the ml community as existing studies show significant improvements in performance of gnns by using certain gsos in specific contexts and therefore a data driven characterization of gso helps introduce adaptivity to the network representation in gnns pros the paper is well written and easy to follow the theoretical and empirical spectral analysis of pgso reveals its applicability to many existing gnn architectures the experiments are sufficient to illustrate the utility of using pgso over a constant gso for different settings cons 1 remark 1 seems to contradict definition 1 do the authors mean presence and not absence of an edge in remark 1 2 an explicit discussion on how pgso parameters are learned or updated in every epoch is missing 3 the authors compare the final accuracy results of gnns with pgsos and those with constants gsos the evolution of accuracy with the 
number of epochs for some of these results will help evaluate the cost of learning the parameters of pgsos
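The reviews above describe the parametrised graph shift operator (PGSO) only abstractly: a small family of scalar parameters (the reviewers mention m2 and m3 by name) that recovers the adjacency matrix, the Laplacian, and their normalised variants as special cases and can be learned from data. The paper's exact Equation (1) is not reproduced in the reviews, so the sketch below is an illustrative parametrisation consistent with that description; the specific form and the names m1, m2, m3, e1, e2, e3, a are assumptions, not the authors' definition.

```python
import torch

def pgso(A, m1, m2, m3, e1, e2, e3, a):
    """Illustrative parametrised graph shift operator (assumed form):
        gamma(A) = m1 * D_a^e1  +  m2 * D_a^e2 (A + a*I) D_a^e3  +  m3 * I,
    with D_a = D + a*I the degree matrix regularised by a self-loop weight a."""
    n = A.shape[0]
    I = torch.eye(n)
    deg = (A + a * I).sum(dim=1)                 # regularised node degrees
    d_pow = lambda e: torch.diag(deg.pow(e))     # D_a^e as a diagonal matrix
    return m1 * d_pow(e1) + m2 * d_pow(e2) @ (A + a * I) @ d_pow(e3) + m3 * I

# A few settings that recover familiar operators on a toy 3-node graph
A = torch.tensor([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
adjacency      = pgso(A, m1=0, m2=1,  m3=0, e1=0, e2=0,    e3=0,    a=0)  # A
laplacian      = pgso(A, m1=1, m2=-1, m3=0, e1=1, e2=0,    e3=0,    a=0)  # D - A
sym_normalized = pgso(A, m1=0, m2=1,  m3=0, e1=0, e2=-0.5, e3=-0.5, a=0)  # D^-1/2 A D^-1/2
```

Because the parameters are ordinary scalars, they could be registered as learnable tensors and trained jointly with the GNN weights, which is what gives the operator the drop-in character the reviewers highlight.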
### Summary:
summary the authors observe that a range of laplaciantype operators used in graph neural networks can be embedded in a parametric family so that the precise form of the laplacian used can be determined by the learning process empirical evaluation and some limited theoretical analysis are provided discussion the authors have provided detailed replies and also additional experiments that has addressed major concerns and most reviewers now agree the paper is good one reviewer is more skeptical mostly regarding presentation i agree with some of the points raised in this regard but see them as less of an issue i would consider the presentation improvable but acceptable one weakness i should mention is that the two theorems provided are frankly trivial i appreciate this is only a conference submission but i would nonetheless call the fact that symmetric matrices have real eigenvalues theorem 1 an observation not a result that similarly holds for any direct consequence of gershgorins theorem theorem 2 the entire page used to state this could perhaps be put to better use for additional empirical results recommendation the program committee the ac and program chairs were hesitating about this paper but decided to recommend acceptance the idea is neat and simple presentation and empirical evaluation are fine if improvable we strongly recommend the authors to invest time what is phrased as theory is trivial but also admittedly not the main focus of the paper
Below is a review of a research paper from a conference or journal. Please write a summary of the review.
### Review:
this paper is a survey on 15 datasets related to us criminal justice it describes the us justice pipeline in a flowchart and sorts the datasets with according to it the authors propose a datasheet for each dataset which summarizes relevant information on the dataset data collection motivation uses distribution and maintainance they highlight that there are parts of the justice pipeline to which no dataset correspond and discuss domainspecific difficulties in the data collection process 1 the paper formalizes the us justice pipeline as a flowchart and uses it to link the datasets to relevant parts of the pipeline which makes it easy for researchers to choose a dataset 2 a datasheet is associated to each dataset providing a very practical and useful summary of each dataset 3 a gap in coverage of the justice pipeline by existing datasets is identified which could guide the collection of future datasets although the authors note it is difficult 4 political and social implications of the availability of such datasets are quickly discussed 1 although it is central to this kind of data the ethical discussions are a bit limited section 6 only contains one sentence about it and the datasheets do not seem to contain any informations regarding these questions 2 the authors provide download links for the datasets but i would have liked to have a unified method that download the datasets automatically making the use of multiple criminal justice dataset practical for researchers 3 the criminal justice pipeline described in figure 1 does not seem to be supported by any reference i believe there are two possibilities either it is a wellknown pipeline then i would appreciate to have a link to relevant previous works or it is a contribution of the paper in which case reference to eg the law could be nice but i admit i am no expert of the us criminal justice it also looks impossible to escape the pipeline after being charged i would imagine that after probation one can be free but it does not appear on figure 1 4 the authors mention many datasets but do not include data sheets for each of them what motivates the choice between the datasets chosen for datasheets and the others 5 some datasets contain multiple data records about one given individual authors mention that partial criminal path can be reconstructed from these datasets it is not clear how many of these partial paths can be reconstructed for all records 10 of records 1 6 the authors mention that different data collection processes lead to different kinds of unfairness in the datasets they claim that bias may be present at each step of the pipeline but fail to propose any even very basic statistics to support this claim 7 i find figure 1 to be a little to strict are the authors sure that their association between one dataset and one part of the pipeline are correct i am worried that someone could find other uses for these datasets either by applying new methods on it or by combining multiple datasets that are gathered in the survey docsepthis work surveys datasets in the criminal justice field that are often used for machine learning or fairness research it supplements this survey with a catalog of datasheets that are newly generated by the authors except for 1 where the dataset creators already generated a datasheet along with the survey there is a synthesis of knowledge where authors discuss gaps in the datasets and potential downstream analyses utility and quality of the submission impact originality novelty relevance to the neurips community 
will all be considered this piece reviews an important area of criminal justice datasets these datasets are used in a large body of fairness literature the work provides a survey of relevant criminal justice datasets and supplements these datasets with data sheets this is important and relevant to the neurips community and provides a nice survey while adding additional value in the form of data sheets there is a nice synthesis of information from the authors survey in the discussion section this also adds value to the survey completeness of the relevant documentation for datasets sufficient detail must be provided on how the data was collected and organized what kind of information it contains how it should be used ethically and responsibly as well as how it will be made available and maintained for benchmarks best practices on reproducibility should be followed the main contribution of this work is documentation so it certainly has that accessibility and accountability for datasets there should be a convincing hosting licensing and maintenance plan there is a github repo for maintaining and updating datasheets utility and quality of the submission impact originality novelty relevance to the neurips community will all be considered this isnt the most novel or original work since it is completing datasheets for datasets that have examined by a number of works in the past this piece cites many such related works however no other work is as comprehensive in its survey and no work systematically creates datasheets so concerns of novelty are somewhat minor the discussion on biases could be strengthened at the point where it is discussed it feels like an afterthought but there are significant problems here beyond just implicit biases for example earlier there is discussion on how for example hispanics are often reported as white this would lead to significant problems with biases and errors in analyses in downstream models especially for things like fairness by accounting for demographics this discussion could be expanded to touch on more of the findings from the authors exploration that might affect models or analyses similarly much of the discussion makes it seem like omitting information is a strictly bad thing for example the authors seem to lament that victim information is not included to conduct analyses with but these datasets require significant care and taking into account privacy considerations more engagement with these privacy considerations would strengthen the paper completeness of the relevant documentation for datasets sufficient detail must be provided on how the data was collected and organized what kind of information it contains how it should be used ethically and responsibly as well as how it will be made available and maintained for benchmarks best practices on reproducibility should be followed some of the datasheets are not particularly detailed if the contribution of the work is the datasheets i would have expected a bit more for example for the cpii datasheet the question what data does each instance consist of has the response as the data is compiled from 27 different sources each source has a different set of variables all sources report on the date time and location of the crime as recorded and the type of the offense i wouldve expected a table or breakdown here to make this information up front similarly on this same page there are some broken references i would suggest a pass over the supplementary datasheets to add more detail to spots like this and fix broken 
refs accessibility and accountability for datasets there should be a convincing hosting licensing and maintenance plan if the goal is to raise awareness of these datasheets and the underlying gaps for these datasets i would suggest creating a project webpage to host them in a way thats more prominent than as pdfs in a github repo this will raise the impact make it more accessible and might get others to update the datasheets as new information is identified such as for the questions regarding whether the data is being used already docsepthe authors conduct a survey of criminal justice datasets their main contributions are a discussion of these datasets in context of the full criminal justice pipeline and a public repository containing datasheets for 15 selected datasets this is a wellwritten paper on an important topic papers like this will become increasingly important in the realm of machine learning datasheets for datasets and specialized repositories are essential for responsible data use the survey is thorough and thoughtfully done i think the paper could benefit from an expanded discussion of related work and misuse of these datasets docsepin this paper the authors surveyed datasets from the national archive of criminal justice data nacjd and compiled a list of datasets arising from a variety of points in the criminal justice pipeline for 15 of these datasets the authors created publicly available datasheets using a template that they updated from the original in order to better suit criminal justice datasets the authors briefly describe each of the 15 datasets in the paper and create two index tables that summarize for each dataset 1 the type of criminal justice information and demographics covered and 2 the size composition maintenance and license information finally the authors discuss challenges in working with criminal justice datasets and illustrate these points using examples from the 15 surveyed datasets 1 it is in the best interest of the research community among others to broaden which datasets are used when studying criminal justice rather than focusing evaluations on a few datasets eg compas with this paper the authors have initiated this process by shining a light on 15 potential new datasets which are already publicly available 2 the 15 datasets are thoughtfully organized and presented in particular i found mapping the datasets onto the pipeline figure 1 to be a useful tool for getting quickly acquainted with the datasets and it nicely complements tables 1 and 2 3 in creating the index tables and a new datasheet template tailored to criminal justice datasets the authors have initiated the important discussion about what metadata should accompany criminal justice datasets and how these metadata questions might be standardized for describing new criminal justice datasets going forward this is an important discussion for datasets of any field but is particularly challenging and important for criminal justice datasets in which context is often not properly considered 4 the paper is wellwritten and should be easy to understand for a lay machine learning audience major points 1 since the authors are bringing 15 criminal justice datasets to the attention of the ml community it seems important to discuss in the paper why introducing these criminal justice datasets is beneficial to the ml community and to society to this point i feel it is important to answer the following questions how does providing the datasheets improve how these datasets can be used in ml beyond the 
official documentation does introducing these datasets help alleviate some of the existing problems with the use of criminal justice datasets in ml or will this just extend current problems to new datasets 2 in the introduction the authors state we give broad context to the datasets draw out potential uses and discuss gaps and limitations while the paper does address the first and third points it does not in my opinion adequately address potential uses for these datasets questions iiia d in the datasheets provide some information on potential uses but for a general machine learning audience it seems important to broadly discuss in the main paper how these 15 datasets should be used by the ml research community should they primarily be used to investigate the criminal justice pipeline as most of the uses in the datasheets seem to indicate should these datasets be used as benchmark datasets for testing out new methods which are not necessarily tailored to criminal justice applications 3 i found myself quite curious about the updates made to the datasheet template as described in section 2 this seems like an important contribution of this paper but it is not highlighted as such what gaps in the original template did these updates fill what unique challenges do criminal justice datasets pose to metadata documentation 4 it remains a bit unclear to me how the authors envision researchers using section 4 which is nearly three pages dedicated to short descriptions of the 15 datasets is the intention that these descriptions provide a quick introduction to a dataset and if interested one should then go to the datasheet for more information the combination of figure 1 and tables 1 and 2 seems quite useful for this purpose in and of itself the benefits of including these short descriptions in the main paper vs in the supplementary material is not clear to me 5 while section 3 provides necessary context in particular for figure 1 i am not wellversed in this area and find it troubling that this section does not have any references minor points 1 in the introduction one of the papers highlighted contributions is stated as reporting on 30 datasets however its not clear to me that this is really the case throughout section 4 other datasets are mentioned in addition to the main 15 but these are not included in the index tables nor in figure 1 and are not provided datasheets while i do see the usefulness in directing readers to other potentially relevant datasets in section 4 i found myself a bit confused in particular moving from sections 1 to 2 as to where the 30 vs 15 datasets were coming into play now after reading the entire paper i dont feel that this is one of its contributions ive noted this as a minor point because there is actually fairly little mention of the 30 datasets and removing mentions of this would seem to affect very little of the paper 2 related to the previous point it is still a little unclear to me how the authors arrived at the 15 datasets for which they created datasheets since there seems to have been 30 datasets which met the mentioned inclusion criteria why these 15 datasets are these 15 datasets useful for an ml audience in particular 3 figure 1 just want to verify that the colors in this figure correspond to the stages as mentioned in table 1 it may be useful to indicate this somewhere on the figure 4 section 7 typo i believe trough should be through 5 table 2 inconsistent capitalization in the geographic resolution and maintained columns docsepthe paper presents a collection 
of data sources on the us criminal justice system with the goal to provide researchers with an overview of available data sources datasets stem from a variety of fields from crime reports to jailprison sentencing the authors furthermore provide datasheets for 15 selected datasets which provides a standardizedstructured resource for accessing relevant data characteristics my major complaint would be the lack of a maintenance scheduleroute towards updating datasheets and the collection in general this does not only prevent collaboration with other researchers but also results in the paper being an immutable artefact which in my opinion is not adequate for such a collection of datasets if this complaint were addressed i would recommend acceptance of the paper the paper makes an important contribution to the field of criminal justice datasets by collecting and systematizing a variety of available datasets the survey seems to be exhaustive within its limited domain of us datasets and provides a good overview of relevant data sources the authors provide datasheets for datasets on 15 datasets which provide a comprehensive and structured resource that answers many of the relevant questions wrt those datasets the authors discuss the funnel arising from the sequential decisions made in the court system and provide an overview of procedures that typically lead to this funnel this provides a good understanding of the structure of the different data sources the paper discusses a series of relevant limitations to the collected data raising awareness for potential use cases i think the paper would make an even stronger case if the results were presented in the form of a websitegithub repository enabling easier navigation this would also allow other investigators to contribute eg via pull requests and allow for updating resources if eg errors are found in the datasets or when new datasets become available i would strongly suggest further work in that direction in their current version the datasheets are an artefact for which maintenance and updates are unclear which i would consider a major drawback of the paper improving in this direction could be done by 1 including a statement regarding maintenance and updates 2 uploading latex sources that enable updating the datasheets 3 uploading a latex template for similar datasheets 4 outlining how people can contribute new datasets to this collection how datasheets can be updated and the criteria for such changes 4 providing a good overview improving the readme in the existing github repository docsepthe authors present an analysis of criminal justice related data sets i think this is an important topic and i appreciate the degree to which they are exploring pros and cons of these data sets overall this seems to be a valuable resource i have some concerns about the completeness of the model however this is a good summary of many diverse data sets collecting information that can be used to make informed choices about how to use this data however i have some concerns my biggest concern is the presumption in their model that a crime was committed the criminal justice system is imperfect and not everyone is convicted nor is everyone who is convicted guilty do no data sets address this issue even if that is the case given that the authors wish to identify gaps in the data sets why is that not represented in figure 1 outside of the word acquittal in addition the model is incomplete for example the list of outcomes of pretrial hearings is incomplete as people may be 
released but required to wear an ankle monitor just as an example i am not an expert in this area so there may be other outcomes that are missing too that i am unaware of further the order of operations shown is not accurate to actual criminal justice experiences for example many people have to pay fines even if they are not convicted for example fees for required ankle bracelet surveillance between charging and conviction or paying back a loan for bail money my neighbor was forced to pay a fine simply for appearing in court even though she was not charged in the end another thing that seems to be missing from the model provided is the type of defender public or private for example that participants had access to or even whether when and how such a person was assigned
### Summary:
the paper introduces a set of criminal justice datasets to the machine learning community surveying 30 datasets and creating datasheets for 15 of them reviewers appreciated that the paper raises awareness of these datasets in the ml community and the documentation work that the authors have contributed there were two main concerns inadequate discussion of ethics and lack of detail on how the ml community could work with these datasets the authors have addressed the first concern in a revision and partially addressed the second concern
Below is a review of a research paper from a conference or journal. Please write a summary of the review.
### Review:
knowledge distillation kd is to improve a lowcapacity student networks performance with a highcapacity teacher network current works focus on how to define kd loss this work proposes to improve kd from data augmentation da this work proposes an entropybased datamixing da to improve two kd methods kd and crd strengths 1 the method is simple and clearly presented 2 the experiments are done on different data augmentation techniques 3 results on cifar100 and tinyimagenet show increased performance weaknesses there are three contributions in the method part 1 data augmentation 2 longer training and 3 cutmixpick 1 and 2 are already investigated in works 12 1 cui wanyun and sen yan isotonic data augmentation for knowledge distillation ijcai2021 2 beyer lucas et al knowledge distillation a good teacher is patient and consistent cvpr2022 the improvement from 3 is very minimal after rebuttal the authors have addressed my concerns i increased my score to boardline accept after considering other reviewrs comments the authors have addressed the the limitations and potential negative societal impact of their work docsepthe paper establishes a connection between data augmentation da and generalisation of studentteacher training namely showing theoretically that given two subsequences of the original dataset d where the subsequences differ by some data augmentation scheme the subsequence that performs better is the one that induces less variation in the predicted probabilities of the teacher network edited review in light of the authors active discussion and contributions during this phase as well as their addressal of my own concerns i will tentatively raise my score from 4 to 7 obviously this is conditional on some changes to the manuscript namely making the math more digestible we already discussed this in great detail preferring plots to large tables where applicable or having plots supplement them i leave this to the authors discretion since space constraints will mean decisions will have to be made as to what should be a table vs what should be a plot and what should be relegated to the appendix further work on plotting equation 8 vs its approximation vs both bad and good students the authors have acknowledged this and even provided preliminary results thanks and good work strengths the efficacy of any da scheme for student training could in principle be evaluated before the training of any student model since computing the variance term vdelta only requires the pretrained teacher model experiments support the proposed theory and are numerous run over many different architectural variants as well as data augmentation schemes proposal of an entropybased scheme to improve results weaknesses some of the notation in section 31 and 32 is incredibly confusing and looks to be rushed firstly i would recommend simply using log instead of mathcall to denote the logarithm alternatively if you want to be more abstract you can simply define the crossentropy loss as mathcallcex y f yt fx and it would make equation 6 more readable in fact you have already defined it as such in equation 1 in section 32 i have several concerns in equation 6 in line 1 you have an expectation s sim dn yet in line 2 it confusingly turns into an expectation x sim d yet you still have the inner summation why isnt it just this instead ex sim d px log fx textconst ie the expectation implies the inner summation multiplication over 1n so having both seems redundant and incorrect furthermore in the inner multiplication youre missing the 
transpose so it should be pxnt log fxn since that is how it is defined in equation 4 in equation 7 i am also not sure why the last line expands out into a variance covariance term covariance to me implies that youre computing the variance between two sets of random variables dont we only have one random variable that random variable being delta the empirical risk over whatever random draw s sim d maybe i am missing something crucial but it seems like one could just remove the last line of equation 7 leaving us with vx pxt log fx then say that the variance is going to be proportional to the pt and that f is held constant actually it seems like from equation 9 that the covariance is over the ps for the different classes what is confusing is that for a given x you are indexing into its predicted probability for class i as as ptxi and xi is apparently the ith example as you have implied in equations 3 and 4 if pt outputs a probability distribution over x then you should be using ptxi instead each element in s is drawn the same underlying distribution d i dont follow this since it says earlier that s1 and s2 are elements from d but not necessarily iid if so how can ex sim s turn into ex sim d table 1 i appreciate that the results are averages over three runs but please indicate the standard deviation since space seems a little tight in this table maybe just indicate the stdevs below each number in parentheses also such information can be nicely condensed into a scatterplot with one axis denoting test accuracy and the other denoting mean stdev when necessary prefer plots over big tables and leave big tables to the appendix na docsepthe paper presents a theoretical development of a straightforward to use measure of effectiveness of data augmentation techniques for distillation it proposes that this measure need only be based on an analysis of the teacher without consideration of the student the paper also presents an enhanced version of the cutmix data augmentation technique which performs a distillation of the augmented dataset to maximize the information content of the data the method yields an informative measure that empirically can be reliably used to judge the efficacy of different data augmentation schemes at least for the methods considered the justification and derivation of the proposed measure is reasonable but it possibly depends on an overly strong assumption on the student to be welltrained this may not be the case in some applications where the student has a much lower capacity than the teacher however the proposed measure may still be effective but would need more detailed theoretical justification to show this the cutmixpick data augmentation technique demonstrates improved performance over the standard cutmix method there is no discussion of limitations and potential negative societal impacts however distillation techniques to the extent they can be used to train smaller networks may positively affect societal impacts related to high power usage during training and inference time docsepthe major contributions of this paper are 2fold 1 the authors suggest that given a fixed teacher model a good data augmentation da scheme is characterized by a lower variance of the teachers mean output probability this claim is supported by statistical analysis and empirical results image classification 2 an entropybased sample picking scheme is introduced to synthetically reduce the variance of the teachers mean probability this scheme cutmixpick further improves the results strengths 1 setups 
with lower variance although it is not clear how u is calculated see weaknesses below obtains acceptable improvements in cifar100 and tinyimagenet experiments weaknesses though this work is executed well and the image classification results show improvements the novelty and significance of this work are limited see below 1 the clarity of section 32 can be improved though proposition 1 seems interesting i have questions regarding the std of the teachers mean output probability it is unclear as to how u in equation 9 is calculated particularly what is s and how is it determined why is k640 would it be reasonable to calculate m using the variance of the output probabilities of the teacher for the correct class over all samples 2 experiment coverage results are limited particularly imagenet results are not discussed and analyzed not that i have anything against cifar100 tinyimagenet analysis but i tend to think that imagenet results are critical to add merit to the findings 1000 classes higher resolution224 x 224 most kd works report results on imagenet1k 1 2 3 4 5 if compute is a problem authors can consider using a smaller dataset for experiments imagenet100 a subset of imagenet popular in the selfsupervised learning community i believe the authors can also use publicly available pretrained imagenet models if required httpspytorchorgvisionstablemodelshtml what is the reason for not including flip flipcrop cutout autoaugment mixup results in tables 3 4 3 for std of mean prob in tables 1 2 is it reasonable to use the training set as the training set is used for distillation not the test set 4 for cutmixpick experiments what is the percentage of the training set used for distillation after picking filtering overall i enjoyed reading this paper in my opinion the weaknesses of this paper outweigh the strengths but im willing to change my opinion based on the rebuttal 1 mller rafael simon kornblith and geoffrey e hinton when does label smoothing help advances in neural information processing systems 32 2019 2 shen z liu z xu d chen z cheng k t savvides m 2021 is label smoothing truly incompatible with knowledge distillation an empirical study in iclr 3 chandrasegaran k tran n t zhao y cheung n m 2022 revisiting label smoothing and knowledge distillation compatibility what was missing icml 4 heo byeongho and kim jeesoo and yun sangdoo and park hyojin and kwak nojun and choi jin young 2019 a comprehensive overhaul of feature distillation iccv 5 tang jiaxi et al understanding and improving knowledge distillation arxiv preprint arxiv200203532 2020 the authors have discussed limitations potential societal impacts in supplementary section 6
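Several reviewers note that the paper's data-augmentation criterion needs only the pretrained teacher: a candidate augmentation is judged by how much it spreads the teacher's predicted probability for the true class, with lower spread taken to indicate a better scheme for distillation. The exact estimator (the quantity u in the paper's Equation (9), how the k = 640 draws are formed, and whether train or test images are used) is precisely what the reviewers found unclear, so the snippet below is only a rough proxy under stated assumptions, with illustrative names, not the authors' definition.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def teacher_prob_spread(teacher, images, labels, augment, n_draws=640):
    """Rough proxy for the teacher-only DA criterion described in the reviews:
    sample augmented images, record the teacher's probability for the true class,
    and return the standard deviation of those probabilities (lower = better DA)."""
    teacher.eval()
    probs = []
    for _ in range(n_draws):
        i = torch.randint(0, images.size(0), (1,)).item()
        x = augment(images[i:i + 1])                 # candidate augmentation scheme
        p = F.softmax(teacher(x), dim=1)
        probs.append(p[0, labels[i]].item())
    return torch.tensor(probs).std().item()

# Usage (hypothetical names): rank two schemes before training any student
# std_cutmix = teacher_prob_spread(teacher, train_x, train_y, cutmix_fn)
# std_flip   = teacher_prob_spread(teacher, train_x, train_y, flip_fn)
```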
### Summary:
after a lively and interactive author discussion period all reviewers ended up recommending to accept this paper the work examines the ways in which different data augmentation schemes can increase knowledge distillation performance providing some theoretical analysis with actionable insights and experiments to back it up the work focuses on the generalization gap of the student under different sampling schemes and asserts that their study leads to the conclusion that a good data augmentation scheme should reduce the variance of the empirical distilled risk between the teacher and student reviewers were generally positive about the clarity of the manuscript after some changes during the author discussion the ac recommends acceptance
Below is a review of a research paper from a conference or journal. Please write a summary of the review.
### Review:
this paper proposes a new sequential recommendation model with behavior pathways effectively capturing specific evolving item patterns for each user the pathway attention using learned binary routes can effectively remove unnecessary items for a given user sequence experimental results show that the proposed model significantly outperforms existing models on seven benchmark datasets and achieves stateoftheart performance overall the idea of using behavior pathways is interesting to me but it has several weak points especially in evaluating the proposed model strengths originality it is interesting to introduce behavior pathways in long user sequences besides this paper utilizes a router to choose item patterns selectively originality because the idea of using the router is a modelagnostic property it can be effectively applied to various sequential recommender models clarity it is wellwritten and easy to understand weaknesses quality although the evaluation is extensive the proposed model has not been compared with recent sequential recommendation models please refer to the following references 1 kun zhou hui wang wayne xin zhao yutao zhu sirui wang fuzheng zhang zhongyuan wang jirong wen s3rec selfsupervised learning for sequential recommendation with mutual information maximization cikm 2020 2 qiaoyu tan jianwei zhang jiangchao yao ninghao liu jingren zhou hongxia yang xia hu sparseinterest network for sequential recommendation wsdm 2021 3 ziwei fan zhiwei liu jiawei zhang yun xiong lei zheng philip s yu continuoustime sequential recommendation with temporal graph collaborative transformer cikm 2021 quality in section 43 the visualization result shows that the proposed model effectively captures the pathways in user sequences however it is wondering if this result is generalized for other cases it is necessary to show a quantitative result on whether the proposed model effectively captures a useful pathway one possible evaluation is that the proposed model also shows a comparable result using a subset of sequences derived from a pathway quality in table 3 retrl1 is similar to sasrecl1 however retrl1 shows a better performance than sasrecl1 it is wondering if the proposed model is fairly compared with the existing model clarify there are some minor typos 1p 37line uses users 2p 41line second first 2p 44line first second this paper does not address the negative societal impact however this paper seems not to have any negative impact docsepin this paper the authors propose the recommender transformer retr with a novel pathway attention mechanism retr can dynamically plan the behavior pathway specified for each user and sparingly activate the network through this behavior pathway to effectively capture evolving patterns useful for recommendation strength 1 the paper is well written and easy to follow basically the authors try to use the behavior pathway in the transformer 2 the experimental results are good compared with the baselines weakness 1 the novelty of this work is not very high the mechanism of router is wildey used in the moestyle transformer it seems that this work applied it to the sequential recommendation the gumbelsoftmax is also widely used to optimize the discrete binary variable 2 it is unclear why need use this swith rounter in sequential recomemndation usually the length of users behaviors is very short like less than 25 for most of transaction do we really need this router in the transformer actually recent work verify that a simle mlp can outperform the transformer in 
the sequential recommendation zhou kun et al filterenhanced mlp is all you need for sequential recommendation proceedings of the acm web conference 2022 2022 the motivation of this work is thus not very strong 3 in the experiment we pair the groundtruth item with 100 randomly sampled negative items that the user has not interacted with does this raise a smapling bias yes docsepthis paper proposes a recommender transformer with a pathway attention mechanism it is characterized by its ability to capture three types of user pathways and predict user action sequences with high accuracy the paper demonstrates the usefulness of the proposed method in comparison with several stateoftheart methods using several types of real data strengths starting with the actual example in figure 1 the motivation for proposing the method is well explained making it easy to understand the content of the proposed technique the authors have conducted prediction experiments using seven different behavioral log datasets from various sites they compared the accuracy of the proposed method with seven existing methods and confirmed that the proposed method outperforms them experiments are conducted using real data not artificial data weaknesses the meaning and boundaries of the three types of pathways are vague the definition of each should be clearly stated also are these three types sufficient for example what is the difference between the correlated behavior pathway and the drifted behavior pathway they seem to have similar properties in the local and shortterm the types and definitions of pathways should be better justified such as providing references that support the authors definitions although the experimental results quantitatively demonstrate the effectiveness of the proposed method the architecture in figure 2 is straightforward and somewhat lacking in technical novelty there is no mention in the paper of negative impacts on society also i cant think of any docsepthe authors propose recommender transformer retr with a pathway attention mechanism which can generate the behavior pathway hierarchically and capture the evolving patterns dynamically through the pathway the key design is a learned binary route to prevent the behavior pathway from being overwhelmed by trivial behaviors the authors also show retr has high accuracy and efficiency compared with other selfattention or transformer based sequential recommendation methods through experiments pros 1 the paper is generally easy to follow 2 the idea of using pathway in recommendation algorithms seems to be new 3 the authors conducted extensive experiments on seven datasets to prove retr can make accurate recommendations cons 1 the essential difference between retr and other sequential recommendation methods with attention mechanism is not clear it seems to me that retr not only utilizes the onpathway tokens but also leverages the offpathway tokens as they also convey contextual information and the difference between the two kinds of tokens is their weight however in sasrec and all the other selfattentionbased methods different tokens already have varying attention weights so that more important historical items may have higher attention weights and less important historical items may have lower attention weights i dont understand why is it necessary to design a pathway 2 the targeted problem is not new and several recent works have been proposed to address the same issue of selfattention apart from reference 6 in the paper there are several others trying to 
improve selfattentionbased recommendation methods for example the lightsans work published in sigir 21 it would be useful to also conduct experimental comparisons with these more recent baselines in addition to 6 besides there are many sparse attention works in the literature and it is interesting to replace the proposed pathwaybased method with other sparse attention methods to see if pathwaybased method is superior 3 besides selfattentionbased methods other types of sequential recommendation methods eg temporal graphbased sequential recommendation methods have also achieved stateoftheart results considering that attention weights can be regarded as edge weight in a graph it might be useful to compare with some recent temporal graph based sequential recommendation methods eg jodie in kdd 19 and tgn in icml 20 especially tgn also adopted attention in its model 4 there is no evidenceexperiment to show that retr will not be overwhelmed by trivial behaviors as this is the main claim of the paper it should be necessary to have more justifications the case study in section 43 seems to be a cherrypick result 5 all the datasets that used in this paper are relatively small larger datasets such as msd and netflix should be more desirable minor issues 1 in the second line in line 41 and in the first line in line 44 should be exchanged 2 in column actions of table 1 the commas are not consistent 3 the authors should use grurec or gru4rec consistently in the paper to avoid misunderstandings 4 the ablation study of multihead attention is missing the authors have adequately addressed the limitations and potential negative societal impact of their work
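The reviews describe RETR's core component as a learned binary route per interaction, optimised with Gumbel-softmax so the hard keep/drop decision stays trainable end to end; one reviewer also points out that for a 0/1 decision the relaxation in Equation (2) is effectively a sigmoid. The sketch below shows that generic mechanism, a straight-through binary-concrete router that masks tokens before self-attention. It is an assumption-laden illustration, not the paper's exact pathway attention, and the class and parameter names are invented.

```python
import torch
import torch.nn as nn

class BinaryPathwayRouter(nn.Module):
    """Minimal sketch of a per-token binary route trained with a straight-through
    Gumbel (binary-concrete) estimator; not the exact RETR architecture."""
    def __init__(self, d_model, tau=1.0):
        super().__init__()
        self.score = nn.Linear(d_model, 1)
        self.tau = tau

    def forward(self, h):                          # h: (batch, seq_len, d_model)
        logits = self.score(h).squeeze(-1)         # (batch, seq_len)
        if self.training:
            u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log(1 - u)         # logistic (binary Gumbel) noise
            soft = torch.sigmoid((logits + noise) / self.tau)
            hard = (soft > 0.5).float()
            route = hard + soft - soft.detach()             # straight-through: hard fwd, soft bwd
        else:
            route = (torch.sigmoid(logits) > 0.5).float()
        return route                               # 1 = interaction stays on the behaviour pathway

# Usage: mask off-pathway interactions before feeding the sequence to attention
# route = router(hidden_states); hidden_states = hidden_states * route.unsqueeze(-1)
```

A router of this kind is independent of the rest of the architecture, which is why the reviewers and the meta-review treat the pathway idea as essentially model-agnostic.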
### Summary:
this paper presents recommender transformer retr with a pathway attention mechanism that can dynamically zeroingout the interactions eg the trivialnoisy ones in transformerbased sequential recommender systems extensive experimental results demonstrate the effectiveness of the proposed architecture overall this paper received mixed reviews with borderline scores the reviewers raised concerns around baselines and evaluations some of which the authors promptly addressed in the revision during the rebuttal period i also read the paper in details myself i do agree with some of the concerns from the reviewers but i dont think a method needs to beat every other published papers to be published and i think the current baselines are more than thorough enough my biggest complaint about the paper is around the writing specifically how the proposed idea is presented this paper tries to tackle an important question which is that in sequential recommendation not every interactions are useful in helping predict future interaction the selfattention mechanism in transformer kind of addresses this problem but in a more softer fashion with attention weights this paper presents a simple yet effective method to introduce a pathway mechanism that adaptively zeroingout some of the interactions via a binary pathway router in order to train such a model endtoend gumbelsoftmax sampling is utilized the most important part of the contribution to me is that this is an improvement to the transformer architecture as opposed to a new model which is what this papers writing suggests the proposed approach is effectively modelagonistic and doesnt marry to a particular loss function or finergrained architectural choices number of layers etc currently there are many baselines in the paper but each made some different modelarchitecture choices which could contribute to the difference in performance or not but we wouldnt know an ideal evaluation should have been to take all the transformerbased baselines that are currently in the paper add this pathway mechanism without changing anything else and show that the results improved over the transformer architecture in this way we know the improvements are exactly coming from introducing the pathway the authors might argue some of the current results are already supporting this argument but my point is to emphasize this point very explicitly rather than leaving it for the readers to infer from what i read in this paper i truly believe this pathway idea has its potential therefore i would especially want the authors to further refine the presentation to better convey the idea which in turn will hopefully increase the impact of this paper once it is eventually published some minor comments the way the paper is currently written seems to suggest there are only three types of pathways and the network is capable of capturing all of them i am personally not a big of fan of overinterpreting what a neural net is trying to do therefore i wouldnt overly focus on the characterization of different pathways and only show the qualitative examples at the end as a highlevel demonstration in eq 2 softmax should really be sigmoid if a 01 prediction is made there then the following line logit is probably not the right word here the qualitative examples at the end figure 3 can be more carefully examinedlabeled for example the current categorization is quite ambiguous indie refers to the type of developers while jpg refers to the genre of the game they are certainly not mutually exclusive
Below is a review of a research paper from a conference or journal. Please write a summary of the review.
### Review:
this work proposes to prune vit from all components comprehensively which considers the interactions between different components in pruning vits different from the homogeneous components of cnn the components of vit are always heterogeneous thus this work constructs a taylorbased optimization function to take full advantage of the interactions between heterogeneous components to avoid the huge computation cost of the hessian matrix this work derives an approximation to transform the hessian matrix into pruning ratios finally it solves the optimization target towards the optimal tradeoff between accuracy and computational cost this work is validated on deit and swin on imagenet and also on detection experiments strengths 1 this work is wellwritten and easy to follow 2 the motivation of this work is clear weaknesses 1 please verify the definition of w as w b w w while the optimization target eq1 is min l lw w lw thus l lw w lw lb w lw and cw w cb w i think this target is not correct which maybe cuased by the definition of w 2 based on the definition of w i think we cannot get eq2 from eq1 3 the pruning results on swin are directly trained on imagenet1k while swin can achieve much better performance after pretraining on imagenet22k so the author should also provide experimental results on it no need to train swin on imagenet22k but directly load the pretrained weights on imagenet22k then do pruning along with finetuning on imagenet1k please refer to weaknesses docsepthis paper presents a model pruning method for vision transformers by jointly considering multiple possible pruning dimensions to address the problem of joint optimization a new collaborative pruning is designed experiments on multiple backbones ie deit and swin and multiple tasks ie imagenet classification and coco detection show the effectiveness of the method strengths the idea of jointly considering multiple pruning dimensions is natural and wellmotivated the method is tested and works well on multiple tasks and backbones weaknesses the idea of joint optimization of the vision transformer architecture is not very new many previous methods have explored the joint optimization problem for network acceleration recent work like autoformer also considers multiple dimensions for vision transformers table 3 presents an important experiment to compare with previous stateoftheart pruning methods since vitdeitbs are considered as the standard models in many previous papers it is better to provide the results on multiple model sizes eg vitbst to clearly show the effectiveness of the proposed method according to table 6 pruning multiple dimensions may not lead to ideal actual speedup on gpus the method introduces a new pruning algorithm which may not be easy to implement since the code is not available i am a bit worried about the reproducibility of the method post rebuttal i would like to thank the authors for the detailed feedback and additional results the response addressed my concerns about the insufficient experiments and actual speedup on gpus i would like to upgrade my rating to 5 limitations of the proposed method are not discussed docsepthis paper presents a new neural network pruning method for vision transformers the proposed technique can effectively accelerate most vision transformers such as vit and swin transformer by collaboratively pruning components such as multihead selfattention hidden neurons and embedding neurons extensive experiments show that the proposed technique is efficient yet competitive in accuracy as 
compared with the stateoftheart the manuscript has the following pros as follows 1 the paper has a clear motivation and the proposed technique is based on the theoretical analysis 2 the paper is clearly written and easy to follow in general 3 a large number of experiments have been performed to validate the performance in various aspects i have some small concerns about this paper as listed first in sec 44 the author said that the proposed method can bring accuracy gains when compressing the deitbase i think this conclusion is a bit inappropriate as described in sec 41 and 42 the pruned models will further finetune 300 epochs which means that the pruned models have a longer training schedule than baseline models the accuracy gains may come from the longer training schedule in addition the pseudocode for the whole pruning algorithm in the appendix is the core contribution of this work which should be moved to the main paper yes docsepthis paper proposes savit a structureaware pruning method for transformerbased architecture which jointly prunes parameters in different components by considering interactions between these components experiments on different vit architectures and vision tasks demonstrate the effectiveness of savit strengths the idea of collaboratively pruning all components of a model is interesting the performance gain is impressive when compared to sufficient stateoftheart methods this paper is wellwritten and easy to follow weaknesses its not clear how much computation cost and data is needed for the proposed method during pruning when compared to stateoftheart network pruning methods the main novelty of this work is the interaction of different components during pruning so the ablation study on this design is important in my understanding line 297305 and table 7 aim to give such ablation studies while its not clear whether the setting of without secondorder interations line 300 in this ablation study means dropping all hessianbased terms in eq 4 or only drop the crosscomponents terms green blocks in figure 2b i think the latter one can better reflect the main contribution of the proposed method the authors havent discussed any limitations
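The reviews describe SAViT's pruning objective as a second-order Taylor expansion of the loss over all ViT components, with an approximation that avoids materialising the full Hessian and turns it into per-component pruning ratios. The paper's actual cross-component derivation is not reproduced in the reviews, so the snippet below only sketches the generic ingredient: a Taylor estimate of the loss change from zeroing a parameter group, with the Hessian replaced by the common empirical-Fisher diagonal. Function and variable names are illustrative, not SAViT's.

```python
import torch

def taylor_pruning_scores(loss, param_groups):
    """Estimate delta_L ~= -g.w + 0.5 * w^T H w for zeroing each parameter group,
    using an empirical-Fisher diagonal (H_ii ~= g_i^2) instead of the full Hessian.
    Lower scores suggest components that are cheaper (in loss) to prune."""
    grads = torch.autograd.grad(loss, param_groups, retain_graph=True)
    scores = []
    for w, g in zip(param_groups, grads):
        first_order = -(g * w).sum()
        second_order = 0.5 * (g.pow(2) * w.pow(2)).sum()   # diagonal Hessian surrogate
        scores.append((first_order + second_order).item())
    return scores

# Usage (hypothetical): score attention heads, MLP neurons and embedding dims on a
# batch, then allocate pruning ratios across components from the relative scores.
```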
### Summary:
|
the paper received three positive reviews and one negative review the raised issues concern technical correctness imagenet22k pretraining insufficient experiments and speedup on gpus computational cost and clarity on ablation studies during the rebuttal and discussion phases most of the issues are addressed and reviewers are willing to upgrade their scores after checking all the reviews rebuttals and discussions the ac agrees with the reviewers that the raised issues are well addressed the authors shall revise according to the suggestions to further improve the current manuscript in the cameraready submission also the comparison to token selectionbased vit acceleration methods a shall be included in the experiments a not all patches are what you need expediting vision transformers via token reorganizations liang et al iclr 2022
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
these are several concerns 1 in the view of motivation i dont think the motivation is strong enough and is convincing also i dont think rewarding correct predictions but not penalizing incorrect ones is a reasonable way in my opinion rewarding the correct predictions may be a good way but penalizing the incorrect ones should also be important 2 in the view of experiments though the authors add table 7 in the appendix which is the result for training 90 epochs i still doubt why eureka loss does not work better than recent works when training 200 epochs which is also a common setting recently and it seems that using ce at the beginning of training is important and cb works the best moreover in table 2 the results on few are especially not very good comparing with others which makes it harder for me to believe that rewarding the highlikelihood area really matters a lot for tail classes it seems that the experiment results are not strong enough to support the proposed opiniondocsepthe submission makes an intriguing claim that retaining focus on correctly predicted rare classes can improve performance for training with classimbalanced datasets to illustrate this claim the paper shows that one can find improvements at overall accuracy if a combination of the focal loss which weights down examples with high predictive likelihood and the crossentropy loss is used such that the loss transitions from the focal loss to the ce for examples with predictive confidence above a threshold for examples belonging to the top rarest classes on longtailed cifar10 this produces a mild improvement at overall accuracy around 05 when the top 40 rarest classes receive this mixed loss further experiments with cocodetection finds sparse improvements around 04 when applying the mixed loss to the tail classes based on the above findings the paper argues for not weighting down confident predictions especially if these belong to rare classes however perhaps these experiments are insufficient to arrive at such a conclusion to ensure that the minor improvements in cifar10 are in fact due to the claimed reasoning one could also look at other combinations of the losses that do not conform to the claim for example apply the loss to the top k most confident examples without stratifying by rare classes randomly select k of images etc for cocodetection apply hfl to the head classes and fl to the tail classes since improvements are so small it would also be nice to see some standard deviation bars over multiple trials also were the choices of focal loss hyperparameters made to elicit their best performance from figure 2 it looks like it underperforms the crossentropy loss the paper proposes a new loss meant to reward the wellclassified rare examples this augments the crossentropy loss with a log1py term scaled with a number that reflects the frequency of class y such that rarer classes are scaled higher experiments have been conducted on 2 image classification datasets and 1 dialogue dataset in all cases the proposed loss appears to result in improvements over baselines some questionscomments about the experiments it appears that the proposed loss performs particularly well when combined with cb are the competing methods also similarly augmented for table 4 why is the focal loss only evaluated for 2 settings of gamma shouldnt there be a hyperparameter search and the best gamma used there are a lot of comparisons with a lot of numbers being taken from past reported results for all such comparisons has it been ensured that the 
architectural and training details are fixed across comparisons otherwise the comparisons might not be fair especially given that reported improvements are minor especially when improvements are minor it becomes important to look at aggregate numbers so id suggest reporting standard deviations over multiple trials for all experiments some typos down applications downstream applications a effective number an effective number thus the likelihood so that the likelihood deferred courage deferred encouragement overall the paper is clearly written and reports exhaustive experiments with the caveatsquestions above while the motivating experiments in section 22 are not very compelling in part due to the very minor improvements the key intuition that the classification of rareclass hard examples should be continued to be encouraged so that their predictive confidence doesnt drop as these examples are weighted down by some of the other methods sounds interesting although some of the phrasing about rewarding wellclassified examples can be a bit awkward my main concerns as of now are about experimental details which are described above in the questions post rebuttal thanks to the authors for responding im still not sure if the experiments are particularly compelling there appear to be differences amongst the baselines with regards to class balancing and the motivating section is still weak there are new experiments on a larger dataset but now with a different loss simplified el which is close enough to the proposed loss that this does not work very well as a motivation anymore apart from this taking some of the comments from the other reviewers and the authors responses into account i am retaining my initial ratingdocsepthis paper deals with learning imbalanced class distributions first it empirically finds that the highlikelihood area for the rare classes benefits classification then based on the findings it proposes a new learning objective called eureka loss which can be viewed as a combination of the frequencybased and likelihoodbased methods to reward the classifier when examples belong to rare classes in the highlikelihood area are correctly predicted empirical results on two typical tasks ie image classification and language generation tasks illustrate its superiority compared with other baselines pros 1 overall it is wellwritten 2 it clearly discusses the existing two methods ie frequencybased methods and likelihoodbased methods furthermore it highlights the limitation of likelihoodbased methods that they neglect the correctlypredicted tail class examples 3 the motivation for the design of the new learning objectiveie eureka loss is based on the empirical finding that the highlikelihood area of the rare examples is important to improve the performance cons 1 the finding is mainly on empirical observations which may lack theoretical support why is the highlikelihood area of the rare examples is important for generalization 2 for the experimental settings eg inaturalist 2018 the iid assumption does not hold for the training and test set 3 for the experimental results how to tune the hyperparameter of the eureka loss in validation set or test set since the reason in 2 i guess the hyperparameter selection becomes difficult minor comments for the last subfigure in figure 1 the ordinate value for the loss is negative which is wrong docsepsummary this paper made a finding that weighting up correct predictions for rare class examples also can help to improve the performance of imbalanced 
classification in light of this finding it proposes the eureka loss to add additional gradients for examples belong to rare classes in the highlikelihood area when correctly predicted experiments on several largescale benchmarks demonstrate its effectiveness pros the paper is clearly written and easy to follow the experiments are thorough and demonstrate the effectiveness cons while the finding is quite interesting i think the design of the proposed algorithm is quite arbitrary its not clear to me why the authors choose to add a term for rare classes rather than changing the weights directly why dont the authors use hfl in the end currently it seems that there lacks complementary theoryintuition that could explain why weighting up the already correctly classified rare examples help with the performance additional questions figure 4 seems quite interesting it seems that the functionality of eureka loss is quite different from hfl i could intuitively understand that the eureka loss function would encourage the examples to have likelihood of either 1 or 0 have the authors visually checked the examples with a likelihood of 0 does that mean training on a carefully selected subset gives better performance postrebuttal update i thank the authors for the responses while i still think the idea is potentially interesting and original i could not increase the score given the fact that this manuscript is naturally incremental without theoretical justifications
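Taking the reviews' wording literally, the proposed objective adds a frequency-scaled log(1 - p_y) term to cross entropy so that confident correct predictions on rare classes keep receiving gradient. The snippet below is only that literal reading; the inverse-frequency weighting, the alpha scale, and the name eureka_like_loss are assumptions for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def eureka_like_loss(logits, targets, class_counts, alpha=1.0, eps=1e-6):
    """Cross entropy plus a frequency-scaled log(1 - p_y) reward term.

    This follows the reviews' verbal description (CE augmented with a log(1 - p_y)
    term whose weight grows for rarer classes); the exact scaling in the paper may
    differ, so treat this as an illustrative reading, not the reference loss.
    """
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_y = logits.softmax(dim=-1).gather(1, targets[:, None]).squeeze(1)
    # rarer classes get larger weights, e.g. inverse frequency normalised to mean 1
    freq = class_counts.float() / class_counts.sum()
    w = 1.0 / (freq + eps)
    w = w / w.mean()
    reward = torch.log1p(-p_y + eps)   # log(1 - p_y), strongly negative when p_y is high
    return (ce + alpha * w[targets] * reward).mean()

# toy usage with 3 imbalanced classes
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
counts = torch.tensor([1000, 100, 10])
loss = eureka_like_loss(logits, targets, counts)
```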
### Summary:
|
this submission got 1 reject and 3 marginally below the threshold the concerns in the original reviews include 1 lack of theoretical justification the motivation and claim are from empirical observation 2 the performance improvement is minor compared with the existing methods 3 some experiment settings and details are not explained clearly though the authors provide some additional experiments to the questions about the experiments reviewers still keep their ratings the rebuttal did not address their questions ac has read the paper and all the reviewsdiscussions ac has the same recommendation as the reviewers the major concerns are 1 the theoretical justification is not clear the additional explanation given by the authors in their rebuttal ie the prediction becomes sharper and thus the model generalization ability can be improved is not justified 2 the experiments are not very convincing and can be further improved in the following two aspects 1 the motivation experiments should be conducted in a consistent manner instead of using simplified el in some cases 2 the effectiveness of el should be more significant otherwise it is not clear whether the claim is true or not at the current status of this submission ac cannot recommend acceptance for the submission
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposes a glipv2 model trained on visionlanguage grounding where localization and visionlanguage understanding tasks are reframed through the lens of grounding to have a unified model additionally a new inter imagetext token loss is introduced which provides some performance gains strengths the losses and unified framework introduced in the paper are sound and an important direction for visionlanguage work it is ideal to have a single model capable of performing both grounding and understanding type tasks and making the visionlanguage understanding tasks more grounded for interpretability and improved performance the figures are helpful well made illustrations and there are extensive experiments for the papers method to be validated the method does outperform its predecessor glip consistently albeit small gains but it does have large performance improvements on vl understanding tasks compared to prior work mdetr and others eg phrasecut vqa weaknesses while there are some performance improvements over glip i dont see significant method changes other than the inter loss this loss does not seem to contribute that much 12 point improvement compared to the ablations without the loss and the performance improvement compared to glip is quite small the authors did not include limitations or negative societal impact discussion docsepthis work proposes a new vl grounding framework called glipv2 which unifies several localization and vl understanding tasks in a same unified interface it shows that doing pretraining on localization data imagetext pairs with this setup improves downstream model performance on all taska and achieves sota performance on most of them it inroduces an inter sample contrastive loss which improves performance strengths proposed work improves sota performance on a variety of localization and visionlanguage understanding tasks the paper is well written and structured they present both finetuning with individual task specific heads as well as prompt tuning and using same weights for different tasks for zero shot the authors do thorough ablation on all the different combinations of the pretraining objectives and dataset combinations weaknesses i am concerned about the novelty as the additional loss term is somewhat similar to well known regionword loss applied over full batch in multiple works and same setup as glip but showing performance on new visionlanguage tasks although this work shows better performance compared to previous works i wonder if this is also due to additional data used for pretraining which is the localized data generated by glip it seems there is additional localization data generated using glip model that is used for training if so are the glip and glipv2 models trained in a teacherstudent setup in fact how does a glip model perform if retrained on this additional data kind of like selftraining are the glip baselines in table 1 using this additional localization data if not then the comparison is a bit unfair line 1184185 is unclear to me what the authors are trying to say i understand the explanation why interimage regionword loss is different from clip but the methodology is not very clear table 2 if there is triantest overlap i would suggest the authors to remove those results authors should have been careful about this while preparing the training dataset to deduplicate the train set against any downstream testvalidation sets yes they have but i would encourage them to discuss in detail any potential negative societal impact of their work 
docsepthis paper presents a new vl model glipv2 that builds upon the original glip v1 model centered around multimodal pretraining glipv2 aims to unify visual localization tasks eg object detection and uses vl understanding tasks like visual question answering to do so three distinct pretraining tasks are used 1 phrase grounding where the model computes the alignment between image regions and tokens 2 and 3 the standard masked language modeling task notably taskspecific classification heads are not used for pretraining in finetuning on downstream tasks glipv2 performs competitively the model can also be used in zeroshot and prompttuned settings strengths similar to clip glipv2 can perform openvocabulary tasks because of the classificationtomatching trick that computes the dot product between the fused visual and linguistic representations this means the model is particularly adaptable compared to many other vl models because it can handle new and outofdomain visual classes the zeroshot and prompttuning experiments have impressive results and are in and of themselves exciting to see as tasks for a large vl model the ability to evaluate models on downstream tasks with either little or no parameters updates and remain comparable to full finetuning shows the model has learned a lot during pretraining weaknesses glipv2 uses glip v1 to generate bounding boxes for the unlabeled image text pairs in the pretraining data versus just an offtheshelf object detector that has not had linguistic supervision this also clouds the data and pretraining approach a bit given how similar glipv2 is to glipv1 the text transformer appears to use the text transformer from clip including its pretrained weights let me know if im misunderstanding if it does use those weights this skews these results because clips text transformer already had visual supervision from the contrastive pretraining section 5 is titled conclusion and social impacts without a description of social impacts any biases learned by glip v1 for instance having significantly lower detection accuracy for certain demographics of people in images or performing worse for images where people do not fit in the gendered roles of the training data propagates to the pretraining data for glipv2 as well this is compared to detection datasets that were handannotated its a component of using largescale data and particularly using large scale data labeled by another ml model that the authors should address this in their limitations section
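The classification-to-matching trick mentioned in these reviews replaces a fixed classifier head with similarity scores between region features and prompt-token features, which is what makes the label space open-vocabulary. The sketch below shows only that generic dot-product matching; the normalization, temperature value, and feature sizes are illustrative assumptions, and GLIPv2's actual heads, deep fusion, and inter-image contrastive loss are more involved.

```python
import torch

def region_word_logits(region_feats, token_feats, temperature=0.07):
    """Classification as matching: score every image region against every text token.

    region_feats: (num_regions, d) fused visual features
    token_feats:  (num_tokens, d)  fused text-token features (e.g. class names in a prompt)
    Returns a (num_regions, num_tokens) alignment matrix; a region's "class" is the
    best-matching token (or span of tokens), so the label space is defined by the
    prompt rather than by a fixed classification head.
    """
    r = torch.nn.functional.normalize(region_feats, dim=-1)
    t = torch.nn.functional.normalize(token_feats, dim=-1)
    return (r @ t.T) / temperature

# toy usage
regions = torch.randn(5, 256)
tokens = torch.randn(12, 256)
logits = region_word_logits(regions, tokens)
pred_token = logits.argmax(dim=-1)   # best-matching token per region
```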
### Summary:
|
all three reviewers provided positive reviews and scores for this paper they were happy to see the strong empirical evaluations and improvements over glip impressed by the zero shot results and found the new combination of pretraining objectives interesting a few questions and concerns were brought up by reviewers that had to do with differentiation to the glip paper and model these concerns including novelty in the loss term tasks accomplished need for detection boxes at training time etc were well addressed by the authors the reviewers also acknowledged that their questions were answered given these positive reviews and discussions i recommend acceptance note to authors please address the comments raised by the ethics reviewer in your final manuscript thank you
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposes to learn neural environment models they term synthetic environments such that planning in the model with modelfree rl results in good performance in the real environment the authors study learning full models as well as rewardonly models they call reward networks investigating the feasibility of learning these models as well as their robustness to different innerloop algorithms and hyperparameters this is an interesting line of work and initial investigation the exposition of the method and description of experiments are quite clear and complete however there are a few weaknesses the motivation for this work deserves more attention the authors suggest that ses could be used for automl iterating on algorithms in expensive environments like robotics agent or task analysis or agent pretraining however im not sure these are particularly well aligned with the authors statement of the problem implementation or experiments most applications are predicated on the relative performance of various algorithms when trained in the se being correlated with performance in the true environment however the problem formulation in eqn 1 describes maximising performance absolutely this is approximated in alg 1 with a generic trainagent function that approximates the argmaxtheta but it seems the specific trainagent functions chosen are highly relevant to potential use cases it is not a generic standin for an argmax the authors certainly understand the relevance of this as they study generalisation of ses to different algorithms and their hyperparameters and even address generalisation algorithmically by varying hyperparameters during training of the ses however this is introduced as a brief comment in the experiment descriptions rather than being central to the method it feels that to use ses for the potential applications discussed it would make more sense to reformulate the problem statement to explicitly account for different innerloop algorithms further simply maximising the final performance of all innerloop algorithms seems quite limited when applications involve comparing different algorithms it might be interesting to algorithmically address this usecase by changing the fitness function to account for the relative performance of different algorithms however it would also be useful to address this empirically given the current formulation of the objective and the authors likely already have the data to do so at least partially does the performance of different hpsalgorithms in the ses correlate with their performance on the true environment it may also be possible to motivate the work from a more traditional modelbased rl point of view which might be more closely aligned to the problem statement given in the paper a closely related work is that of nikishin et al 1 who use implicit differentiation to address the bilevel optimisation nb i dont fault the authors for not citing this i believe it is only available as preprint they argue that models learned only to further optimisation of a policy as in this works problem statement may perform better than traditional models when capacity is limited however the esbased bilevel optimisation is expensive so it may be difficult to make a strong case for this approach another weakness of the study is the simplicity of the environments used i understand the authors may have limited computational resources but it is difficult to know how highly to weight an empirical feasibility study carried out on such smallscale tasks in the absence of more 
compelling empirical findings it is more critical to explore the algorithmic choices and motivations in this vein i did appreciate the investigation of which inductive biases might help during reward learning overall i find the case for rns is perhaps more intuitive denseifying rewards once in an expensive optimisation to allow fast iteration later using an additive potential form even guarantees that this will not change the optimal behaviour sadly it seems this type of rn did not outperform the baseline much in the experiments in the rn case optimising for the speed of training makes sense but i still find the chosen objective strange why not optimise the areaundercurve returns for the full innerloop clipping at a somewhat arbitrary solution threshold seems like it would result in suboptimal rns this appears to have occurred in the halfcheetah experiment in the appendix minor comments in the se experiments it would be good to show the performance of the fixed hyperparams evaluated with the same fixed hyperparams this would show the generalisation gap compared to evaluating with varied hyperparms as is currently shown overall many figures are a bit hard to parse try using more smoothing or subsampling for curves for the barplots maybe rearrange them so the comparable bars ie measuring the same metric steps or episodes are next to each other 1 controloriented modelbased reinforcement learning with implicit differentiation nikishin et al 2021 update the authors have addressed many of my concerns ran some additional useful experiments and clarified their reasoning on several points in the overall discussion with reviewers i still think there is a bit of a gap between the bigpicture motivation for the work and the instantiation we see studied empirically the authors are working on an empirical investigation to help make that connection however even in its absence i am sufficiently positive about the paper overall now to lean towards acceptance i found the study interesting and creative while the work is interesting i find the motivation a little strained at not well aligned with the problem statement or algorithmic instantiation the empirical study is fairly clear but limited by the simplicity of the domains and the restriction largely to assessing feasibility rather than giving clearer indications of how the method could be used in practice docsepthe authors propose a framework for learning to synthesize a proxy models for usually nonlearned components of an rl loop namely transition dynamics and reward as defined in the interacting environment they learn the parameters of these model by metalearning onto the real environment with a learning agent with slight detail variants depending on what the learning objective is showing that not only can this setup lead to successfully learn the proxy parameters but also improve learning performance for the agents first and foremost let me start by saying that i really like this manuscript 1 the writing is extremely clear detailed and makes the paper a pleasure to read i also found that the authors tended to answer questions that were popping as i was reading paragraphs immediately thereafter which is a likely sign of writing that has gone through a lot of careful iterations 2 the authors have gone through a lot of work to justify the choice of optimization methods how and why the framework was setup in certain ways and how the hyperparameters for a now fairly complex learning system were chosen considering the code release and the extremely detailed 
appendices i suspect the work will be extremely easy to reproduce truly commendable and a great example of what good ml research looks like i also particularly enjoyed the frank discussion about the works limitation due to choosing es as an optimization method for the outer loop 3 the central idea of the method is relatively simple but seems to be quite effective it honestly borrows from a lot of previous literature reward learning populationbased optimization etc whilst effectively producing a novel take on learning syntethic environment models so overall i am extremely happy to argue for acceptance as it is now lets look at what i perceive to be some of its weaknesses 4 the manuscript at times feels very dense especially when looking at methodology and experimental details and its understanding is greatly helped from the presence of a sizable appendix it very much feels like the authors could have a feasibly split the paper into two one on es and a follow up on rn providing more space to experimentally analyse es and rn separately and make for a cleaner and easier review process 5 some of the hypotheses presented to justify the improved agent training feel relatively handwavey consider for instance the following quote emphasis mine optimizing ses for the maximum cumulative reward automatically yielded fast training of agents as a sideproduct likely due to efficient synthetic state dynamics it is implied that this metalearning approach leads to generally discover better dataproducing dynamics for the agents used in the system but i wonder whether this is primarily due to the structure of the tasks employed in the experimental settings as state space and transition dynamics of these simple control problems are generally not very informative and the useful policy learning wise parts of these mdp are instead both easily discoverable via search and extremely informative in short i wonder if wed see similar improvements for tasks environments where task difficulty is extremely affected by the emerging complexity of the environment or whether we can actually show that this is a property of ses rns when trained in this manner which would be quite outstanding although that said the effecitveness of rns do seem to indeed point to the latter hypothesis nits additional comments figure 2 the labels are inverted dotted lines should be the thresholds section 6 but generally in many parts of the paper it is claimed that learning rewards is generally easier than learning transition functions would it be possible to find a reference for this my personal opinion is that it should indeed be often the case but that generally it is not true eg imagine an mdp with a small transition matrix but an extremely stochastic reward function this is a good paper it contains excellent writing good research and makes for a great example of what an iclr paper should look like the research problem and proposed methods are interesting and well placed in the literature and the experimental section is exhaustive docsepthis paper aims to learn proxy environments synthetic environments or ses and reward functions reward networks or rns parameterized as neural networks such that these proxy models provide beneficial transitions to make it more sampleefficient to learn a policy for a fixed target environment referred to as the real environment ses replace the observed state and rewards during training while reward networks provide a synthetic reward that is a function of the real reward current state and next state the proposed 
method formulates this problem as a bilevel optimization where the inner loop consists of standard rl under the proxy model and the outer loop consists of nes with the aim of optimizing either the performance on the true target environment ses or a the number of training steps needed to reach a certain return threshold on the real environment strengths this paper extends prior work on metalearning mdp transition dynamics for improving training performance on a target environment by introducing a method that aims to learn both the state and reward transitions this formulation is more general than prior works experimental results are provided for multiple formulations of metalearned mdp transition dynamics spanning both joint state and reward networks ses as well as various formulations of a reward network potential and nonpotentialbased reward shaping parameterizations this paper includes experiments on transferring to different agent architectures and hyperparameter settings which seems to be a novel experimental setting for metalearning mdp dynamics weaknesses the motivation for learning ses and rns for singleton target environments is not convincing while there seem to be some marginal gains in sampleefficiency it would seem these gains are largely made irrelevant due to the additional training needed to train the environment in the first place moreover if ses and rns require first training the agent multiple times on synthetic environments why not just copy weights or use a form of policy distillation eg kickstarting schmitt et al 2018 rather than train the agent again further it seems the basic icm baseline already matches or outperforms the proposed rn approaches across experimental settings further weakening the argument for using the more computationally expensive method proposed in this work given the wide design space of possible ways of learning the state and reward transition dynamics for a target real environment it is unclear why only these two se and rn formulations were studied given that this is an empirical paper the value of the findings rests largely on a clean experimental design that provides a convincing recommendation based on comprehensive experiments the seemingly arbitrary choice for only considering the se and rn formulation does not provide such a convincing recommendation for using either of these formulations rns are effectively an ablation of ses where the state is not learned however the current experimental design does not neatly present these results as a clean ablation this is because the rn experiments also separately consider various kinds of potential and nonpotentialbased reward shaping parameterizations for the learned reward these parameterizations are not studied in combination with ses likewise there is no corresponding ablation for when the reward is not learned but only the state is learned ie a statetransition network or perhaps a stateaugmentation network it is not made clear why exactly ses and rns have to use different objectives when training the rn objective seems strange as it seems to be optimizing for sample efficiency rather than performance in contrast the se objective optimizes for performance this seems like a significant inconsistency between the training methology for the two types of models making it hard to compare the relative merits of ses and rns the reward difference term of the rn objective also caps the maximum performance improvement compared to training on the real environment the motivation for this objective design is 
not clear as not taking the max between the reward difference and 0 should encourage learning rns that lead to outperforming training on the real environment rather than simply matching it further it seems that there is not a consistent ranking in terms of which parameterization for the rns improves training performance on the real environment the most and similarly no consistent ranking between nontransfer and transfer scenarios for each environment meanwhile the se results on cartpole and acrobot would benefit from a direct comparison to the training curves for agents using the best hyperparameters trained on the real environment this would directly separate out the performance gains due to training on the se the comparison to previous work such as zheng et al 2020 seems quite handwavey zheng et al 2020 in particular is quite similar to this work in terms of analyzing the effect of learning an rn and also includes more comprehensive experimental results in a more complex setting requiring recurrent rns the novel contributions in relation to these prior works seems to be around analyzing the impact of varying agent hyperparameters on transferring to alternative agent settings and architectures during evaluation however these transfer results do not seem consistently strong for any one method proposed sampling agent hyperparameter settings from a diverse range seems crucial for the learned ses and rns to transfer to other agent configurations the paper does not discuss how sensitive the proposed methods are to the agent hyperparameter ranges used during training while this work proposes an interesting generalization of prior work on metalearning mdp dynamics the case made for these methods is weak and the experimental resultsespecially for sesare uncompelling given the additional compute required by these methods it seems that various forms of policy distillation or even training from scratch seem more efficient further it is not clear why only certain formulations ses and rns of metalearning neural transition dynamics were investigated while others like a stateonly neural transition network were not given these points i recommend this paper in its current form for rejection docsepthis paper introduces synthetic environments and synthetic rewards as a bilevel optimization problem the main idea is to find ses and res such that an rl agent trained under them will do well in a given known target environment without further training results in figure 2 the performance seems very unstable in particular the reward for individual agents seems to fluctuate widely and even the average starts drooping after the 50th iteration in the arcobot task it would be great to know what the source of this instability is conceptual there are repeated claims in the paper that the synthetic environment focuses the learning on the relevant states or learns representations of the target environment an informed representation of the target environment by modifying the state distributions to bias agents towards relevant states this claim is entirely unclear to me at the moment in particular there simply doesnt seem to be any reason that the se should have much in common with the ground truth environment the only condition is that optimal policies for the two environments match in particular there will be many environments in which the optimal policy is likely much easier to learn than in the original env for example you can imagine a state transition dynamic that is entirely random but provides a large reward for the 
optimal action in each of the given states another option is to learn environment dynamics where the optimal action in each state deterministically transitions to an arbitrary next state such that the agent will see all required states during training it would be great to provide any type of evidence that there is actually semantic meaning in the se that matches the underlying ground truth environment other comments in figure 2 it seems like the legend is switched ie dashed line should be the solved threshold a very interesting paper with early results in a potentially interesting direction it is currently unclear what this method allows to do that was previously impossible from a practical point of view but there seems to be promise id be open to raising my score if my concerns are addressed by the authors update based on the discussion phase and the rebuttal i have updated the score to a 6 i believe this paper has enough merit to be published at iclr if there is space it is an intriguing piece of work even though the practical utility is very unclear at this point
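The additive potential form referred to in these reviews is standard potential-based reward shaping, r' = r + gamma * phi(s') - phi(s), which provably leaves the optimal policy unchanged (Ng et al., 1999). The sketch below shows that parameterisation with a small MLP potential; the architecture and sizes are illustrative assumptions, and in the paper's setting the potential's parameters would be searched by the evolutionary outer loop rather than trained by gradient descent.

```python
import torch
import torch.nn as nn

class PotentialShapedReward(nn.Module):
    """Potential-based reward shaping: r' = r + gamma * phi(s') - phi(s).

    The additive potential form preserves the optimal policy of the original MDP,
    which is the guarantee the review refers to.  The small MLP potential and its
    sizes are illustrative choices, not the paper's reward-network architecture.
    """
    def __init__(self, state_dim, hidden=32, gamma=0.99):
        super().__init__()
        self.gamma = gamma
        self.phi = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, reward, state, next_state):
        return (reward
                + self.gamma * self.phi(next_state).squeeze(-1)
                - self.phi(state).squeeze(-1))

# toy usage on CartPole-sized states
rn = PotentialShapedReward(state_dim=4)
r = torch.zeros(16)
s, s_next = torch.randn(16, 4), torch.randn(16, 4)
shaped = rn(r, s, s_next)
```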
### Summary:
|
this paper proposes a new method for generating synthetic environments and reward networks for reinforcement learning tasks this happens as a nested process policies are learned in an inner loop and environments are evolved in an outer loop the environment representation is quite simple the parameters of an mdp similarly the reward networks are simply neural networks results show that the the learned environments and reward networks are reasonably good at decreasing policy training time by rl the proposed method appears to be simple and quite general and it would be interesting to see how it scales up to more complex environment representations the discussion around the paper centered on understanding various details of the method and on the quality of the results the reviewers generally agree that the paper is easy to read and vary in their assessment of the significance of the results it was pointed out that the generated environments are not necessarily similar to the base tasks but it was nowhere claimed in the paper that they were in fact it could be argued that the dissimilarity makes the method more interesting given the good results of policy training im happy to recommend the paper for poster acceptance if the results would have been more impressive it could have been accepted for a more prominent presentation form however i believe that the method can yield better results in the future with more sophisticated environment representations
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper provides a data set of srh images from a number of types of brain tumors as well as control tissue it also provides several benchmarks for accuracy of classification of these images at the individual patch slide and patient level the community needs more medical imaging data sets and this is one covers a large number of tumor types and patients the benchmarking seems of very high quality and is very thoroughly described the ground truth dataset itself was categorized based on a neural network this is not a problem in general but the description of exactly how this was done is by far the weakest part of the paper who were they manually checked by how much of the data was checked at each stage it is hard to evaluate any of the results if the ground truth is not certain and in a paper proposing a data set and that it should be used as a benchmark it is critical to understand this the data comes from a single institution and a single commercial instrument it is not clear to this reviewer whether one or many companies produce such instruments but if this data only helps purchasers of a single or a few commercial instruments the breadth of its utility seems limited to me there is not a current finalized data use agreement exactly how open opensrh will be cannot be determined until this is true the data is currently structured as either a single extremely large tarball 300gb or perpatient the latter is appreciated but without information about which patients have which tumor types it isnt clear how would would know which patients to subsample the metadatajson files are not available as far as i can tell except in the large tarball docsepthe authors introduce a public dataset of stimulated raman histology images of brain tumors opensrh the authors claim that this is the first dataset of its kind i cannot confirm or deny this because i am not familiar with previous literature in raman histology datasets this dataset includes over 1300 whole slide images from 307 patients the authors also establish classification and representation learning benchmarks on this dataset this is a rich dataset which includes raw acquired data patchwise tumor classifications for the whole slide images and processed patches ready for ml pipelines overall this is an excellent and exciting contribution to the field the authors provide strong motivation for their work specifically that the pathologic diagnosis of brain tumors is usually unknown until surgery whereas it is known prior to surgery for other cancers the authors also successfully motivate the use of stimulated raman histology over hematoxylin and eosinstained histology such as frozen sections that might be used for diagnosis during surgery another strength is in the description of stimulated raman histology and srh imaging moreover the data and code are all available to reviewers at the moment and to the public in the near future and the appendix provides meticulous documentation of the data and training there are very few weaknesses i can identify in this work the authors have done an excellent job one weakness i would like to point out is in the evaluation metrics the authors use accuracy and variations and mean average precision in addition to these i highly suggest including a metric that highlights false negatives false negatives would be unacceptable in a diagnostic task so the authors should provide benchmark evaluation of this and highlight the amount of false negatives in addition i would caution against using accuracy as an evaluation metric 
in this case because the labels are unbalanced based on the number of slides in figure 3 the authors should report metrics that properly handle unbalanced data like auroc and aupr or related metrics docsepthe medical problem being tackled in this paper seems exciting and novel given the reviewers knowledge about medicine which is not much the srh dataset is apparently unique and would allow realtime brain tumour segmentation and classification with significant consequences for patients this being said the reviewer is not convinced that this paper suits the neurips conferences or the neurips datasets benchmarks tracks objectives while the medical added value of the paper could be interesting from what the reviewer can only guess the interest for the machine learning community seems pretty limited the dataset is a medical imaging repository and the techniques being benchmarked are very classical computer vision models and contrastive learning methods it is not clear to the reviewer that this new dataset will foster particularly innovative discoveries on the machine learning front the medical problem is well stated and seems important the github repository is well conceived and userfriendly while the dataset is easy to access and well constructed the approach of the tumour segmentation and classification tasks is rigorous as mentioned above the reviewer does question the added value of the dataset to mls innovation docsepthe paper introduces opensrh a public dataset including 1300 clinical srh images from 300 brain tumor patients and a benchmark for two computer vision tasks multiclass histologic brain tumor classification and patchbased contrastive representation learning the proposed workflow demonstrated in the benchmarking and the dataset aims to accelerate the development of fast reliable and accessible intraoperative diagnosis practices for brain tumor surgeries the authors well documented the need for this dataset through their prior work and sound arguments regarding the specifications of an ideal system for brain tumor surgeries the paper included two welldocumented and explained benchmarks that exemplify their proposed workflow and the significance of the dataset additionally enough details are provided for reproducibility the paper recommends a workflow for classifying brain tumors using machine learning instead of existing resourceintensive workflows however the paper doesnt touch on the accuracy of their automated process compared to the existing anywhere in the paper i think a brief mention of this is needed as a current limitation if there is a dip in accuracy or a strong point if the accuracy is improved the paper didnt highlight or explicitly elaborate on their results in both tables 1 2 although i think the results are rich enough for a rich discussion and comparison of the results docsepoverall i have enjoyed this paper very much and believe that it provides an excellent contribution to the literatureneurips major points rapid intraoperative histopathological assessment of tumor samples seems incredibly desirablevaluable two aspects are highlighted increased speed savings of approx 30 minutes75 of time after a sample is taken and transferred to the slide and through digitization and aicomputer vision techniques a reduced dependence on human neuropathology expert opinion it would be important to test this with a preclinical prospective study what is the influence of ai generated labelsdiagnoses on surgical decision making what are benefitsharms compared to current standard 
of care when and how can overreliance on ai lead to detrimental outcomes etc the dataset together with the workflowcode from the authors will allow rapid development of these protocolsstudiestrials to demonstrate the clinical value of srh as an integral part of tumor surgery and given the open access upon request as it seems will allow other groups to increase the speed of adoption of this technology manuscript very clearly lays out the advance made by switching from frozen sections to srh imaging methods are well described and results well visualized the main findings can be readily extracted srh reduces the time from sample extraction to usable information by about 30 minutes 75 and using contrastive learning on a frozen computer vision model head significantly improves the classification accuracy of patchlevel inference to almost patientlevel accuracy of the simple acrosspatch aggregated data this second finding patchlevel inference matches simpleaggregate patientlevel inference might be highlighted in the abstract and discussion to make it stand out even more clearly data was collected for specific researchtechnology development interest making it a very strong contribution to the field data and code is or will be shared for further collaboration and development the only remaining question i had while reading occurred in lines 8687 could the ai approach also be combined with lightmicroscopic traditional histopathology it would seem important to separate the two contributions to the field srh as a speed up and ai as a reduction in pathology expert reliance for a more finegrained evaluation of the claims what would it take to train a similar ai on highresolution digitized traditional he slides and given how much a srh scanner costs and the lack of expert pathologist availability is that the greater or lesser bottleneck in current practice docsepthe paper introduces a new resource for brain optical imaging namely opensrh which is valuable for optimizing brain tumor surgery the paper clearly explain the motivation of opensrh which is easy enough to follow even without any prior knowledge in brain surgery or optical imaging the paper provides sufficient details regarding the data collection methods and the dataset statistics in addition the paper develop two benchmarks from the dataset ie histologic brain tumor classification and contrastive representation learning benchmarks the experiment protocol for both benchmark is welldescibed and sufficient analysis is presented from the results the published dataset opens up an interesting opportunity for optimizing brain tumor surgery through srh imaging the established benchmark motivates future work to address the domain gap on the existing pretraining data and aggregation methods to better model slidelevel and patientlevel prediction the task described in the paper is wellmotivated and wellwritten patientlevel prediction vs patientlevel prediction despite the poor aggregation method is already briefly discussed on 62 i would expect there would be simple experiment to validate this phenomenon further by for example trying different aggregation methods during inference
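Two recurring points in these reviews, patch-to-slide or patient-level aggregation and imbalance-aware evaluation, can be made concrete with the short sketch below. Mean pooling and the specific sklearn metrics are illustrative choices (the benchmark's own aggregation and metric suite may differ), and all array shapes and labels are toy assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, recall_score

def aggregate_patch_probs(patch_probs, group_ids):
    """Average patch-level class probabilities within each slide (or patient).

    patch_probs: (num_patches, num_classes) softmax outputs
    group_ids:   (num_patches,) slide or patient id of every patch
    Mean pooling is only the simplest aggregation the reviews mention; attention or
    multiple-instance-learning pooling are obvious alternatives.
    """
    return {gid: patch_probs[group_ids == gid].mean(axis=0)
            for gid in np.unique(group_ids)}

# toy setup: 10 slides with 10 patches each, 3 tumor classes
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=100)
slides = np.repeat(np.arange(10), 10)
slide_probs = aggregate_patch_probs(probs, slides)

y_true = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2, 0])           # made-up slide labels
y_score = np.stack([slide_probs[i] for i in range(10)])
auroc = roc_auc_score(y_true, y_score, multi_class="ovr", average="macro")
per_class_recall = recall_score(y_true, y_score.argmax(1), average=None)  # low recall flags false negatives
```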
### Summary:
|
there are contrasting review ratings ranging from 9 to 3 there are reasonable concerns about this paper not being well validated for the ml community the proposed dataset has been benchmarked with some classical and some relevant dl methods i have taken into account the expertise of reviewers and their concerns carefully i do agree with some reviewers opinion that this dataset will be of importance to the medical cv community i do agree with the authors response that the open questions for the ml community that opensrh may foster innovative discoveries include the following 1 domain adaptation between srh images and other histology images such as he images in the large scale tcga project 2 using multiple instance learning mil to avoid expensive dense patch annotations 3 different aggregation methods for patchbased training including clustering attention or mil 4 self supervised learning and comparing different augmentation strategies for srh images and 5 data efficient training of vit architectures using srh data even though the authors have not verified these innovative applications on the opensrh dataset i will go with the acceptance opinion of a few reviewers for this paper
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper explores the longterm occlusion problem in multiobject tracking and proposes to using trajectory forecasting methods to compensate the tracking losts the forecasting module is conducted in birdeyeview the method can advance stateoftheart trackers on the mot challenge dataset overall i like the idea in this paper i agree that most current multiobject trackers do not tackle longterm occlusions the solution in this paper is sound and makes sense to me however i feel that the work of this paper hasnt been finished yet it doesnt have supportive experiments i hope the authors can improve the draft with the following comments strengths 1 the paper is wellwritten and easy to follow 2 the paper tackled an existing problem in multiobject tracking and the solution is reasonable weaknesses and suggestions 1 it makes sense that longterm occlusion rarely occurs in mot challenge datasets however the paper should have supportive experimental results current results boost tracker by 01 hota cannot prove the effectiveness of the method i suggest the authors try to test the oracle of forecasting matching and show the advanced percentage of the method in this part of errors 2 how about doing experiment in autonomous driving datasets the dataset has camera parameters the authors discussed the limitations in the paper docsepthe paper investigates the exploitability of trajectory forecasting in multi target tracking first unmatched trajectories and unmatched detections are projected in birdeye view bev trajectories are then extended according to some model and tested for association projection in bev also introduces visibility constraints that further reduces the search space of true matchings the author validate their projection method against the true 3d position of pedestrians in motsynth then study the applicability and benefits of different trajectory forecasting models they show that the proper combination can improve the hota score and reduce id switches of 7 sota methods on mot17 validation set in the supplementary the authors detail how their projection method can be extended to moving sequences paper is well written and nicely structured the reading is fluent figures and tables are also helpful in the understanding of the key steps the analysis of the contributions is carried out with an appropriate level of rigour results are in favor of the proposed approach hota is a complex and composed measure that tries to unifyproject many aspects of tracking to a single scalar so the authors have to compensate by also using assa asspr as well as id switches why did the authors thought these metrics to be more appropriate than id recall a which measures the ability of a tracker to associate the same identity to a trajectory despite occlusions and interruptions isnt this exactly what the authors are trying to improve is the id recall in fig1 the one from a a performance measures and a data set for multitarget multicamera tracking from what i can see from tab1 most of the gain comes from a linear prediction method in 3d space other than that there is only a 05 hota left to gain from other trajectory forecasting methods in this perspective i think the paper is overstating the importance of trajectory forecasting methods in multi target tracking and not helping the reader draw the correct conclusions 3d tracking has been around since motchallenge 15 and kalman filtering even earlier and this seems to be the thing that explains 90 of the performance improvement 5411507154525071 for the same reason as above 
i would have liked to see a comparison on 3d mot 2015 in fig4 b we see a small difference between h and pixels l2 and a large difference between pixels l2 and pixels the only way the reviewer can explain the difference between pixels l2 and pixels is through the use of a threshold in different domain which seems to indicate that the threshold in bev is less tight than the threshold in pixel space as a matter of fact 2m seems like a very large margin to associate a trajectory and a detection how did the authors choose this threshold i think to remember that for both mota and idf1 the threshold for 3d tracking was 1m instead of 2m can be checked in the motchallenge evaluation kit lines 127129 simply applying trajectory prediction to mot is not trivially possible are misleading the community has been doing this for years and its also reported in tab1 kalman filter pixel with decent results nothing else to add docsepthis paper investigates the problem of longterm multiobject tracking this is a relatively underexplored problem that worths investigation since most existing tracking methods focus on shortterm tracklets usually shorter than 2s the main contribution of this paper is that they project the scene into birdeye views bev using homography transformation and then apply trajectory forecasting and tracking jointly in the bev space the original version described in the main text supports only videos captured from fixed views and an extending version described in the supplementary material steps further to support videos captured from moving cameras results on standard test sets suggest the proposed method improves tracking accuracy when combined existing trackers and refreshes stateoftheart when combined with the best performant tracker bytetrack originality good existing mot datasets and evluation metics focus more on shorttime tracking accuracy while many methods already perform fairly well on these metrics they always fail when targets lost for relatively long time this paper introduces trajectory forecasting in the bev space to handle longterm lost tracklets and shows promising results to my knowledge this is not wellexplored in previous literature so the originality is good quality good the entire pipeline is simple easy to understand and seems work well for longterm tracklets my biggest concern is about the forecasting module from table 1 it can be seen that applying kalman filter in the bev space already performs well in terms of both prediction and tracking and using advanced learning based methods seems do not bring too much gain this is somehow below my expectation since kalman filter is such a simple linear model perhaps the problem lies in that most motion in the considered dataset is linear and in this case comparing different forecasting modules in a dataset with more complicated nonlinear motion partterns may be helpful to validate the effectiveness of gan clarity good but can be improved in general the presentation is clear and easy to follow however it would be better if key experimental conclusions are highlighted more clearly especially for table 1 significance good the longterm tracking problem is absolutely an imporant problem that is not wellexplored due to limitation of existing datasets and metrics this paper sheds new light on a promising direction towards solving this problem combine forecasting and tracking in bev space the authors carefully discussed the limitation and and potential negative societal impact of their work in the main text docsepthis paper discovers 
that trajectory predictions for moving agents will significantly reduce this search space and explores how trajectory prediction can improve the longterm tracking robustness of mot furthermore they show that their proposed method reasons about mot in the birdeye space and generates a small yet diverse set of forecasts while accounting for their localization uncertainty therefore they manage to advance stateoftheart trackers on the public benchmarks strengths this idea is interesting and it will benefit the development of the mot community considering occlusions are one of the main challenges in tracking this work proposes a novel way to overcome this concern this paper identifies several hurdles in the integration of trajectory prediction and mot besides there are several interesting and inspiring conclusions the presentation of this paper is fine and the organization is clear they achieve stateoftheart performance on public benchmarks and it may inspire more followup mot methods on this path weaknesses the framework seems too complex it includes five independent submodels eg a depth estimation network and a segmentation network all of these submodels require separate training which may degrade the robustness of the framework before the deep learning era a few works already attempted to incorporate crowd motion prediction or crowd motion models eg social force into mot some of which are listed below itd be better to refer to these works g antonini s v martinez m bierlaire j p thiran behavioral priors for detection and tracking of pedestrians in video sequences ijcv 2006 stefano pellegrini andreas ess luc van gool youll never walk alone modeling social behavior for multitarget tracking iccv 2009 kota yamaguchi alexander c berg luis e ortiz tamara l berg who are you with and where are you going cvpr 2011 wenxi liu antoni b chan rynson w h lau dinesh manocha leveraging longterm predictions and online learning in agentbased multiple person tracking tcsvt 2015 the major limitation of this work the uncertainty of the homography transformation has been discussed their proposed method partially accounts for these uncertainties via simple strategies considering this is a novel exploration along this direction i think the limitations have been sufficiently addressed the potential negative societal impact of their work was not discussed
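for concreteness here is a minimal sketch of the two ingredients this discussion keeps returning to: projecting detections from pixel space to the bev ground plane with a homography and gating track-to-detection association at a metric threshold the homography matrix, the greedy nearest-neighbour matching and the 2m default are illustrative assumptions, not the papers actual implementation

```python
import numpy as np

def pixels_to_bev(footpoints_px, H):
    """project pixel footpoints (n, 2) to ground-plane (bev) metres via a 3x3 homography H"""
    pts = np.hstack([footpoints_px, np.ones((len(footpoints_px), 1))])  # homogeneous coords
    bev = pts @ H.T
    return bev[:, :2] / bev[:, 2:3]  # dehomogenise

def gated_association(track_preds_bev, detections_bev, threshold_m=2.0):
    """greedy nearest-neighbour matching in bev, rejecting pairs farther apart than threshold_m"""
    matches, used = [], set()
    for t_idx, pred in enumerate(track_preds_bev):
        dists = np.linalg.norm(detections_bev - pred, axis=1)
        d_idx = int(np.argmin(dists))
        if dists[d_idx] <= threshold_m and d_idx not in used:
            matches.append((t_idx, d_idx))
            used.add(d_idx)
    return matches
```

whether 1m or 2m is passed as threshold_m directly changes which predictions count as successful associations, which is exactly why the reviewer asks how the margin was chosen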
### Summary:
the paper initially had mixed reviews 4 5 6 7 the main concerns of the reviewers were 1 can better show the improvement on longterm occlusions cbmw 2 lack of results on autonomous driving datasets w camera parameters cbmw 3 questions about the evaluation metrics used yuje tgjz 4 in tab 1 most of the hota gain comes from linear prediction in 3d space ie kalman filters yuje 5 comparison on 3d mot 2015 yuje 6 missing ablation study on the association threshold yuje 7 what is the tracking efficiency tradeoff for forecasting xrjc 8 how to deal with moving cameras xrjc tgjz 9 complex pipeline requires training separate submodels tgjz 10 ablation study on the different view projection methods tgjz the authors wrote a response to address these concerns the reviewers were largely satisfied with the response reviewer yuje still had a concern about the message of the paper point 4 reviewers point a1 and responded the authors replied by asserting that working in bev is already trajectory forecasting i do not agree with that that is just 3d or metric tracking and metric tracking with a kalman filter which explains 90% of the contribution of the paper should not be advertised as novelty nor as trajectory forecasting the view that i am suggesting here clearly helps the reader understand that trajectory forecasting is really of little help in mot about 0.5 hota which is the opposite of what the paper is claiming as i see it the paper has merits eg ways to go from image to bev in static as well as in moving sequences but that is not the story told by this paper the most interesting part being in the supplementary material nonetheless the final ratings were positive 5 6 6 7 and the reviewers appreciated the problem solution to handle longterm occlusions which brings a promising direction for future research the ac agrees and recommends accept the authors should revise the paper according to the reviewers comments and the discussion
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper focuses on the problem of fairwashing detection in blackbox models to be specific fairwashing is a technique used to fool model explanation methods to produce deceptive outputs for unfair models in this paper the authors argue the importance of studying the faiwashing issue and propose a novel framework frauddetect to detect fairwashed models and present empirical results in addition theoretical analysis is provided to show that fairwashing is unlikely to be avoided and the robustness of the proposed detection method towards an informed adversary is also discussed strengths 1 this paper investigates a novel and important problem model explanation is a popular and promising approach for auditing a model with critical usage to avoid unfairness in automated decisionmaking but such model explanation approaches are also vulnerable to being attacked which has been overlooked the problem is also challenging a thorough study on it is needed 2 the paper provides a simple but effective method for detecting dishonest interpretation models that take advantage of the fairwashing technique 3 theoretical conclusion is provided to show that fairwashing is unlikely to be completely avoided empirical results are provided to validate the effectiveness of the proposed detection method and also its robustness towards an informed attack weaknesses 1 the logic of the paper is clear but the expressions in the paper are kind of obscure and not easy for readers to follow the motivation and the algorithm in the proposed detection model are simple and clear but the authors use a lot of long sentences to describe them which makes the simple things complicated it would be better if the authors could use more short and succinct sentences which will improve the presentation and readability of the paper 2 the hyperparameter delta the fairwashing detection threshold is a key parameter in this proposed detection algorithm more discussions on how to choose this hyperparameter in practical use are needed no potential negative societal impact docsepthe authors deal with the problem of detecting fairwashing a maliciously generated explanation such that the model looks fair even if it is actually unfair they investigated fairwashing in the global explanation and found that elimination of fairwashing is difficult and fairwashing can be detected using false and true positive rates then their proposed detection algorithm for fairwashing is to check the deviation of the falsetrue positivenegative rates between the original and explanation models using kl divergence to evaluate the robustness of their algorithm they introduce a fairwashing algorithm with a mechanism evading the proposed detection method the empirical evaluations demonstrate the high detectability and robustness of their proposed detection algorithm strengths the fairwashing is an urgent risk hence the detection of the fairwashing is well motivated to the best of my knowledge this is the first to propose a detection method for fairwashing weakness the authors make an impractical assumption about the defenders knowledge lacks some important references the authors deal with the interesting and demanding task of detecting fairwashing and most of the results are technically sound however i found a fundamental flaw in the problem setting by which they deal with the situation where there is no risk of fairwashing due to such a considerable flaw i recommend the rejection of this paper the problem setting looks weird the authors assume the detector can query the 
original model b in a blackbox way however if we can query b we can also calculate the unfairness score of b such as the difference of the conditional probabilities in the demographic parity by which we can confirm the fairness of the original model b hence under the assumption of having accessibility to b the defender easily detects the unfairness of the original model b in other words there is no risk of fairwashing in this context while this paper adequately validates the fact that their method can detect fairwashing in this context the detectability is obvious due to accessibility to b the statement of theorem 1 is not rigorously defined what is meant by completely eliminating fairwashing also what is meant by sufficient this paper lacks the comparison with the following studies k fukuchi et al faking fairness via stealthily biased sampling aaai2020 d slack et al fooling lime and shap adversarial attacks on post hoc explanation methods aies2020 both papers investigate the risk of deceiving fairness remarkably the first paper discusses the detectability of malicious modification for deceiving unfairness under situations where the detector can access the labeled benchmark dataset the accessibility to b makes the situation one with no fairwashing risk hence i recommend the authors reconsider the setting so that it becomes meaningful docsepthis paper presents a theoretical analysis of fair washing and demonstrates that completely reducing fair washing in certain cases is not possible in addition they present a simple technique that looks at the kl between surrogate model confusion matrices to detect fairwashing they evaluate their technique and find it helps detect fairwashing issues strengths interesting problem the empirical evaluation of fraud detect is comprehensive considering many different models and settings weaknesses the main weakness is that the theory is missing explicitness in several places and there are gaps in what the exact setting is specifically here are the main gaps i see the initial problem setup is not defined explicitly enough it seems like the setting is a classifier with a sensitive attribute but the specifics here are lacking for example in equation 1 is this a multiclass or binary class setting is the protected attribute discrete or continuous if its continuous is it allowed to take on multiple values these details are a bit hazy right now making it hard to figure out where the claims apply exactly what are the specifications of the interpretable model does this model mirror the predictions across multiple classes or just a single class the construction of interpretable model i is a bit hard to follow as well and i is introduced as just a simple interpretable model i i think what is exactly meant by the interpretable model needs to be stated more explicitly because right now it is a bit hazy and hard to follow this makes it difficult to follow the statement of theorem 1 where the theorem specifies an interpretable model without this really being defined while the empirical results about the detection algorithm are pretty complete it seems tricky to choose this threshold delta that determines whether fair washing is going on it would be useful to provide more guidance here overall i think this paper needs to provide the problem setting and explicit definitions of what is going on much more rigorously the authors have done a sufficient job docsepthe paper characterizes the problem of fairwashing for interpretable models it proposes a fairwashed model detection method based on measuring the
difference over subpopulations in truepositive and falsepositive rates of the interpretable model wrt the blackbox model it shows the methods sufficiency theoretically and effectiveness empirically even under the assumption of an informed attacker the evaluation is conducted on two interpretable models of four classification models on three widely used fairness datasets strengths 1the problem is wellmotivated fairwashing can be a practical threat in reality 2the theoretical analyses look correct 3the proposed method is comprehensively evaluated and shows decent performance for practical usage in particular the setting of an informed adversary strengthens the generalizability of the proposed method 4the paper is wellwritten wellorganized and easy to follow weaknesses 1the method currently only applies to one fairness definition however in practice other definitions like equality of opportunity are also commonly used they are well addressed
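to make the discussed statistic concrete a minimal sketch of a frauddetect-style check could look as follows it compares the per-subgroup confusion profile of the surrogate explanation measured against the blackbox labels and flags fairwashing when the profiles diverge the binary groups, the specific (tp, fp, fn, tn) parameterization and the threshold delta are illustrative assumptions rather than the papers exact estimator

```python
import numpy as np

def confusion_profile(y_ref, y_pred, eps=1e-8):
    """normalised (tp, fp, fn, tn) profile of y_pred measured against reference labels y_ref"""
    tp = np.sum((y_ref == 1) & (y_pred == 1)); fp = np.sum((y_ref == 0) & (y_pred == 1))
    fn = np.sum((y_ref == 1) & (y_pred == 0)); tn = np.sum((y_ref == 0) & (y_pred == 0))
    c = np.array([tp, fp, fn, tn], dtype=float) + eps
    return c / c.sum()

def kl_div(p, q):
    return float(np.sum(p * np.log(p / q)))

def fairwashing_flag(y_blackbox, y_surrogate, sensitive, delta=0.05):
    """flag the surrogate when its fidelity profile to the black box differs across subgroups"""
    profiles = [confusion_profile(y_blackbox[sensitive == g], y_surrogate[sensitive == g])
                for g in np.unique(sensitive)]
    gap = max(kl_div(p, q) for i, p in enumerate(profiles)
              for j, q in enumerate(profiles) if i != j)
    return gap > delta, gap
```

as the reviewers note the practical usefulness of such a test hinges on how delta is chosen, which the sketch leaves as a free parameter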
### Summary:
the reviewers were split about this paper on one hand they appreciated the motivation and the comprehensive experiments in the paper on the other they were concerned about the clarity of the paper and even worried about a potential flaw i have decided to vote to accept given the clear and convincing author response i urge the authors to take all of the reviewers suggested changes into account if not already done so once done this paper will be a nice addition to the conference
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors introduce prospect a database for proteomics research based on a massive collection of lcmsms datasets already available from the proteometools project the paper is well written and the package adds some functionality to the original database in particular there are some advantages over existing packages in terms of usability and the authors provide peak annotations from an expert annotation system which is publicly available reference 34 in the manuscript the application area is important and the authors produce a package which could make the use of proteometools data more readily available to the ml community the figures in the paper give a good summary of the database the primary weakness of the paper is the relatively limited original content the package is a fairly straightforward extension of the proteometools database which has been used in many studies of peptide lcmsms previously in my opinion it is not comparable to papers that have been accepted to this neurips track in the past more detailed comments 1 there is little guidance in the paper or package on how to do data splits furthermore the paper does not provide baseline methods for benchmarking 2 one of the tasks included in the package is retention time prediction section 41 makes it clear that existing methods already produce very accurate predictions so it isnt clear why this is a task worth highlighting 3 the authors acknowledge that extrinsic validation metrics based on downstream tasks such as fragment presence prediction or de novo sequencing are more meaningful however the package only suggests a few intrinsic validation metrics for retention time prediction and spectrum prediction the paper would have been stronger if the authors had included datasets for downstream tasks and some clearly defined benchmarks 4 the prediction method used to produce figure 8 is not included 5 no evaluation metric is provided for spectrum prediction when the peak identities are also predicted 6 further discussion on the meaning of y and b fragment ions and neutral losses would be helpful for nonexperts a description of the expert annotation system for losses used reference 34 would also be good 7 in section 32 it is not clear what this sentence means to keep complexity low and identification rate high docsepthis paper presents a newly annotated and integrated protein ms dataset based on the proteometools dataset here the authors have combined all of this data into a single resource with associated metadata the authors then present what they consider to be a sort of best practices for evaluating and presenting analyses on two different predicitions tasks retention time and msms spectrum the ultimate goal here is to provide an efficient resource dataset for training new ml models that can be used to make protein ms technology more accurate and sensitive the combination of metadata overview of the dataset composition and recommended metrics and visualizations is a strength and will help new comers into the world of proteomics analysis with ms much of the paper reads like a review paper which is also helpful for those new to the protein ms field the dataset is not new though has useful additions the number of benchmarks presented here are few to none docsepthis study curates and annotates the proteometools dataset a proteomics dataset that can be used to train models in predicting properties for mass spectrometry msms using experimental data the authors propose 2 tasks peptide retention time and spectrum prediction the authors 
also provide an api to download and extract relevant data and discuss metrics to evaluate the tasks proposed the paper highlights an area of research that is not as commonly represented in csml conferences they reference several papers broadly interested in msms predictive tasks that employ a variety of different architectures 1 this paper proposes tasks but does not train any models to verify the utility of their dataset or the validity of the tasks provided it is unclear what the benchmarks are and how the community can define improvement is based on the dataset it is also unclear how generalizable these tasks are to the broader protein space 2 it is unclear how they generated data section 31 discusses data generation but i could not find discussion on the experimental details of preparing the peptidesexperimental samples 3 their justification for using proteometools is that it is commonly used has high quality spectra and only uses synthetic peptides the former two points raise the question why massive data was not used which also has the same properties the latter point on why synthetic peptides is more attractive than human peptides is nonobvious to someone not wellversed in msms moreover it is not clear why this property is relevant in benchmarking models 4 their review focuses on sequencetospectra style predictions however spectra clustering using sequences as labels not inputs has been an established strategy frank et al 101021pr070361e gris et al 101038nmeth2343 101038nmeth3902 wang et al 101016jcels201808004 the authors should include this in their review as there are several strategies to design these models 5 no analysis is done on the sequence diversitycomposition of the input sequences how much variety is there in the sequence similarity of the peptides 6 authors prefiltered most modifications except for 2 from their dataset which biases benchmarks for the two included what fraction of the dataset contains sequences with these modifications 7 the retention time data plot seems to suggest there is substantial overlap between sequences of different lengths that have the same retention time however protein models can be quite sensitive to input length how does this benchmark consider models that may use trivial heuristics like input length to perform well on a predictive task 8 the authors annotate neutral loss in their dataset but do not explain why this may improve the quality of benchmarking 9 the authors seem to note that the retention time regression task is trivial to compute r2 099 from prior work if the task is this easy then why is it considered a benchmark the authors propose a timedelta but are unclear on what domain insights this could provide could the authors cite examples on how this refined resolution is beneficial docsepthis work is relevant in the field of proteomics analysis with mass spectrometry ms the current dataset proteometools consists of raw data hence current models to predict the retention time spectra given a sequence or both have been processing the data in many different ways this makes it hard to compare these models in this work the authors annotate the existing dataset proteometools and include useful annotations like the identity of the peptide binarized spectra values normalized intensity values retention time and metadata related to the experimental details involved in data acquisition by organizing the data in separate files and filetypes the authors provide an easy way to browse through the data the annotated dataset does not only lower 
the entry difficulty in training such models but also provides an standardized way to train and compare them the dataset is centered in the most commonly used dataset which is nice for consistency with previous work annotating it supposes a great novelty because it could help homogenize the training strategies and comparison between models as well as facilitating working with this kind of data the dataset creation availability and maintenance are clearly detailed which will facilitate the implementation of these benchmark to other models the dataset is readily accessible as well as code to use it the exploratory data analysis provides insights into what the dataset looks like and it is very clearly demonstrated in the article i cant find significant weaknesses a suggestion for future work would be to demonstrate the use of such a dataset in use for a machine learning model docsepthis paper presents a new deeplearning ready dataset for looking at protein mass spec data mass spec is a central instrument for much work across drug discovery and chemistrybiochemistry at large in particular proteomics is a growing area of research esp for pharmaceuticals peptides this dataset is curated and ready for benchmarking where they aggregate and align the data to a deep learning community dataset is a novel type of data modality important for drug discovery no current sota modeling or community efforts for proteomics well written very clear indicating translational skills between experts in mass spec and deep learning data has multiple tasks more detail on what retention time is scientifically make it more clear the size and scale of the datasets in terms of things deep learning practitioners care about num samples feature sizes etc provide a few more examples of what applications and impacts a model which does well on this task will have for example the models you trained have a certain performance what can you do with them and what impact will this have on the scientific community what about if your models were 50 better what would that impact do this is a good way to motivate not just a benchmark but why this benchmark really matters think if on casp we were able to do 50 better than af2 what impact would this have immediately on drug discovery research docsepthis paper presents a new annotated dataset for predicting proteomic properties from sequence data this task is useful for determining which amino acid molecules exist in a sample and to what abundance the authors describe existing machine learning models and datasets that are used for this task they provide a new standardized dataset for predicting multiple proteomic properties and present evaluation metrics for two specific tasks peptide retention time and msms spectrum prediction the application of machine learning methods for predicting proteomic properties using protein sequence data is well defined different stages of the process from data encoding to commonly used architectures are clearly described the authors clearly define two relevant machine learning tasks that this dataset could be used for this includes the experimental procedure for generating the data as well as the annotations the evaluation metrics are well suited for their specific tasks specific innovation compared to proteomic tools not well described proteometools is repeatedly cited as a base for the new prospect dataset however the authors do not clearly describe in what ways prospect is different from proteometools making their original contribution unclear they use the 
term neutral loss as one of their important contributions but dont define in the text what this refers to what other annotation was newly added to the data from proteometools and how much already exists this should be listed clearly extent of usability of prospect the authors claim that prospect would be beneficial for a number of different proteomic tasks but only go into details for two of them the authors should list what these other tasks are what kind of annotation or metadata they require and how prospect is able to facilitate this new capability when proteometools is not without this information it is difficult to asses the full impact of this dataset evaluation of models it would be useful if the authors had trained existing methods in the literature on the two selected tasks and compared performance using the defined metrics this would illustrate the utility of the metrics and also the difficulty of the task is it already solved or is there room for improvement in existing methods using this dataset how to existing approaches differ in their performance significance of prospect the paper focuses on two proteomic profiling tasks that are quite specific and also not well motivated are these tasks foundational in studying the proteome or do they comprehensively characterize it if not it should be clearly specified which other tasks in proteomics could benefit from prospect and what their significance is also further details on how exactly predicting proteomic properties could impact important applications such as the design of therapeutics would strengthen their paper overall this seems like a promising direction for a new dataset but the authors need to spend more time fully developing this resource and outlining its use and significance
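for readers less familiar with the two intrinsic metrics the reviews keep referring to, a hedged sketch of a timedelta-style retention time metric and a normalised spectral angle for spectrum prediction is given below these follow common usage in the prosit line of work, but the exact definitions (eg whether the 95th-percentile error window is doubled) vary between papers, so treat the constants as assumptions

```python
import numpy as np

def delta_t95(rt_true, rt_pred):
    """width of the absolute-error window containing 95% of retention time predictions"""
    err = np.abs(np.asarray(rt_true, float) - np.asarray(rt_pred, float))
    return 2.0 * np.percentile(err, 95)  # some papers report the half-width instead

def normalized_spectral_angle(int_true, int_pred, eps=1e-12):
    """1 - 2*arccos(cosine similarity)/pi between l2-normalised non-negative intensity vectors"""
    a = np.clip(np.asarray(int_true, float), 0, None)
    b = np.clip(np.asarray(int_pred, float), 0, None)
    a /= (np.linalg.norm(a) + eps)
    b /= (np.linalg.norm(b) + eps)
    cos_sim = np.clip(a @ b, -1.0, 1.0)
    return 1.0 - 2.0 * np.arccos(cos_sim) / np.pi
```

note that the spectral angle is only defined over the annotated peaks that both vectors cover, which connects to the reviewers question about evaluating spectra whose peak identities are themselves predicted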
### Summary:
reviews were split reviewers agreed on the importance of the domain but differed in how they valued the contribution over proteometools which the dataset builds upon a primary concern was the lack of baselines and benchmarking the rebuttal argues that baselines would not add much value since strong methods have been published in prior work however my interpretation of the reviewers ask is not to invent new heuristic baselines but to evaluate the prior methods on the newly prepared dataset the authors do add an evaluation of one such model prosit to the revision evaluating more models in the same way could strengthen the contribution and could be part of the ongoing maintenance of the dataset and code overall however the majority of reviewers felt the paper does provide significant value beyond proteometools and other prior work through the new annotations tooling and usage recommendations the revision adds an appendix g to clarify these contributions and it may be useful to move some of that text into the intro of the paper as i think many readers will want to know what the main contributions beyond the base dataset are i agree with the majority that there is significant value added and recommend acceptance
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this manuscript contributes a neural architecture to classify arrhythmia type from ecg data the signal is treated as 1d and the architecture performs joint segmentation-classification detecting the abnormal beats and then classifying them as a function of their origin it uses a unet for segmentation and for classification a cnn and one fullyconnected layer the unet segmentation generates weights that are considered as an attention map and multiplied with the original time series after pooling on a window which amounts to smoothing compared to the prior art the central contribution put forward is the addition of the segmentation component of the architecture the work is light on theory and the contribution mostly resides in the empirical improvement however the evidence for this improvement is not rock solid as it is shown on a single dataset which has a rather small sample size also i fear that hyperparameters are not set fully independently of the final error measure how are hyperparameters such as learning rate or architecture parameters chosen given the procedure exposed in section 52 it seems to me that some of the architecture parameters eg the kernel size were not chosen independently of the test set such a choice will incur a positive bias with regards to the actual expected generalization error with n = 500 and an accuracy of 90% the p = 0.05 confidence interval of a binomial model is about 5% hence the improvements observed by adding the segmentation on top of the classifier do not seem really significant docsepthe paper proposes a framework for the classification of arrhythmias in electrocardiogram ecg data the proposed approach performs segmentation and classification of the ecg signal the segmenter performs segmentation of the signal also called attention map even though the term segmentation is not quite correct this attentionmodulated signal is then classified to identify the origin of premature ventricular contraction pvc the proposed approach is evaluated on a dataset from a single machine consisting of 508 segments i am not sure what segments means in this context the results seem ok but it is not clear to me what level of performance is required in order to achieve a similar level of performance as an expert my main concern is that the proposed approach seems rather adhoc the combination of segmentation or attention and classification in a joint fashion seems hardly new and while the results obtained are good there is no systematic evaluation of how the method compares to other stateoftheart ecg classification methods another problem is that the writing in the paper is not always clear and it is often unclear what exactly the authors are doing as a result it is quite difficult to exactly assess what the authors have done or what they mean detailed comments what is the output of the classifier is this a binary label or a multiclass label the authors write the output of s has only 1 channel and we expand it channelwise so that it matches the channel dimension of the ecg signal what exactly is meant here in fig 1 it seems that the segmentation output has naturally 12 channels should the segmentation be identical for all channels we do not use the output of the segmenter l as the attention map directly but instead perform a pooling with large kernel size first why is this done what does large kernel mean where is the attention map in fig 1 how are the premature ventricular contraction pvc origin labels defined is that a single time point per channel or common for all channels or a time window docsepthis paper
presents a method for segmentation and classification of ecg data applied to the task to segmenting and detecting premature ventricular contractions pvc the taks is semisupervised in the sense that segmentation labels are not required by labels for the pvc events classification are used the authors motivate this application quite well and detecting abnormalities in ecg signals is an important task of clinical relevance i can understand why segmentation labels may be very laborious to collect and unsupervised methods would be desirable the proposed approach builds upon unet and introduces some task specific changes however i would argue that this is primarily an application paper i dont mean that as a criticism necessarily i think that strong and well motivated applications of machine learning are important and informative however it would be helpful if the authors could discuss more about how their approach might generalize to other tasks both the detection of other types of arrythmias and other temporal segmentation and classification tasks my main comments regarding the paper are around the experimental evalutation the authors highlight that there are some published baselines for this task or at least similar related works eg moskalenko et al 2019 oh et al 2019 andor the authors could have applied classification on top of features extracted using pantompkins but that would be a more crude baseline while i recognize that these approaches might not enable unsuperivsed segmentation and so direct comparisons on that might be hard with the full approach they propose it might be possible to present a comparison of classification metrics on their own perhaps i am misunderstanding but it doesnt seem as though table 1 includes such a comparison rather the baselines are different from the previous published methods is that correct i would almost describe table 1 as ablation results rather than a comparison with other published baselines id like to know the authors response to that and if table 1 does show these results perhaps linking the rows to the previous approaches might be helpful or justifying why it isnt appropriate to show these comparisons i dont say this just because the authors should show better numbers but rather to ground the chose baselines in the context of previous work in this space building from the previous point i think this paper would be an excellent case for for showing transfer learning results it seems to me that physionet provides a large amount of available data for ecg classification a couple of question id like to hear the authors responses to 1 why did they not do any experiments on these public datasets is there a reason they are not appropriate do they not have the right labels are they not large enough do you need full 12 lead recordings i am not sure if they are avaiable on physionet datasets but i imagine so 2 even if training your method on your dataset is preferable it would seem natural to test it on a set from physionet perhaps even with a different type of arrythmia to see how much performance degrades this i think would be most informative both showing segmentation and classification results fig 3 is a nice illustration but it is quite difficult to read i might suggest reorganizing it i am not sure showing multiple leads is necessary and maybe limiting to two columns might help id encourage the authors to leverage supplementary material to show more examples as i do think these help finally physiological signals are notorious for having large individual 
variation id be interested to have the authors discuss more about this i couldnt find the information about how the trainvaltest splits were organized and whether this was person independent etc the following sentence in section 42 we apply fivefold crossvalidation with different classes evenly distributed between folds and the average performance is reported doesnt seem to mention that knowing more about the splits would be very helpful this is perhaps another reason that performing experiments on at least one physionet dataset would be helpful as the train val test splits could be released but i acknowledge that the authors say they will release their data which is gooddocsepthis paper proposes a deep neural network for premature ventricular contraction pvc differentiation and segmentation from electrocardiogram ecg signals the network is jointly trained as a segmenter and a classifier with a multitask learning manner differentiation is achieved by the classifier and segmentation is achieved by pooling for windowstyle attention from segmenters output quantitative experiments show better performance than baselines on differentiation tasks qualitative experiments show the effectiveness of segmentation tasks the results look interesting and it might have a broader impact on practical usage for ai models in the clinical environment however my concerns are 1 the topic seems too narrow for the computer science community more likely a paper of the biomedical engineering community or computing cardiology community the proposed method also lacks indepth technicaltheoretical analysis thus the paper novelty is limited 2 the related works include multitask learning and attention mechanisms but image segmentation works are also worth or even more investigating just a simple modification of image segmentation neural networks such as conv2d conv1d can make them suitable for ecg segmentation tasks 3 for the evaluation of segmentation only several cases of qualitative evaluations are not convincing at least a comprehensive user study by a community of cardiologists is needed some questions could you provide more details about data preprocessing which filters do you use what are the cutoff frequencies for highpass filter and lowpass filter in figure 3 are there duplicate attention maps in every column
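to illustrate the window-style attention being questioned above, a minimal pytorch-style sketch of how a single-channel segmenter output could be smoothed with a wide average pool, broadcast to 12 leads and used to gate the signal is shown below the sigmoid, the kernel size of 65 samples and the elementwise gating are assumptions for illustration, not the manuscripts verified implementation

```python
import torch
import torch.nn.functional as F

def attention_modulate(ecg, seg_logits, kernel_size=65):
    """ecg: (batch, 12, T) signal, seg_logits: (batch, 1, T) segmenter output.
    smooth the segmentation with a wide average pool (window-style attention),
    broadcast it over the 12 leads and gate the signal with it."""
    attn = torch.sigmoid(seg_logits)                       # map logits to [0, 1]
    attn = F.avg_pool1d(attn, kernel_size, stride=1,
                        padding=kernel_size // 2)          # pooling with a large kernel = smoothing
    attn = attn.expand_as(ecg)                             # expand channelwise to all leads
    return ecg * attn                                      # attention-modulated input to the classifier
```

written this way the segmentation is necessarily identical across leads, which is one plausible answer to the reviewers question about the 12-channel segmentation output in fig 1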
### Summary:
this paper received 4 reviews with mixed initial ratings 5 6 4 4 the main concerns of r1 r4 and r2 who gave unfavorable scores included insufficient evaluation lack of experiments on public datasets small sample size an adhoc nature and overall limited novelty of the method and a number of issues with the presentation in response to that the authors submitted a new revision and provided detailed answers to each of the reviews separately after having read the rebuttals the reviewers including r3 who initially gave a positive rating felt that this work overall lacks methodological novelty and does not meet the bar for iclr as a result the final recommendation is to reject
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper studies the problem of multitask offline rl and propose 2 methods bail and mbail each is an algorithmic variant based on the bail algorithm the overall goal is to improve performance on multiple tasks this setting concerns with a number of tasks coming from a mdp distribution where transition function is the same and reward function is different the authors propose a 2stage method first use the bail algorithm to boost the performance of a particular task with data from other tasks then distillation is used to distill policies from these tasks into one policy strengths the first stage of improving on each task before distilling seems to be an interesting novelty the problem of multitask batch rl is a direction that is not adequately studied weaknesses the contribution of the paper seems a bit lacking a large portion of bail and mbail are based on the multitask batch rl methods described in li et al 2019 the first stage in the proposed 2stage method is probably novel but im not sure if the rest of the technical content are significantly novel lack of comparison to previous methods in particularly while a majority of the work is based on the mbml algorithm and results from li et al 2019 the proposed method is not compared to mbml and the figures use a normalized yaxis making them look entirely different from what is in the li et al paper thus it becomes impossible to compare the two methods lack of reasoning on why the proposed method would work better than other previous methods this is partly due to lack of comparison to previous methods but the authors also provided very little theoreticalempirical analysis on why the proposed method would work well very little insight is provided at one point the authors mention the problem of performance drop after training bail for a while this is a very interesting problem but it is not investigated further in the paper why is bail a good base algorithm what are the advantages of using it together with your 2stage scheme authors should consider refine their writing and emphasize the unique advantages of such a design and give better insight on this design choice for example perhaps bail is simpler and faster than other methods if thats the case provide a comparison on computation complexity and provide discussions the paper is lacking in clarity of writing a number of things are unclear a large num of typos things are unclear authors should go over the paper carefully and fix all the small issues page 5 top we assume all tasks share the same transition function and the batch is collected via the same policy really unclear what the same policy means you use the same policy to collect data in different tasks page 8 table 1 really not sure what is the difference between bail single and the baseline single task bail i assume these are two different things what is more confusing is that the numbers in the table are mostly negative table 1 the difference between some of the reported performance numbers are quite small but i dont see where you report the number of seed used to get these performance and there is no std reported how statistically significant are these comparisons page 4 bottom by assuming the correct estimation of why is it that such an assumption is realistic a major issue here is lack of consistency and adequate comparison with prior work especially for li et al 2019 where the authors proposed and studied reward relabeling multitask distillation and use the triplet loss it becomes a bit unclear how significant the technical 
contributions this also affects the contribution on empirical results it is hard to evaluate their significance without a proper comparison to prior methods i recommend the authors try to design other analysis or ablations to showcase why the proposed design is a good choice and to further investigate some of the interesting observations such as the performance drop issue after training bail for a long time another major issue is the clarify of the paper authors should go through the paper slowly and refine the writing this is more than just minor issues on grammar and typos the writing in the paper should be very clear and make sense any claims should be made with supportive evidence etc the paper proposes some interesting ideas and the multitask batch rl setting is relatively not well explored but due to the large number of issues its hard to believe the paper is ready for publication in its current state docsepthe paper describes multiple task batch reinforcement learning method building upon a wellknown single task batch reinforcement method bail the method proposes two separate improvements namely bail introducing the idea of leveraging data across tasks with the same transition function and mbail policy distillation approach for multiple batch learning pros the method proposes a good new baseline method for multitask batch reinforcement learning which is the main justification for my score things to improve 1 there is a room for improvement of the experimental analysis see below 2 on the novelty the method heavily relies upon existing ideas especially mbail which combines policy distillation of rusu et al 2015 and triplet loss li et al 2019 which in the reviewers opinion there is nothing wrong about and good methods are often a good combination of existing ideas however it would be helpful if the authors could state explicitly for example in the introduction which new contributions have been made in this paper this would help readers see the particular novelties of this approach and build upon them improvement of the experimental analysis 1while it is shown that the bail method improves upon bail on a set of proposed tasks it still would not answer whether it would also improve the distilled policy of mbail to answer this question is it possible to evaluate an ablation of mbail with bail instead of bail as the backbone for distillation 2more insight should be given into why and when it works better in li et al 2019 for example they additionally experiment with umazegoalm humanoiddirm and halfcheetahvel as well as with walkerparam exhibiting different transition functions for each taskthe latter would be extremely interesting in the light of the ablation study proposed in point 1 so it might be possible to see the behaviour of different methods with different transition functions is it possible to give results on any of these tasks minor comments 1 page 3 section bailbest action imitation learning a minor comment on clarity of writing for someone browsing through some parts of the section it might look like some parts of bail are proposed by the authors especially given the algorithm below to do this we estimate a supremum of the optimal value function v which is referred to as the upper envelope 2 so it might be a good idea to somehow rephrase the parts of the text in this section referencing to we to improve clarity the reviewer understands that it is the background section so it describes the existing work but nevertheless update as the comments mine and other reviewers have not 
been addressed yet there are still open questions which unfortunately do not allow recommending acceptance as things stand at the moment recommending acceptance subject to convincing experimental analysis improvement and improved description of the novelty of hte approach docsepnote that the rest of the review will use offline rl and batch rl interchangeably building on top of bail chen et al 2019 this paper provides two algorithms for the multitask offline rl setting the bail algorithm assumes that we know the identity of the target task and that all tasks share the same transition function it uses sample transfer from nontarget task data to boost its performance on the target task the mbail algorithm builds on top of bail and mbml li et al 2019 where it distills the bail policies for each individual task into one master policy that performs decently on all tasks the authors evaluated the proposed methods on the antdir antgoal and halfcheetahvel environments from mujoco the authors compared mbail bail bail and contextual bcq mbail consistently performed the best out of the first three while contextual bcq performed better on the antgoal env the paper rightfully pointed out that singlebatch setting and a large number of interaction samples limited the use cases of offline rl in the real world the paper provided a viable solution to the multitask setting with some assumptions to the environment and the data collected the evolution from bail to bail to mbail is clear and the experiments showed clearly that mbail is the best out of the three there are several aspects that make me hesitant to recommend an acceptance assuming that there will be new sota batch rl algorithms in the future do the authors have plans to make this work more applicable to nonbail algorithms it is unclear from the paper how easy or hard that would be if it is hard then the community will benefit less from this piece of work section 32 is heavily inspired by mbml li et al 2019 where the authors made some modifications to make it applicable to an imitation learning based policy is it intentional that you do not compare your algorithm with li et al 2019 in the experiments section the major technical contribution in bail is that it proposed a way to use the inductive bias that the transition function remains constant for all tasks and the target task appeared in the batch dataset but with such assumptions it also limits its use case to a narrow set of dataset and environments clear and strong performance can sometimes justify having a narrow focus but as the experiment result shows mbail does not consistently perform better than contextual bcq albeit the latter has privileged info namely the identity of the task clarification questions in figure 3 does the number of epochs mean the same thing for bail and mbail eg for mbail i think it is the constant e in algorithm 2 what about bail it would be great to further clarify in 31 it states note that we assume all tasks share the same transition function and the batch is collected via the same policy is the batch here referring to all data or the batch of data per task suggestions would it be possible to list out the assumptions more explicitly i can see several important assumptions spread throughout the paper and its hard to keep track of them eg 1 data batches are labeled with their respective tasks 2 the tasks only differ by their reward function and they all share the same transition function 3 the target task is known for bail and is a part of the batch dataset for both 
bail and mbail in the meantime the introduction made it sound like the algorithm is quite general in the introduction on the second page you have several sentences that describe how your work differs from previous works and i think the writing there can be improved eg i first read in this work we use sample transfer the proposed algorithm is referred to as bail and then i read in this work we propose to first improve the learning performance on single tasks and then utilize the policy distillation to combine the learned policies into one single policy it is confusing because i wasnt sure whether you use sample transfer or you use policy distillation or both only later did i realize one refers to bail and the other refers to mbail typos section 2 fujimoto et al 2019 proposed an algorithm to learn policies with soft constrain to lie near the batch which alleviate the extrapolation problem fujimoto et al 2019 proposed an algorithm to learn policies with soft constraints to lie near the batch which alleviates the extrapolation problem section 2 and then learning a control policy based on these selection stateaction pairs these selected stateaction pairs although the paper showed that mbail consistently outperforms bail under the multitask setting i struggle to recommend acceptance because 1 the algorithm is closely tied to one specific offline rl algorithm which limits its use cases 2 there could be more baseline comparisons in the experiments section 3 the main technical contribution is specific to tasks where they all share the same transition function which further limits its use cases docsepthis work makes use of transfer learning to deal with the multibatch reinforcement learning problem it improves the bail algorithm on the multibatch setting and further extends it to a multitask model it empirically demonstrates the advantage in three simulated environments textbfstrengths the problem studied by this paper is interesting and the motivation is clearly expressed overall the method makes sense to me and achieves improvements when there are very few training samples for each batch textbfweaknesses this paper needs lots of polishment in writing especially for the second half some explanations are needed for the kl term in equation 6 equation 7 contains many errors and there are many typos in the paper eg for section 32 i think your method is named as multibatch bail not multibatch bail more descriptions are needed for each figure it looks like most of the components in the proposed algorithms are from previous works i think the authors should clarify their contributions the experimental results are not convincing with contextual bcq almost dominating the antgoal game i also recommend the authors to add more baselines eg cds 1 just for example since this one is quite recent but authors should consider some sota batch rl algorithms 1 conservative data sharing for multitask offline reinforcement learning yu et al this paper proposes two algorithms for the multibatch rl problem and experimental results demonstrate their advantage to some extent however the writing of this paper needs lots of improvements the technical contribution is limited and the final results are not solid enough to support the claims i feel this work is incomplete and recommend rejection
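since much of the discussion above hinges on how bail selects data before imitation, a hedged sketch of the selection-plus-cloning idea is included for orientation the upper envelope network, the ratio-based selection rule and the 25% keep fraction are assumptions chosen for illustration; the actual selection criteria of bail and of the papers multitask variants should be taken from the respective papers

```python
import numpy as np

def bail_select(states, actions, mc_returns, upper_envelope, keep_frac=0.25):
    """keep the state-action pairs whose monte carlo return is closest to the learned
    upper envelope v(s); a behaviour-cloning policy is then fit on the kept pairs"""
    v = upper_envelope(states)                       # estimated supremum of returns per state
    ratio = mc_returns / np.maximum(v, 1e-6)         # closeness of each pair to the envelope
    k = max(1, int(keep_frac * len(ratio)))
    keep = np.argsort(-ratio)[:k]                    # top fraction of pairs
    return states[keep], actions[keep]
```

in the multitask variants discussed above the same selection would presumably be run per target task on the relabelled union of batches, before the per-task policies are distilled into one network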
### Summary:
the main identified issues were the limited contribution and use cases poor writing missing baseline comparisons and the need for more experiments these issues were not addressed satisfactorily by the rebuttal and hence i believe the paper should be revised by the authors and undergo another review process at another conference i therefore recommend rejection
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
summary this submission proposes a training objective for learning invariant representations under spuriously correlated data settings especially when the nuisance data is missing main review the methodology described in this work is interesting with solid theoretical support and promising empirical performance which is consistent with their claim the originality is limited as similar ideas penalizing some distance between different domain demographics have long been explored in the literature some examples include 1 2 the problem of missing data is reasonable and could relate to the fundamental problem of causal inference the derivation of the mmd estimators is correct the numerical results are somewhat toy but still reasonable ps some references in related topics such as 3 and 4 are missing though 1 yaroslav ganin evgeniya ustinova hana ajakan pascal germain hugo larochelle francois laviolette mario marchand victor lempitsky domainadversarial training of neural networks 2 victor veitch alexander damour steve yadlowsky and jacob eisenstein counterfactual invariance to spurious correlations why and how to pass stress tests 3 zhang dinghuai kartik ahuja yilun xu yisen wang and aaron c courville can subnetwork structure be the key to outofdistribution generalization icml 2021 4 ahuja kartik ethan caballero dinghuai zhang yoshua bengio ioannis mitliagkas and irina rish invariance principle meets information bottleneck for outofdistribution generalization arxiv abs210606607 2021 significance the problem of outofdistribution generalization and spurious correlation is an important one this topic together with doubly robust estimators etc is closely related to the clear community as causality lies at the center of ood generalization the methodology would potentially have impacts outside the clear community clarity the writing is clear but some sentences can be improved so that they are not too complicated scientific writing should have a simple structure docsepsummary this paper proposes missingmmds a new objective for invariant prediction methods that measures nuisance model dependence under missingness authors show the consequences of naively enforcing independence on only the nuisanceobserved subset of data derive the missingmmd and show that it is consistent when one can predict either the missingness variable or the nuisance show that missingmmds improve test accuracy over using observedonly data and perform close to groundtruth estimation with no missingness main review the main contribution is to present missingmmds an optimization objective that extends the scope of recent invariant representation learning methods in practice practitioners are recommended to explore the two simpler proposed estimators in addition to the dr estimator and to select the model based on the validation metric strengths 1 this is an overall wellwritten paper with good novelty and clearly described preliminaries 2 the proposed method is effective as demonstrated by the experimental results 3 the motivation of the proposal of missingmmds is reasonable to improve generalization on a range of test distributions it is necessary to handle missingness appropriately 4 the analysis of extending these invariant prediction methods to handle missing data which is not straightforward is thorough and insightful this difficulty stems from the invariant methods optimization objective which usually includes a measure of dependence concerns 1 too few experiments experimental results on more datasets are needed to make a stronger case 2 tables need descriptive
captions 3minor question000 001 in table 2 is the minimum one from three seeds is negativedocsepthe paper proposes a simple modification to the loss function of veitch et al 2021 when learning models invariant to spurious features to handle missing spurious features in the training data the paper is clearly written and easy to follow this review focuses only on its weaknesses all experiments are run with lambda 1 tradeoff between mmd objective and classprediction objective but in any practical problem practitioners will tune lambda for some missingness mechanisms eg z is revealed with constant uniform probability p missingmmd and regular mmd or mmd on observed data alone may induce the same ordering on hx but have different magnitudes lambda 1 may not be welltuned for one of them and may explain poorer downstream prediction performance prior work makes an identifiability assumption ie z is computable via some fixed function of xy lahoti et al 2020 the paper instead makes an instrumenttype assumption the only effect of z on its missingness delta is through the observable x figure 2a 2b show that identifiability can be violated while still satisfying the instrumenttype assumption however it is not clear whether these assumptions hold in practical datasets for experiments 42 43 the paper should benchmark eventual prediction quality against the distributionally robust baselines eg some of the works following lahoti et al 2020 the variance experiment of sec 44 will be particularly informative to understand the tradeoff between 1 fixing missing z by reweighting with g as in the paper 2 imputing potentially noisy z as done by wang et al 2020 3 using training samples with poor model fit to drive iterative erm procedures as in liu et al 2021
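for concreteness, a minimal sketch of the reweighting idea referenced in point 1 of the tradeoff above is given here the representation distributions of the two nuisance groups are compared with a weighted (biased, v-statistic) mmd in which observed-z samples can carry inverse-propensity weights 1/g(x) the rbf kernel, its bandwidth and the specific weighting are illustrative assumptions; the papers observed-only and doubly robust estimators differ in their details

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def weighted_mmd2(h0, h1, w0=None, w1=None, gamma=1.0):
    """biased weighted mmd^2 between representations of the two nuisance groups,
    with per-sample weights (e.g. inverse propensity of observing z) normalised to sum to one"""
    w0 = np.ones(len(h0)) if w0 is None else np.asarray(w0, float)
    w1 = np.ones(len(h1)) if w1 is None else np.asarray(w1, float)
    w0, w1 = w0 / w0.sum(), w1 / w1.sum()
    return (w0 @ rbf_kernel(h0, h0, gamma) @ w0
            + w1 @ rbf_kernel(h1, h1, gamma) @ w1
            - 2.0 * w0 @ rbf_kernel(h0, h1, gamma) @ w1)
```

the lambda sensitivity raised in the review applies unchanged: such a penalty is typically added to the class-prediction loss with a tradeoff weight that has to be tuned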
### Summary:
following existing work by veitch et al the paper considers the problem of learning from observed data (x1, y1, z1), ..., (xn, yn, zn) a predictor f(x) for a variable y while enforcing at least to a certain degree that f(x) is independent of z given y the papers main contribution is in dealing with a situation where some of the zi are missing by modifying the independence measure here mmd accordingly it seems that the authors have not provided code at submission time i regard the topic of the paper as interesting the reviewers mentioned many important points and the scores resulted in a borderline case with all reviewers leaning towards acceptance personally at several places i find the presentation of the paper not sufficiently clear and i hope that the authors put in some effort to improve on that aspect some points are highlighted below but there are more in the paper also i have asked another expert for their opinion on the paper they mentioned the following two points 1 its not obvious to me whether we should think of missingness as caused by x or a cause of x in the mimic example missingness is basically just a part of x in the latter case an obvious thing to do is just augment z to be tilde z = (z, missingness) and then enforce the independence with respect to tilde z id have liked to see this run as a baseline and 2 the experiments seem a bit weak in particular the results from these mmd regularizations are pretty sensitive to regularization strength and to a lesser degree kernel choice it seems important to vary these to show that the results hold up i recommend that the authors address these two points together with the other points mentioned by the reviewers additional points on presentation the document should be carefully checked for language mistakes and typos including usage of parentheses and punctuation fig 1 i find this figure unclear please clarify all terms and rewrite the caption for most of the presentation the nuisance is assumed binary apart from app g which i found a bit short it should be mentioned in the abstract that the main focus lies on binary nuisance variables in my opinion the term depends on should refer to statistical dependence fig 2 shows graphs in which sense are these generative processes also please comment on the directed cycle in fig 2b measures of dependence eg maximum mean discrepancy mmd please clarify in general mmd is a distance measure between distributions and specify the relation to hsic we work within the framework of veitch et al 2021 because their method applies to a more general version of the model in makar et al 2021 please clarify the presented method can extend to other graphs eg figure 2b please clarify for exposition let be distributed given just x rather than x y please clarify conditioning on delta 1 okay by ignorability please clarify please ensure that the statements are mathematical in that all terms are either defined or related to a quantifier this concerns eg almost all propositions for each x please clarify this is particularly bad when please clarify the notation using mmd is inconsistent imo the notation mmd(x1, x2) is incorrect also the subscript changes regularly x is used for random variables and for real numbers ez 1x please fix the usage of the term estimator is at least confusing see eg app f imo the presentation of the proofs should be improved see eg f1 this includes notation and the usage of assumptions such as delta is independent of z given (x, y) please be careful to distinguish between random variables and random numbers and between probability
distributions and densities the references are not up to date
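for readers less familiar with the independence penalty discussed above, a minimal sketch of an mmd term of this kind is given below, assuming an rbf kernel and a binary nuisance z; the function names, the bandwidth sigma and the weight lam are illustrative and not taken from the paper, whose actual estimator (in particular its handling of missing z) may differ

```python
# minimal sketch (illustrative, not the paper's estimator): squared mmd between the
# representation distributions f(x) for the two nuisance groups z = 0 and z = 1
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # pairwise rbf kernel matrix between rows of a and rows of b
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(f0, f1, sigma=1.0):
    # biased (v-statistic) estimate of mmd^2 between the two samples
    k00 = rbf_kernel(f0, f0, sigma).mean()
    k11 = rbf_kernel(f1, f1, sigma).mean()
    k01 = rbf_kernel(f0, f1, sigma).mean()
    return k00 + k11 - 2.0 * k01

# usage: penalize dependence of the learned representation on z
# loss = task_loss + lam * mmd2(repr_z0, repr_z1, sigma)
```

the regularization strength and kernel choice the second expert asks to vary correspond to lam and sigma in this sketch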
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper study the problem of 6d pose estimation it adopts a coarserefine approach where it first predicts a coarse pose hypothesis and use a refiner based rendercompare to refine the pose it can be applied to objects not seen in the training phase strengths 1 the idea of using classifier to acquire coarse pose is simple and robust and it can handle symmetry implicitly 2 the experiments demonstrate the efficacy of the refiner module weaknesses 1 the efficacy of the coarse pose estimation model is not validated in table 1 the comparison between 8910 and 111213 is unfair for the proposed coarse module it would be stronger if there is a baseline with other pose initialization techniques that can handle novel objects and use the proposed refiner 2 since both the coarse mode and refiner is based on rendering how would rendering quality and the different lighting in the real world affect the performance docsepthis manuscript presented a method to estimate the 6d pose of objects unseen during training but only requires a cad model at testing it is achieved by learning two key components a pose hypothesis generator and classification that initializes some pose guesses and whether they can be handled by the refiner the second component the refiner follows the renderandcompare idea similar to the existing cadmodel based methods with a difference that it can refine a novel object unseen during training a large amount of training data is generated to gain such inductive bias strengths the paper is well written and the figures are presented nicely the motivation is clear and the application of this work is obvious for robotic manipulation the experiments are extensive including multiple public benchmarks the comparison baselines are also up to date the performance is strong considering that it didnt train on the testing objects to generalize to the testing object with a new coordinate frame definition provided in the cad model a novel and smart adaptation mechanism is designed by rendering multiple different vides to disambiguate weakness is the performance equal when applied to different types of objects or is there any failure modes on certain types of objects an illustration on this or discussion would be helpful itd be also interesting to have some detailed result decomposition on the objects to reflect their performance variations the object detection is assumed to be available while this is not the focus of this work id imagine the accuracy of the 2d detection can play a crucial role in addition the current manuscript uses maskrcnn which needs training and inference on the same categories of objects this contradicts with the claim of applying the method to outofclass objects moreover the maskrcnn is trained on the pbr data in this work meaning many of the training instances are exactly the same as the testing data in bop this gives a huge advantage over the comparison methods usually use natural detectors from the beginning itd be more interesting to see a complete pipeline by replacing with a classagnostic object detector or at least a maskrcnn that is trained on some general dataset eg ms coo or comparing with baselines that uses the same object detector to showcase the performance and compare while the current manuscript claims to estimate 6d pose of novel objects a cad model is still needed during testing the ease of accessibility to such a highquality cad model during testing especially with highfidelity texture is questionable in real world scenarios docsepa method is proposed with which 
the pose of novelunseen objects are estimated without requiring cad models at test time in contrast to the majority of methods that train with those models during inference the input is a region of interest from a detector as well as the cad model from which a coarse pose is estimated then refined using two separate networks the coarse pose estimation network is a binary classifier indicating if the synthetic rendering of the object can be successfully refined wrt the observation the input from a set of renderings with highest score is then the initial pose to start the refiner the refiner network regresses the translation and rotation to iteratively bring the model into alignment with the observation experiments on the bop core datasets as well as modelnet show the proposed approach achieves best performance for novel objects and competitive results compared to methods that know the object models apriori ie seen objects strengths the paper addresses a relevant problem in the robotics community and sufficiently motivates the design choices in the context of real limitations ie not always having access to object models before deployment the authors cover the literature well enabling them to position their work in the context of prior work estimating the pose of unseen objects is very challenging currently there exist some solutions but they have limitations that are addressed in the proposed approach therefore this paper offers novelty in terms of problem formulation and methodology the proposed approach achieves stateoftheart performance on bop and modelnet in comparison to methods that also deal with unseen objects additionally robotic grasping experiments are showcased in the supplementary video to demonstrate the realworld application weaknesses while the presented solution is fullylearned i do not see this is a critical argument as to why the method is better than those discusses on lines 3641 it is not a strong argument to say that nonlearned components are unfavourable because they depend on hyperparameters especially when the presented work also has its own set of tunable parameters eg m for the number of poses to render k for the number of iterations and even the range of sampled standard deviations for the refiner dataset a side not to this is that the method seems to be very sensitive to m here large m means potentially better initial poses at the cost of computation time it would be good to see an experiment to evaluate this tradeoff to complement the statements made in sec 44 i question whether the method itself contributes to the high accuracy or if a significant performance gain is attributed to the enormous dataset used to train the networks as table 3b reveals fewer object models lead to a performance drop and under 500 10000 the method no longer outperforms the competitors a better experiment would be to control for the dataset ie train all methods with the new data or an equivalent number of training samples in order to fairly compare the methods the authors themselves make the claim that their method performs well because of the dataset in comparison to others that have a limited dataset to train in lines 227229 while it is ok to declare the detection stage to be out of scope and sufficiently handled by prior work the experiments do not represent a realistic scenario of a fully unseen object pipeline since the detections for bop come from a network trained on the relevant objects hence they are known before a more suitable method in the context of the papers goals would 
be one of those mentioned on line 124 i suspect that the method could be sensitive to the initial detections thus it would be interesting to see an ablation on how the method performs wrt detection inaccuracies if the inaccuracy of zeroshot methods is below the tolerance of the proposed method then a full pipeline is in fact not yet achieved lastly it is not clear where the detections come from for the modelnet experiments and the demonstrations in the supplementary video table 1 is misleading since so few results are provided for the most relevant comparisons rows 47 one can see that ppf zephyr icp is the best on lmo thus it is questionable if the proposed method is the best overall without the missing results i understand the authors do not provide results on other datasets but code for their method is provided thus it seems feasible to generate results for this publication furthermore the authors state it is difficult to directly compare to osop due to the lack of a detector is it possible to use their detections to initialise your approach this would make the comparison much more fair as the masks generated by mask rcnn trained on the known objects is a huge advantage finally the demonstration of robotic grasping in the supplementary video is a good addition but it does not constitute a real experiment that evaluates the proposed methods performance no details are provided about these demonstrations so it is difficult to draw any serious conclusions
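to make the coarse-then-refine pipeline under discussion concrete, a schematic sketch is given below; render, coarse_classifier, refiner and apply_delta are placeholders for the learned or geometric components (not the authors' code), and the number of hypotheses m and refinement iterations k are the hyperparameters whose effect the reviewers ask to ablate

```python
# schematic sketch of the coarse classification + render-and-compare refinement
# described in the reviews; all components passed in are placeholders
def estimate_pose(observation, cad_model, init_poses, render,
                  coarse_classifier, refiner, apply_delta, n_iters=5):
    # coarse stage: score each of the m rendered pose hypotheses against the detected crop
    scores = [coarse_classifier(observation, render(cad_model, p)) for p in init_poses]
    pose = init_poses[max(range(len(init_poses)), key=lambda i: scores[i])]
    # refinement stage: k iterations of render-and-compare pose updates
    for _ in range(n_iters):
        delta = refiner(observation, render(cad_model, pose))  # predicted rotation/translation update
        pose = apply_delta(pose, delta)
    return pose
```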
### Summary:
|
main strengths 1 the problem setting investigated target cad model given during test time only seems relevant to practical robotics applications 2 the proposed method binary classfier to select best coarse pose as initialisation renderandcompare to refine pose appears to be sensible 3 results are comprehensive and overall convincing enough main weaknesses 1 the premise of the settingmethod bounding box and detailed cad model of the target object are available during test time is not beyond questioning 2 comparisons against other methods are not thorough a few useful ablation tests are missing after the rebuttal and discussions all reviewers thought that the concerns have been satisfactorily addressed the paper has thus met the threshold to be accepted
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper presents a new approach to regression via classification problem utilizing a hybrid model between a neural network and a decision tree the paper is very well written and easy to follow it presents results on two very similar regression tasks and claims state of the art performance on both the paper however does not motivate its contributions sufficiently and does not provide enough experimental results to justify their method the authors could significantly improve the paper by spending more time motivating their work for example it is unclear why rvc is the best strategy for the tasks they study and what other tasks one should approach from a rvc standpoint the paper would also be significantly more compelling if the strategy was applied to more varied tasks furthermore the two baseline models used are 11 and 34 years old respectively and i do not believe they represent a thorough review of the potential approaches to this problem significant work could also be done to explore the effect of using different neural network structures for the nrt in this paper only a fairly simple 3 layer architecture is used section 44 is interesting and i believe the paper would be improved if more time was spent exploring the explanability of this new proposed model finally the scan method mentioned in the conclusion could have more emphasis placed on it in the text over all the paper is well written and easy to follow but is limited by its lack of well detailed motivation and insufficient baselines and applied tasks docsepsummary this paper presents a neural network based tree model for the regression via classification problem the paper is easy to follow but it failed to give motivations for the significance of this work i do not understand why regression via classification is any useful and what value it brings to the well studied regression problem with many different function approximators the paper neither explain why regression via classification is any useful nor does it motivates the need for the presented model the presented experiments are also not thorough there are stronger and simpler baselines for regression like random forests gradient boosted trees or kernel ridge regression which are not evaluated and compared i think this work do not pass the acceptance bar at iclr conference comments 1 i was not aware of this age and height estimation tasks ivectors are the standard features for speaker recognition can the authors please elaborate in a line or two why ivectors would be suitable for age and height estimation 2 the regressor function r simply gives out the mean value of the bin the authors could have provided on details on why this choice and how it affects mae 3 each node in the nrt is successively being trained on a lesser amount of data why do all the nodespecific neural networks need the same parameter size then 4 in conclusion the authors say in addition we proposed a scan method and a gradient method to optimize the tree the authors do not very clearly mention these two methods in the text neither are the results demonstrated in that way miscellaneous comments 1 this line seems incomplete in section 1 traditional methods for defining the partition t by prior knowledge such as equally probable intervals equal width intervals kmeans clustering etc 4 5 3 2 the notations used inside the nodes in figure 1 has not been defined in the paper 3 figure 2 and 3 axes dont have labels figure 3 caption says age but it is for heights 4 in section 44 figure 44 should be figure 4 and at 
one point this is visible in 44 should be this is visible in figure 4docsepsummary the paper presents a novel supervisedlearning method for regression using decision trees and neural nets the core idea is based on a 90s technique called regression via classification by first apply discretization of target response y by some clustering and apply any classification to those discretized values as class labels because realvalued y is onedimensional and ordered discretization means setting up any thresholds to give npartitions of training yis the proposed method tries to jointly learn these thresholds as well as node splitters of decision trees using neural nets because each node splitters are given by neural nets here probability outputs for binary classification are also available regarding these probabilities as probabilistic splitting at each node response y weighted by the path probabilities to leaves is the final prediction the learning is in a greedy manner as in standard tree learning because exact joint learning is computationally hard experiments on speaker profiling illustrate the performance improvements against standard nonlinear regression such as svr and regression trees comment this is a technically very interesting contribution but several points can be considered more carefully as below to be honest it would be unconvincing that the approach regression via classification rvc is still valid the proposed approach is an elaborate extension of this approach but if we want prediction performance for regression we would use some ensembles of regression trees such as random forest gbdt extratrees instead of a single cart or even we can directly use deep learning based regression the experiments against cart and svr would be too naive in the current context of supervised learning on the other hand single carts are well interpretable and can be a nice tool to get some interpretations of the given data but the proposed method seems to lose this type of interpretability because of introducing node splitters by neural nets so the merits of the proposed approach would be somewhat unclear in the context of tree learning we need to consider two things first of all node splitting by general binary splitters are called multivariate trees but interestingly this does not always bring the good prediction performance on current quite highdimensional data so i guess that both optimizing threshold for rvc and nonlinear node splitters cannot always bring the prediction performance limitations and conditions would need to be clarified more carefully second of all probabilistic consideration of decision trees such as eq4 is almost like socalled probabilistic decision trees also known as hierarchical mixtures of experts hme in machine learning see famous widelycited papers of jordan jacobs 1994 and bishop svensen 2003 this can bring joint learning of probabilistic node splitter gating networks and decision functions at leaves expert networks and is also known to bring the smoothing effect into discrete and unstable regression trees and hence the improved prediction performance so which of probabilistic consideration or rvc contributes to the observed improvement is unclear the target joint optimization of eq 3 is actually optimized by a number of heuristic ways and it is quite unclear how it is truly optimized in contrast hme learning is formulated as a joint optimization and solved by em in the case of jordan jacobs for example the experiments on single datasets of a very specific speaker profiling 
problem would be somewhat misleading probably for this specific problem there would be other existing methods on the other hand if this is for benchmarking purpose a regression by neural nets and tree ensemble random forest or something can be included as other baselines and also other types of regression problems can be tested
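as a point of reference for the regression-via-classification setup the reviews refer to, a plain (non-neural) rvc baseline can be sketched as below: discretize y into equally probable bins, train an off-the-shelf classifier, and predict a probability-weighted bin mean, which loosely mirrors the path-probability weighting described for the neural tree; the bin count and choice of classifier are illustrative, not taken from the paper

```python
# plain regression-via-classification baseline (illustrative, not the paper's model):
# discretize the target into equally probable bins, classify, predict the bin mean
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_rvc(X, y, n_bins=10):
    edges = np.quantile(y, np.linspace(0.0, 1.0, n_bins + 1))   # equally probable intervals
    labels = np.digitize(y, edges[1:-1])                        # bin index in {0, ..., n_bins - 1}
    bin_means = np.array([y[labels == k].mean() for k in np.unique(labels)])
    clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
    return clf, bin_means

def predict_rvc(clf, bin_means, X):
    proba = clf.predict_proba(X)        # soft assignment over the occurring bins
    return proba @ bin_means            # expectation of the bin means under the classifier
```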
### Summary:
|
while the idea of revisiting regression-via-classification is interesting the reviewers all agree that the paper lacks a proper motivating story for why this perspective is important furthermore the baselines are weak and there is additional relevant work that should be considered and discussed
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper targets richer and higherquality generation with vae two techniques are adopted to achieve the goal 1 bijective model to enrich data generation with flexible prior 2 presenting compressed variants of the input data ie self supervision as additional condition y for reconstruction the two techniques interact through a hierarchical sampling process ysim pyurightarrow zsim pzuy thus benefits vae generation with datadependent prior and condition generation the idea novel and reasonable the paper is clearly presented here are some of my concerns 1 the author specifically argues the transformation x rightarrow y to be nontrainable ie the mapping between x and y is deterministic but will modeling qxzy with discretized logistic distribution affect the generation quality since the likelihood is classicly assumed to be gaussian distributed 2 the hierarchical selfsupervised vae is presented here to show the model can adopt multiscaling information to benefit generation stepbystep however i am afraid in this way the inference would be much difficult since the flowbased bijective operation is hard to train already 3 is the conditional information eg the sketches also need in the test phase or unconditional generational setting is adopted here 4 it seems the experiments are not conducted on highquality datasets to me the presented results can not obviously demonstrate the achievements of the model 5 can you please explain the connection between your selfsupervised vae to the general conditional vae model in 1 1 sohn kihyuk honglak lee and xinchen yan learning structured output representation using deep conditional generative models advances in neural information processing systems 2015docsep summary the paper presents a selfsupervised variational autoencoder called selfvae the work proposes the use of downscaling and edge detection as simpler representations of the input images to be reconstructed the model should then learn to improve the low dimensional approximations to recover the higher dimensional ones in a hierarchical fashion quality clarity the paper is generally quite difficult to follow and the purpose contributions and experiments are not presented clearly enough the figures are not discussed in order and the paper often references figures that are far away there are a number of grammatical errors in the paper outcome the message of the paper generally was quite unclear and it could do with restructuring to assist readersdocsep pros bullet vaes can ignore some dimensions of the latent code enforcing the posterior distributions to consider desired factors of variations in the input can be fulfilled by either making it more structured ie quantization as in vqvae2 or introducing additional constraints this paper tackles this problem by applying the latter two selfsupervised tasks edge maps and downscaled versions of inputs bullet the idea of adding selfsupervised tasks to improve latent representation is very interesting when learning a more structured latent representation image superresolution or sketchtoimage networks are also trained cons bullet hierarchical selfsupervised tasks in section 34 multiple transformations explained however none of the experiments are conducted as a consecutive set of transformations does 3level downscale mean a single downscaling three times or generating from u using four different networks and match each of these levels with z if yes why does not the selfvaesketches model apply in a similar way hierarchically bullet how to trainbalance operations in 
figure 3 there are several modes of operations given how did you balance these modes during training bullet performance of selfsupervised tasks what is the effect of selfsupervised tasks performance on the quality of latent representations considering the literature in image superresolution and sketchtoimage did you use a pretrained auxiliary generator bullet what is re and kl in table are they the summation of both reconstruction rex rey and kl divergence klz klu terms in the loss eq2 the reason why previous methods rekl values were omitted should be stated similarly why were the fid scores on celeba and imagenet64 not given furthermore the stateoftheart fid scores on cifar10 is better than the methods compared in table 1 for instance some examples of fid scores on cifar10 are 189 in moml 1 293 in wpgan 2 293 in spectrally normalized gan 3 264 in adversarial score matching 4 and so on 1 httpsarxivorgpdf180611006pdf 2 httpsarxivorgpdf170608500pdf 3 httpsarxivorgpdf180205957pdf 4 httpsarxivorgpdf200905475pdf as the results on celeba and imagenet64 were not compared with previous literature it is difficult to understand whether the contribution wrt vanilla vae is due to the selfsupervised task or merely the use of an additional stochastic variable u and networks minor issue imagenette64 might cause confusion i did not see this dataset name before i suppose that it is imagenet resized to 64x64 as in pixelcnn paper references for all datasets should be added reasons for score overall i rate towards rejection even though the idea of bijective priors and doing this through selfsupervised tasks is a novel approach my major concern is that it is beyond the stateoftheart in cifar10 not compared to any other method on celeba and imagenet64 hopefully the authors address my concerns above in the rebuttal period docsepthis paper focuses on the task of generating highquality data with generative models to be specific the authors proposed a variant of variational autoencoder vae model named selfsupervised vae the intuition behind this model is that by breaking down the complex generation task into simplersmaller ones complex models can be trained steadily with the guidance from the simplerlevel task to his end a hierarchical generative model with multiplelevel latent variables is proposed in which lowerlevel latent variables are governed by lowerlevel data features the lowerlevel feature is generally obtained by a determined and discrete transformation like down scaling in addition to further the modeling capability a flowbased prior is proposed to fit the data distribution experiments were conducted to evaluate the performance of the proposed generative model strength 1 the idea of guiding the complex image generation with easer tasks is interesting and is maybe the right way to accomplish complex tasks 2 the elob directed in eq2 is intuitive and insightful it also provides me theoretical support for the fact that employing twolevel modeling and downscale transformation to generate a more vivid image is reasonable weakness 1 from a technical perspective the proposed method is just the combination of flowbased vae and auxiliary vae by using 3 auxiliary variables the authors infer one of them by a discrete and determined variational distribution qyx to simplify the training objective where the downscale image y plays an important role in this model my question is why not regard y as observed data and then model the joint distribution pxy 2 there are some mistakes in the derivation of eq 2 in appendix a4 
when computing the entropy of q(w|x) the authors express it as E_{q(w|x)}[log q(w|x)] = E_{q(z|y,x)}[log q(z|y,x)] + ... however the first term in the rhs is completely wrong actually it should be E_{q(z|y,x) q(y|x)}[log q(z|y,x)] it seems that the authors use an equation in many places that is E_{q(z|y,x)}[log q(z|y,x)] = E_{q(z|y,x) q(u|y) q(y|x)}[log q(z|y,x)] but this equation is not true because the term q(z|y,x) inside the expectation is dependent on the variable y besides in the choice of the distributions p(y|u) and p(x|z,y) they are set to be a mixture of discretized logistic distributions for each image x or y are their pixels assumed to be iid if so you miss a product prod_{y_j in y} outside the sum_{i=1}^{I} in the distribution definition 3 bijective prior realnvp is proposed in other works and here simply employing it should not be regarded as a contribution of this paper moreover the authors only compare the effectiveness of different priors ie gaussian, mixture of gaussians and realnvp on the vanilla vae and confirm the superiority of using an adaptive prior however i want to know what is the performance of the selfsupervised vae if only using a standard gaussian prior 4 section 33 is not presented well and the idea behind the sentences is hard to follow what are the differences between these generation and reconstruction methods and what application scenarios correspond to them they are just simply listed without providing any analysis of the logic behind them 5 the experimental results cannot support the superiority of the proposed model in both the quantitative and qualitative comparisons from the generated images i cannot see much difference between the selfvae and the vanilla vae model not to mention the more superior generative models like glow gans etc also for the quantitative comparison the model is only compared with the outdated vanilla vae on celeba and imagenet64 more recent generative models should be included here
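to spell out the point about the entropy term, a sketch of the identity in question is written below, under the assumption (my reading of the model, not a quote from the paper) that the approximate posterior factorizes as q(w|x) = q(z|y,x) q(u|y) q(y|x) with w = (z, u, y)

```latex
% sketch, assuming q(w|x) = q(z|y,x)\, q(u|y)\, q(y|x) with w = (z, u, y):
\mathbb{E}_{q(w|x)}\bigl[\log q(z \mid y, x)\bigr]
  \;=\; \mathbb{E}_{q(y|x)}\,\mathbb{E}_{q(z|y,x)}\bigl[\log q(z \mid y, x)\bigr]
% the outer expectation over q(y|x) cannot be dropped in general, because the
% integrand depends on y; it disappears only when y is a deterministic function
% of x, i.e. when q(y|x) is a point mass.
```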
### Summary:
|
reviewers appreciated the model and the ideas presented and found them very interesting the main reason for rejection is the extent of the empirical work unfortunately and i think what is a bad sign for the iclr community the authors could not do adequate empirical work due to their computational resources not belonging to an organisation with extensive computational resources myself i am in strong symparthy with the authors though i do not see any way this can be satisfactorily accounted for in reviewing several reviewers commented on the datasets the extent of evaluations and the comparisons made with prior work for instance the small cifar10 images are not ideal to demonstrate the technique and comparative results with the other data sets are limited the reviewers had a number of concerns on the theoretical work and these were well discussed by the authors in summary this is promising research but needs more empirical work
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
update the authors have addressed many of my comments below as such i am increasing my score highlevel view im slightly negative on this paper the ideas within it seem novel to me but the experimental examination seems unnecessarily weak limited i think if the authors perform a more rigorous analysis of their system this will result in a strong tier1 publication in its current form i cannot argue for its acceptance at iclr summary this paper introduces an interesting concept of using blended abstract semantics to assist with program synthesis ps as i understand it ps is the field of research interested automatically building software programs using some external guidance often times in the form of a what could be considered as the intention being conveyed by the human to the machine intention here derives from the nomenclature created in the three pillars of machine programming paper gottschlich et al mapl 18 these intentions may be presented many ways for classical ps intention tends to be expressed in the form of input output examples a specific set of rules etc the blended part of the semantics seems to be a fusion of both i neural abstract semantics and ii blended abstract semantics in this paper the authors note that the recent prior work in program synthesis using traditional formal methods as well as nontraditional techniques such as machine learning are still in the early stages of exploration and can be improved upon in particular the authors attempt to achieve this by using a novel form of neurosymbolic representation a type of representation that includes both a neural network reasoning as well as more classical symbolic reasoning with their blended semantics approach based on the current knowledge i possess it seems like this is an interesting and novel way to approach the problem the authors evaluate their system in three experimental domains i tower building think tetris ii list processing think numerical list transformation 1 12 4 1 4 12 sort and iii stringediting think automatic spelling correction or acronymizing things like programming languages to pl and machine learning to ml overall they show their blended semantics approach outperforms other neural systems in some cases and not others 1 for the tower building examples they compare against what i would consider to be a fairly weak baseline of research systems only two of which are not their variants of their own blended semantics and the results favor the blended semantics variants by upwards of 29 just a guess by staring at figure 4 2 for the list processing they compare blended semantics against an rnn and neural semantics it seems blended semantics performs slightly better than both other systems by upwards of around 10 or so 3 for string editing they compare against more wellknown stateoftheart synthesizers like robustfill icml 2017 and repl neurips 2019 however they note that repl outperforms blended semantics by a fairly large margin for text editing in the best case it seems repl is 25 more correct than blended semantics in the worst case it seems repl is 60x better however its a bit hard to tell because the authors have overlaid the legend on top of the repl curve in figure 7 so one can only speculate about certain aspects there id kindly request that in the next version of the paper the graph legend in figure 7 be moved to a place such that its not blocking the readers view of the actual performance we are trying to analyze overall i dont think this is a bad paper but i think it has too many weaknesses in its 
current form for me to give it a positive rating i discuss some of these below highlevel concerns with respect to the empirical evaluation i agree that there are likely many important problems in the space of list processing and string editing things like sort for list processing is ubiquitous as is string editing for automatic spell checking so i have no problems with understanding the validity of these domains and their examination however the tower building experimental domain left me wanting more more in at least two dimensions first i wanted more stateoftheart systems to consider against i dont think a vanilla rnn is really sufficient as a reasonable baseline perhaps im missing something and the authors can explain to me why i might be wrong second i dont understand why i should care about tower building problems i dont feel the problem domain was properly motivated or maybe i just missed it lastly these experiments seem shallow to me compared to other program synthesis work ive seen at neurips and iclr lets take hoppity iclr 2020 which looked something like 300k javascript bugs and was able to synthesize solutions to around 10000 of them out of something like 36k keep in mind these were real bugs in a real world programming language i realize using an embedded dsl for classic ps has different types of limitations but just for the sake of comparison i found the experiments and the motivation of the hole example in the tower a bit underwhelming and yes i realize that armando solarlezamas sketch system also addressed holes but if you read his phd dissertation those holes were with actual code as i understood it not necessarily a single instead of a tetrislike game i also was pretty disappointed by the related work section armando solarlezama has done incredible work in machine programming for the last 15 years however hes not the only one publishing in the space five of the 21 references are of armandos work thats 24 of the citations coming from one person the authors also introduce the field of program synthesis from armandos phd dissertation on sketching please read the three pillars of machine programming who armando also coauthored and note how those others refer to the space of machine programming by citing an original work in the 1950s by ibm on the future of fortran this in my opinion is the paper to cite when you are saying this synthesizing programs from examples is a classic ai problem which has seen advances from the pro gramming languages community solarlezama 2008 a citation to a 2008 paper doesnt demonstrate at least to me anything classic about the problem but a 1950s paper which means weve been working on it for 70 years probably would moreover in the related work section theres a bunch of citations to work in the last 510 years which is great but most dont seem to provide any intuition about how these systems work except those on executionguided neural program synthesis id recommend this section be rewritten with greater care to provide at least a basic intuition about how each of these related works relate lowlevel concerns please define all of your acronyms before you use them i counted at least these rnn grn mdp mse dsl repl this is especially important because there are three disjoint communities being brought together here the programming languages community formal methods community and the machine learning community i think its unlikely except for experts in all three domains to implicitly know that dsl mean domain specific language and simultaneously knowing gru 
means gated recurrent unit docsepthis paper proposes an embedding mechanism for partial programs for search space exploration in exampledriven synthesis it executes a subexpression concretely whenever possible and applies neural module networks on vector representations otherwise the embeddings of partial programs and goal states are used for determining the next step towards expanding an unfilled hole this method is evaluated on three benchmark sets tower construction functional list processing and string editing the idea of using module networks to embed partial program states is nice and these embeddings are blended with concrete state representations this combination is shown to perform better than concrete executionguided synthesis or embedding program syntax through an rnn i am not clear about the scope of this work and whether it will generalize beyond these specific domains and bechmarks the paper presents concrete and abstract denotational semantics the examples and some benchmarks include loops however the semantics do not formalize the iterative semantics in the concrete case or the fixpoint semantics in the abstract case the training proceeds by imitation learning on the syntactic search space thus the proposed method does not seem to address the inductiveness of loop invariants a loop is probably treated as any other function how will this method then generalize to different loop bounds it is possible that the training set contains enough examples for it to memorize patterns necessary for the test set a more convincing evaluation would be to have benchmarks where training is restricted up to loop bounds say k and the test set contains programs with loop bounds m k the functional list processing benchmark uses higherorder functions the abstract interpretation of such programs requires reasoning about relational semantics how are these semantics formalized and encoded by neural networks in what ways can we expect the neural network to generalize the other problem is that of potential nontermination it is easy to synthesis a nonterminating program even with finite loop bounds for instance if the loop counter is decremented in the loop body and never hits the loop termination condition this is a problem that can make the synthesis procedure itself nonterminating if such a loop is synthesized and is executed concretely an abstract interpreter can also get into nontermination unless suitable operations such as widening are employed the paper should talk about this issue and how it can be mitigated the semantics are not fully formalized what is the initial state of a program are the values undefined or are they initialized to some default constants this has a bearing on what can be executed concretely and what cannot be the paper says buildcolumn1 can be executed concretely since all its arguments are concrete but doesnt it implicitly need the position of the cursor where the column should be built i suppose the initial state of the cursor is assumed to be set to 0 which makes this a concrete statement the discussion about comparison to abstract interpretation needs more clarity in particular how the results of abstract interpretation are used should be stated one might use a hoare style reasoning to check whether the invariant implies the postcondition outputs however here it is the opposite direction for one wants to check realizability of the partial program please cite the original paper on abstract interpretation abstract interpretation a unified lattice model for static 
analysis of programs by construction or approximation of fixpoints popl77 there is a typo in sec 3 y 9 should be y 7docsep summary this paper proposes a novel topdown program synthesis for programmingbyexample which combines concrete evaluation with neural embeddings the authors take inspiration from abstract execution which can execute partial programs by abstractly representing sets of possible execution states instead of handdesigning an abstract execution method however they propose a neural equivalent which instead embeds possible states into a feature vector while this approach has weaker guarantees than traditional abstract execution it is much more flexible and can be used as a powerful guiding function for executionbased topdown program search the paper is well written and was a pleasure to read the method appears to be novel and is motivated well and it shows strong results on a variety of program synthesis tasks including tasks that similar previous models cannot handle i do think that the authors should include some experiments with simpler tasks and more baselines to give a better sense of how their method compares to prior work in the settings where that prior work is applicable but overall i think this paper is good and deserves to be accepted detailed comments the paper describes three types of semantics concrete semantics where subexpressions of a functional program are evaluated to their concrete outputs according to the rules of a dsl neural semantics where subexpressions of a program are embedded into feature vectors and these feature vectors are transformed using dslbased neural modules ie a learned neural sum module is used to convert from embeddings of two integers to an embedding of their sum this allows processing inputs that have holes in them blended semantics which does a sort of partial execution any subexpression that can be concretely evaluated is evaluated then everything else is embedded and neural semantics are used to combine these concrete subexpressions with other partiallyspecified components both neural and blended semantics make it possible to execute programs with holes in them obtaining either concrete or neurallyembedded outputs the authors propose using this as input to a policy pi that is trained to generate programs in the training set when conditioned on the corresponding inputoutput examples they then additionally use this policy to train a value function that can be used to guide a search algorithm the experimental results cover three different domains tower construction list processing and string editing for the list processing domain the authors note that previous approaches such as pccoder cannot handle arbitrary lambda functions the proposed blendedsemantics model is able to work even in the presence of much more complex lambda functions and the authors provide strong experimental results on an extended listprocessing task with those complex lambdas however one thing that i think is missing is a comparison against pccoder in the more limited domain in which both methods are applicable the original domain without complex lambdas is there still value in using blended semantics in that domain or is pccoder better in the cases where it can be applied questions during iterative construction of partial programs do you allow the model to choose which hole to fill or does the construction process require holes to be filled in some specific order perhaps lefttoright what does it mean to embed a hole based on context is the dictionary of bindings 
passed to a neural network in some way its interesting that neural semantics can still be used even when there are concrete values have you considered using the concrete semantics as a regularization method for the neural semantics for instance by adding a loss term so that fx ynn approx fnnxnn y nn for endtoend training is the embed function trained jointly with the policy but then fixed when training the value function or is there a different embed function used for the value function its a bit unusual to use logistic loss when training a value function is this essentially a binary crossentropy objective trained with successfull rollouts as positive examples and unsuccessful ones as negative examples using null as the embedding of arguments to lambda expressions is an interesting choice does that mean that theres no way to tell which argument is which for lambdas with multiple arguments i also notice that appendix b2 seems inconsistent with appendix a in this regard stating that lambda arguments are embedded by name it would be interesting to see some example programs that the method is able to produce perhaps as part of the appendix im also curious whether you have any intuition for what the neural semantics mean for a program with holes have you done any analysis of this docsep positives 1 blended neural semantics is an elegant and intuitive construct 2 performance seems to be good for interesting and nontrivial dsls negatives 1 although the highlevel description of the approach is intuitive it is lacking many details the experimental evaluation is distressingly vague 2 comparison to baselines is poor although the main approach is compared to baselines introduced here remove blended semantics use just an rnn it is not compared to most prior approaches on the benchmarks of those prior approaches 1 i like this work and the main ideas are intuitive and well described however i found the paper disturbingly lacking in detail i see some of that detail in the appendix but information such as dataset size is essential in understanding results it cant be in the appendix only 1 also the paper seems to make some offhand value judgments pun intended about prior approaches without substantiation for example it claims that executionguided synthesis isnt compatible with loop constructs but no explanation is given for this similarly the paper says that it cant possibly compete with repl on an io synthesis task on the robustfill dsl but not explanation is given this makes for a jarring reading experience 1 page 1 paragraph 2 your introduction would be much more effective if you explained briefly the semantics for your dsl even though its relegated to the appendix otherwise your example doesnt make sense i dont know for instance what precisely buildcolumn1 means is 1 the height from the bottom or the height of a column addition can i stack columns your text for example seems to imply that buildcolumn1 has no place in a solution to the example problem because the column has height 1 sentence right after the sketch example however the goal state on figure 1 shows stacked columns of height 1 its even more confusing that your appendix description of the dsl implies there are learned functions presumably including buildcolumn which arent shown here all in all this introduction confused me more than it sold me your work 1 page 2 paragraph 1 somehow this text seems to suggest that higherorder functions and control flow are incompatible with approaches that dont know how to evaluate partial programs thats not 
true of course repl could do just fine in such a setting in principle but it would have to build the program bottom up with all of the arguments of a higherorder function before it composed them into an application of the higherorder function what you havent motivated yet is why we need to write unfinished code rather than bags of expressions from which to compose the finished program you havent given any intuition or evidence of that i suspect that the intuition comes from the fact that not all execution models can be interpolated along the lines of string manipulation tasks or shape composition tasks a la repl and therefore you cant just look at the output and expect to get good hints of the building blocks you need to compose greedily sometimes you need to choose a code idiom a loop before you choose what goes into the loop it would help your paper if you made an argument why that makes sense in many cases 1 page 4 whats the embedding function for concrete expressions 1 page 4 whats a precise definition of context in the examples youve shown value assignments to variables but that doesnt allow for having multiple holes in a program do you only allow one hole the examples show multiple holes which suggests there must be some positional encoding involved or different holes would embed to the same vector given the same context 1 page 4 are neural modules separate for every primitive operation even for learned functions how many are there 1 page 6 section 50 why exactly arent these domains covered by executionguided synthesis this is an oft stated limitation of prior approaches but no evidence has been given why wouldnt repl for instance fit the block world 1 page 6 how was test construction exactly combinations of motifs seen during training isnt very precise 1 page 6 the policy isnt a cost function you mean presumably the probability associated with the chosen action something else please explain 1 figure 4 please explain your x axis what are programs considered in total per problem after how much training how big are the datasets 1 page 7 it seems a pity that you dont also compare to deepcoder and flashfill on the same datasets rather than only extending the dsls for your setting it would be valuable to know how the approach works in those simpler dsls 1 section 53 3rd paragraph the string dsl is on the right not on the left in figure 5 1 section 53 we do not expect that our approach would outperform the repl why not explain
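since several of the questions above hinge on how partial programs are executed, a toy sketch of blended evaluation is included here; it is my paraphrase of the idea, with embed_value, embed_hole and the per-operator modules as stand-ins rather than the authors' architecture

```python
# toy sketch of blended evaluation of a partial expression tree: a subtree is run
# concretely when all of its children evaluated to concrete values, otherwise the
# concrete children are lifted into embedding space and the operator's neural
# module is applied; holes are embedded from their context
def blended_eval(node, embed_value, embed_hole, modules):
    if node.is_hole():
        return ("emb", embed_hole(node.context))
    children = [blended_eval(c, embed_value, embed_hole, modules) for c in node.children]
    if all(tag == "val" for tag, _ in children):
        # concrete semantics (also covers leaves such as constants and inputs)
        return ("val", node.op.apply([v for _, v in children]))
    # blended / neural semantics
    child_embs = [v if tag == "emb" else embed_value(v) for tag, v in children]
    return ("emb", modules[node.op.name](child_embs))
```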
### Summary:
|
four expert reviewers after much discussion in which the authors seemed to do a pretty good job addressing a lot of the initial complaints unanimously voted to accept this paper everyone seemed to agree that the idea was interesting and it is indeed interesting there were generally complaints about benchmarking there always are for papers about program synthesis one complaint i have but that i didnt really see mentioned is that the system as described is pretty baroque i have a hard time imaging how youd scale something like this up to more complicated contexts and honestly im not sure even in some of the contexts where it was tested if it would really outperform a wellengineered topdown synthesizer maybe this is just an aesthetic preference that only i have and maybe ideas need to start out overly complicated before the most useful bits can be extracted from them and refined at any rate i do think that this paper gives a cool new research contribution and that people will want to read it so i am recommending acceptance
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
summary this paper proposes to reformulate the qa task in squad as a retrieval task ie using question as query and paragraphs as candidate results to be ranked authors makes some modifications to elmo model to create better word embedding for the ranking task authors have mentioned and are aware of open domain qa methodologies eg drqa pros the general idea is interesting to reformulate any qa task as a ranking task cons the methodology and task are not clear authors have reformulated qa in squad as as ranking and never compared the results of the proposed model with other qa systems if authors want to solve a pure ranking problem why they do not compare their methods with other ranking methodsdatasets the novelty the novelty is not significant although modifications to elmo are interesting results why authors have not compared their work with drqa docsepthis paper tries to study retrieval methods for multiparagraph multidocument reading comprehension the basic approach is to embed the question and the paragraph and train a system to put the correct paragraph close to the question i had a very hard time following the details of the proposed approach for this however and i still dont really understand what the authors are proposing this paper is not ready for publication the exposition is not at all clear and needs substantial rewriting additionally the evaluation done in the paper is not welljustified i do not know what paragraphside means but i assume that means you are trying to retrieve the question given the paragraph why there were no standard baselines compared against like a simple ir system lucene and i expected to see actual impact of the retrieved results on downstream qa performance of a system like chen et als or clark and gardner 2018 even if you have a slightly better ranking of the retrieved paragraphs its not clear to me that this will improve performance if the downstream method is properly calibrated to handle multiple paragraphs see clark and gardner 2018 a few writing suggestions for the authors for next time this paper does not follow the typical flow of an academic paper it reads too much like a logbook of what you did presented chronologically instead of presenting the ideas in a coherent sequence part of this is just simple wording fixes eg avoid things like it was time to compute elmo representations this isnt a logbook also all of the shape comments and numerical details at the top of page 4 are out of place describe your method first in general terms then give experimental details like corpus size etc later i suggest reading the awardwinning papers at various conferences to get a sense of how these papers are typically structured and phrased section 2 a full page dedicated to the history of word embeddings is entirely unnecessary for this paper this is not a survey on word embeddings its much more useful to the reader to give pointers to multiple connection points between your work and the rest of the literature you could have given a paragraph to the most relevant embedding techniques a paragraph to the most relevant retrieval multiparagraph techniques eg clark and gardner 2018 which is very relevant along with chen et al triviaqa others and a paragraph to distance metric learning docsepthis paper proposed a retrieval model based on the residual network and evaluated the use of elmo word embedding withwithout idf weight the results showed that there are significant gain when adding the residual network on top of the word embedding pros this work set a strong 
baseline for the retrieving target paragraph for question answering on the squad dataset the experiments were sounds and leverage interesting points the use of word embedding itself as the feature representation didnt have as much impact to retrieval performance as the distance function the studied problem retrieval for answering question rather than getting the most relevant document worth more attention cons the motivation of using the word embedding and contextual word embedding over the tfidf feature wasnt clear results on using simple feature like tfidf maybe useful to give readers better judgement of the use of word embedding the choice of dataset squad over more retrieval based qa like trivialqa also wasnt strongly motivated also it would be nice to see how the qa result would be improve with better retrieval model another use of tfidfidf and embedding is to use tfidfidf to identify the related document and then use word embedding to resolve semantic ambiguity do you have theoreticalempirical reason why this shouldnt be considered comment on writing in section 31 the dimension of the tensor should reflect the meaning vocab size embedding size or the number of documents rather than numbers in section 31 since the weighting for each document is not shared it would be clearer to just use m and w for each document instead of m w in section 31 evaluation metrics eg recallk roc auc technical details for example tensor dimension optimizer hyperparameters should be moved to the experiment section
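a concrete version of the simple ir baseline the reviews ask for could look like the sketch below, ranking paragraphs for each question by tf-idf cosine similarity and reporting recall@k; an idf-weighted average of word embeddings can be dropped in for the vectorizer in the same way; all names here are illustrative

```python
# simple retrieval baseline (illustrative): tf-idf cosine ranking with recall@k
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recall_at_k(questions, paragraphs, gold_idx, k=5):
    vec = TfidfVectorizer().fit(paragraphs)
    sims = cosine_similarity(vec.transform(questions), vec.transform(paragraphs))
    topk = np.argsort(-sims, axis=1)[:, :k]
    hits = [gold_idx[i] in topk[i] for i in range(len(questions))]
    return float(np.mean(hits))

# usage: recall_at_k(dev_questions, all_paragraphs, dev_gold_paragraph_ids, k=5)
```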
### Summary:
|
i have to agree with the reviewers here and unfortunately recommend a rejection the methodology and task are not clear authors have reformulated qa in squad as a ranking task and never compared the results of the proposed model with other qa systems if authors want to solve a pure ranking problem why do they not compare their methods with other ranking methods/datasets
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
summary the paper presents a method for learning network architectures for regression tasks the focus is on learning interpretable representations of networks by enforcing a concise structure made from simple functions and logical operators the method is evaluated on a very large number of regression tasks 99 problems and is found to yield very competitive performance quality the quality of the paper is high the method is described in detail and differences to previous work are clearly stated competing methods have been evaluated in a fair way with reasonable hyperparameter tuning it is very good to see a focus on interpretability the proposed method is computationally heavy as can be seen from figure 7 in the appendix but i see the interpretability as the main benefit of the method since many applications for which interpretability is key can bear the additional computational cost i would not consider this a major drawback however it would be fair to mention this point in the main paper clarity the paper reads well and is nicely structured the figures and illustrations are easy to read and understand originality the paper builds on a large corpus of previous research but the novelties are clearly outlined in section 3 however the presented method is very far from my own field of research so i find it difficult to judge exactly how novel it is significance the proposed method should be interesting to a wide crossdisciplinary audience and the paper is clearly solid work the focus on interpretability fits well with the current trends in machine learning however the method is far from my area of expertise so i find it difficult to judge the significance docsepthis paper introduces a genetic algorithm that maintains an archive of representations that are iteratively evolved and selected by comparing validation error each representation is constructed as a syntax tree consists of elements that are common in neural network architectures the experimental results showed that their algorithm is competitive to the stateoftheart while achieving much smaller model size comments 1 i think this paper lacks technical novelty im going to focus on experimental result in the following two questions 2 feat is a typical genetic algorithm that converges slowly in the appendix one can verify that feat converges at least 10x slower than xgboost can feat achieve lower error than xgboost when they use the same amount of time can the authors provide a convergence plot of their algorithm ie real time vs test error 3 from figure 3 it seems that the proposed algorithm is competitive to xgboost and the model size is much smaller than xgboost have the authors tried to postprocessing the model generated by xgboost hows the performance comparedocsepthe paper proposes a method for learning regression models through evolutionary algorithms that promise to be more interpretable than other models while achieving similar or higher performance the authors evaluate their approach on 99 datasets from openml demonstrating very promising performance the authors take a very interesting approach to modeling regression problems by constructing complex algebraic expressions from simple building blocks with genetic programming in particular they aim to keep the constructed expression as small as possible to be able to interpret it easier the evaluation is thorough and convincing demonstrating very good results the presented results show that the new method beats the performance of existing methods however as only very limited 
hyperparameter tuning for the other methods was performed it is unclear to what extent this will hold true in general as the main focus of the paper is on the increased interpretability of the learned models this is only a minor flaw though the interpretability of the final models is measured in terms of their size while this is a reasonable proxy that is easy to measure the question remains to what extent the models are really interpretable by humans this is definitely something that should be explored in future work as a smallsize model does not necessarily imply that humans can understand it easily especially as the generated algebraic expressions can be complex even for small trees the description of the proposed method could be improved in particular it was unclear to this reviewer why the features needed to be differentiable and what the benefit of this was ie why was this the most appropriate way of adjusting weights in summary the paper should be accepted
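The two reviews above both lean on the idea of a regression model expressed as a small syntax tree of simple functions, with tree size as the proxy for interpretability and validation error as the selection criterion. The following is a minimal illustrative sketch of that kind of representation, not the paper's implementation; the node structure, the relu/add/multiply primitives, and the toy data are assumptions made here only for concreteness.

```python
import numpy as np

class Node:
    """A tiny expression-tree node: a function applied to child outputs."""
    def __init__(self, fn, children=(), name=""):
        self.fn, self.children, self.name = fn, children, name

    def __call__(self, X):
        if not self.children:
            return self.fn(X)                      # leaf: reads a feature column
        return self.fn(*[child(X) for child in self.children])

    def size(self):
        return 1 + sum(child.size() for child in self.children)

# candidate expression: relu(x0) + x1 * x2
x0 = Node(lambda X: X[:, 0], name="x0")
x1 = Node(lambda X: X[:, 1], name="x1")
x2 = Node(lambda X: X[:, 2], name="x2")
expr = Node(np.add,
            (Node(lambda a: np.maximum(a, 0.0), (x0,), "relu"),
             Node(np.multiply, (x1, x2), "*")),
            "+")

rng = np.random.default_rng(0)
X_val = rng.normal(size=(100, 3))
y_val = np.maximum(X_val[:, 0], 0.0) + X_val[:, 1] * X_val[:, 2]
val_mse = float(np.mean((expr(X_val) - y_val) ** 2))
print(val_mse, expr.size())   # candidates are compared on validation error and tree size
```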
### Summary:
|
the reviewers all feel that the paper should be accepted to the conference the main strengths that they noted were the quality of writing the wide applicability of the proposed method and the strength of the empirical evaluation its nice to see experiments across a large number of problems 100 with corresponding code where baselines were hyperparameter tuned as well this helps to give some assurance that the method will generalize to new problems and datasets some weaknesses noted by the reviewers were computational cost the method is significantly slower than the baselines and they werent entirely convinced that having more concise representations would directly lead to the claimed interpretability of the approach nevertheless they found it would make for a solid contribution to the conference
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposes a privacy framework where a privatizer according to the utility and secret specified by users provides a sanitized version of the user data which lies in the same space as the original data such that a utility provider can run the exact algorithm it uses for unsanitized data on the sanitized data to provide utility without sacrificing user privacy the paper shows an information theoretic bound on the privacy loss and derives a loss function for the privatizer to use it then proposes an algorithm for the privatizer evaluates its performance on three scenarios the paper investigated on an interesting problem and proposed a nice solution for synthetic data generation however i think the proposed framework and how the example scenarios fit into the framework needs to be described clearer and more experimental evaluations would also help make the result more solid more detailed comments do the user and privatizer need to know what the machine learning task is when doing the sanitization is it ok for the privatizer to define utility in a different way as the machine learning task for example as a user i may want to hide my emotion but im ok with publishing my gender and age in this case can i use a privatizer which defines secret as gender and utility as gender age and will the synthetic data generated by such a privatizer be equally useful for a gender classifier or an age classifier it would be good if it is as we dont need to generate taskspecific synthetic data then i think it might be interesting to see the effect of the privatizer when utility and secrecy are correlated with a potentially different level of correlation its not clear to me where the privatizer comes into the picture in the subjectwithinsubject example it seems like users here are people whose face appear in front of the mobile device so they probably wont be able to privatize their face image yet the device wont be able to tell if users are in the consenting group without looking at their faces i think its better if more clarification on how each of the three scenarios fits into the proposed framework is provided can different user have different secret in the experiment it might be better to try different modelsalgorithms for the utility and secrecy inferring algorithm to demonstrate how the privatizer protects secrecy under different scenarios i think there might be some related work on the field of fairness and transparency where we sometimes want the machine learning models to learn without looking at some sensitive features it would be nice to add more related work on that side its better to give more intuition and explanation than formulas in section 3 there are a few typos eg page2 3rd paragraph last sentence out our equation 4 is q should be is q page 8 2nd paragraph 1st line figures in 5 figure 5 that need to be addressed texts in some figures like figure 2 and 3 might be enlarged docsepthis paper studies the problem of representing data records with potentially sensitive information about individuals in a privacypreserving fashion such that they can be later used for training learning models informally it is expected from the transformed output of data record one should be able to learn about a desired hidden variable but should not be able to learn anything about a sensitive hidden variable to that end the paper proposes a kl divergence based privacy notion and an algorithmic approach to learn a representation while balancing the utility privacy tradeoff i am excited about the choice of the 
problem but i have reservations about the treatment of privacy in the paper first kl divergence is a very weak average case notion privacy that can be easily broken second the algorithm that is outlined in the paper gives an empirical way to compute the representation while balancing the utilityprivacy tradeoff eq 6 however there is no formal privacy guarantee for the algorithm it is important to remember that unlike the utility privacy is a worstcase notion and should formally hold in all occasionsdocsepsecond update id like to thank the authors for their detailed response the authors have made changes that i believe improve the overall quality of the submission i now lean towards accepting the paper and have increased my rating from a 5 to a 6 most notably i they clarified that their secretdetection model was retrained on sanitized data in their experiments ii they added details about their experimental setup and the algorithms used for their experimental evaluation and iii they added experiments to the appendix of the submission that evaluated their framework on synthetic data i do however still have some concerns about how well the privacy guarantees of the proposed algorithm would hold up in practice against a motivated adversary since formal privacy guarantees appear to be relatively weak right now as a minor comment there may be a typo in equation 20 of section 72 the case u s 1 0 is handled twice whereas the case u s 0 0 is never handled at all first update i find the authors problem statement appealing but share concerns with reviewer 1 about the privacy guarantees offered by the proposed method and with reviewer 3 about need to clarify the experimental evaluation no author response was provided ive left my score for the paper unchanged note this update was posted a few days before the end of the rebuttal period the submission was subsequently updated summary the authors consider a problem related to deidentification where the goal is to perturb a dataset x in a way that makes it possible to infer some useful information u about each example in the dataset while obscuring some sensitive information s for example the authors consider the problem of perturbing pictures of peoples faces to obfuscate the subjects emotions while making it possible to infer their genders the concrete approach explored in the papers experimental evaluation ensures that an existing model trained model on the original dataset will continue to work when applied to the perturbed data on the theory side the authors derive informationtheoretic lower bounds on the extent to which one can disclose useful information about a dataset without leaking sensitive information and propose concrete minimization problems that can perturb the data to trade off between the two objectives on the practical side the authors evaluate the minimization setup on three different problems key comments im of two minds about this paper on the whole i found the problem statement compelling however i had serious reservations about the implementation first i had trouble understanding the experimental setup based on the limited information provided in section 5 and the results seem difficult to reproduce from the information in the paper second and more seriously the security guarantees provided in practice seem very weak at the very least the authors should check whether their perturbations are robust against an adversary who retrains their model from scratch on perturbed data this experiment would significantly strengthen the submission 
but would still leave open the possibility that a clever adversary could extract more sensitive information than expected from the perturbed data details pro 1 the idea of perturbing an input in order to optimize bounds on how much useful versus secret information is disclosed by the output seems intuitively appealing in that context the theory from sections 2 and 3 seems wellmotivated section 32 defining a trainable loss metric is especially wellmotivated it provides a concrete objective function which when minimized can obfuscate data in a way that trades off between utility and secrecy pro 2 the idea of perturbing a dataset in a way that allows existing useful algorithms to continue working without modifications seems like an interesting and novel contribution i found the following excerpt from the introduction especially compelling it is important to design collaborative systems where each user shares a sanitized version of their data with the service provider in such a way that userdefined nonsensitive tasks can be performed but userdefined sensitive ones cannot without the service provider requiring to change any data processing pipeline otherwise pro 3 the paper combines theoretical results with empirical case studies on three different problems based on visual inspection the outputs of the perturbation heuristics shown in section 5 figure 3 and figure 4 seem reasonable con 1 few details are provided about the experimental setup used in section 5 and it was difficult for me to understand how the theoretical results in section 4 were actually being applied theres typically a lot of work that goes into turning a theoretical objective function eg equation 10 in section 42 into a practical experimental setup this could be a major contribution of the paper but right now i feel like there arent enough details about the implementation for me to reproduce the experiments con 2 i had trouble understanding the motivation for the subject within subject case study in section 51 the authors describe the problem as follows imagine a subset of users wish to unlock their phone using facial identification while others opt instead to verify their right to access the phone using other methods in this setting we would wish the face identification service to work only on the consenting subset of users but to respect the privacy of the remaining users the proposed solution figure 3 applies minor perturbations to the pictures of consenting subjects while editing the photos of the nonconsenting users to leave only their silhouettes a simple baseline would be to remove the photos of the nonconsenting users from the dataset entirely the case study would greatly benefit from a discussion of why the baseline is insufficient its also perfectly reasonable to say that the section is meant as a way to check whether the objective function from section 4 can lead to reasonable behavior in practice but if so the intent should be clarified con 3 as far as i can tell the practical experiments in section 5 assume that the party who perturbs the dataset knows exactly what algorithm an attacker will use to infer secret information they also seem to assume that the attacker cannot switch to a different algorithm or even retrain an existing machinelearned model to try and counter the perturbation heuristics from the beginning of section 5 initially we assume that the secret algorithm is not specifically tailored to attack the proposed privatization but instead is a robust commonly used algorithm trained on raw data to infer 
the secret unless i missed something it seems like this assumption is used throughout the experimental evaluation to the authors credit the submission states this assumption explicitly in section 5 from a security perspective however this seems like a dangerous assumption to rely on as it leaves sanitized data vulnerable to attacks for example an attacker with knowledge of the perturbation algorithm can retrain the model they use to extract sensitive information using perturbed images in place of the original images in their training dataset my main practical concern is that the security guarantees provided by the submission seem fragile it may be much easier to build a perturbation algorithm that is resistant to a single known attack than to remove the sensitive information from the dataset entirely right now the empirical results in the submission seem to focus on the former
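The reviews above refer to a privatizer trained with a loss that trades utility against secrecy (eq. 6 / eq. 10 in the submission) without reproducing it. The sketch below is a generic surrogate of that kind of objective, not the submission's actual loss: the privatizer keeps a utility classifier accurate on sanitized data while pushing a secret classifier's posterior toward uniform via a KL term. The network shapes, the lambda weight, and the KL-to-uniform surrogate are all illustrative assumptions.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def privatizer_loss(x, u_labels, privatizer, utility_net, secret_net, lam=1.0):
    """Utility loss on sanitized data plus a penalty pushing the secret
    classifier's posterior toward uniform; lam trades utility for secrecy."""
    x_sanitized = privatizer(x)                              # same space as the raw data
    utility_ce = F.cross_entropy(utility_net(x_sanitized), u_labels)
    secret_logp = F.log_softmax(secret_net(x_sanitized), dim=-1)
    num_secret = secret_logp.size(-1)
    # KL(q(s | x') || uniform) = E[ sum_s q log q ] + log |S|
    secret_kl = (secret_logp.exp() * secret_logp).sum(dim=-1).mean() + math.log(num_secret)
    return utility_ce + lam * secret_kl

# toy usage with linear models: 32-dim inputs, 5 utility classes, 2 secret classes
priv, u_net, s_net = nn.Linear(32, 32), nn.Linear(32, 5), nn.Linear(32, 2)
x = torch.randn(16, 32)
u = torch.randint(0, 5, (16,))
loss = privatizer_loss(x, u, priv, u_net, s_net)
loss.backward()   # in practice only the privatizer's parameters would be updated from this loss
```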
### Summary:
|
this paper addresses data sanitization using a kldivergencebased notion of privacy while an interesting goal the use of averagecase as opposed to worstcase privacy misses the point of privacy guarantees which must protect all individuals otherwise individuals with truly anomalous private values may be the only ones who opt for the highest levels of privacy yet this situation will itself leak some information about their private values
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper introduces a metalearning approach in hamiltonian neural networks to find the structure of the hamiltonian that can be adapted quickly to a new instance of a physical system the contribution is novel and the paper is well written the presentation is mostly clear however some improvements are needed strength clean methodology good performance on the task of fewshot learning on a new system nice visualization of the vector fields weaknesses no realworld problem the metalearner in particular hanil is clearly not adjusting the parameters of the hamiltonian as we would expect it like the physical parameters which would be more at the beginning of the network it only adjusts the readout layer in this way it is hard to understand what is really happening you need a dense sampling of the metaworld really you have to look at 10000 tasks a latent variable estimation model would have been a good baseline or alternative to the metalearning framework for instance a vae details below eq 3 consider calling beta not a learning rate because it is more a stepsize since nothing is really learned in the inner loop related work machine learning methods to perform symbolic regression directly such as learning equations for extrapolation and control sahoo et al icml 2018 might be good to add fig 3 font size is far too small lines are too thin fig 3 consider using a logscale for the mse plots post rebuttal update i read the response and commented on it the authors clarified my questions and updated the paper accordingly so i think my score of 7 is supporteddocsepthe paper presents a metalearning method for learning hamiltonian dynamic systems from data more specifically the novelty is incorporating hamiltonian neural networks hnns within known metalearning methods maml and anil in order to model new dynamical systems with previously known structures but unknown parameters from partially observed data the results from the experimental evaluation on three wellknown systems show that such an approach and in particular hnns w anil hanil leads to more accurate models of unseen dynamics compared to other benchmarks methods such as vanilla hnns and hnns w maml hamaml overall the paper is well written and the contributions are solid leading to reasonable improvements over recent work on modeling hamiltonian systems while offering better understanding of the underlying modeling problem however i have some concerns regarding the experimental setup as well as the discussion of related work and the motivation thereof which are reflected in my score first it is unclear to me how the metatraining is performed the authors state that 10k tasks are being sampled into the metatraining sets are these 10k sampled for each of the 3 systems leading to 30k in total or 10k in total what is the tasksystem distribution in this case is the metaleaning performed on tasks sampled from all 3 dynamic systems ie one metaoptimization for all 3 systems or this is done per system as fig1 suggests the choice of 10k seems a bit arbitrary how does this choice affects the overall performance an ablation study on the size of the metasets will further highlight the strengths of the proposed method second the authors chose two benchmarks which address modeling hamiltonian systems hnns by design ie vanilla hnns trained from scratch for a given task and pretrained hnns using the metasets recent work for instance symplectic recurrent neural networks chen et al 2020 hamiltonian graph nns sanchezgonzalez et al 2019 symplectic odenet zhong et al 2020 
sympnets jin et al 2020 has shown improvements over the vanilla hnns both in terms of accuracy and stability although these methods are very briefly mentioned in the paper they are never indepth discussed nor considered it would be helpful if at least such a discussion is given as well as maybe discussing possible extensionseg srnn with anil next the authors should consider extending the related work section in few directions first since one of the major contributions is related to metalearning the authors should provide a relatedwork segment discussing related metalearning approaches which would further justify their design choices eg why wasnt reptile considered next in sec 41 the authors state that the most related are hnn but omit any further discussion regarding eg srnns or hgnns why srnns are not upthere with hnns imo such a discussion will further motivate the contributions of the paper i respectfully disagree with the last statement in sec 42 that in contrast to our work the existing methods of identifying the governing representations used the symbolic representation underlying the assumptions that the unknown physical laws are expressed by combinations of mathematically known expressions namely the methods in this study also implicitly build upon the very same physical laws which are combinations of known expressions the metatasks sampling is performed over systems with known structures therefore the very same assumptions still hold other comments figure 2 while taking a whole page is a bit unreadable and confusing to follow more elaborative discussion will help with conveying the message better can you clarify the difference in the results presented in table 1 and figure 3 more specifically in figure 3a the coordinates mse shows that hamaml is diverging while pretrained hnns seem more stable in table 1 on the other hand prehnn are worse than hamaml are these the same 10 new system but at different gradient steps or in sec 6 can you clarify this statement that is because the standard supervised learning scheme is not efficient for the model to learn appropriate shared representation across the systems what is a standard supervised scheme and why is not efficient the very next sentence in sec 6 states that naive nns also fail because are hard to grasp the continuous and conservative structure of the vector fields can you also clarify this post rebuttal update the authors addressed and clarified many of my concerns therefore i updated my score docsepthe authors combine hamiltonian neural networks hnn with the maml and anil neural learning approaches for metalearning the results are mostly consistent with what would be expected when combining them the paper is easy to follow and nice to read and seems to have sufficient implementation details for reproducibility my main criticism is that there is not really novel technical content the model is a trivial combination of mamlnil with hnns this could be ok in the following scenarios if results were given on wellknown important domains achieving state of the art currently all domains are very toy if the paper gave a hint of what realworld problem settings could benefit from this specific combination which does not seem to be part of the discussion if the paper did a very thorough empirical investigation the results are nice i really like figure 2 but otherwise none of the insights are really new but just the union of the conclusions of the mamlnil papers and the hnn paper but those are not really well covered probably the biggest value 
of the paper is that it brings together the hnn community often closer to natural sciences communities to the metalearning community often closer to datascience communities and this paper may be a gateway from people from the former one to learn more about metalearning but i am not if this is sufficient to grant acceptance i would not oppose an acceptance decision if the ac or other reviewers decide to accept it on these grounds in its current form additional comments i am a bit surprised hanil is so much better than hamaml considering naive anil is not so much better than naive maml comparatively specially in springmass and and pendulum point dynamics was the number of gradient steps during evaluation finetuned for each baseline separately could this be the reason why maml is comparatively worse this is hinted in while hamaml is slower than hanil to adapt the true vector fields because of the larger number of parameters to update in adaptation process but it would be nicer to see more supporting evidence or additional investigations all of the results in table 1 corresponds to the evaluation of the loss of the derivatives and not the error over an integrated trajectory right i find it a bit unintuitive that the naive network is on pair with the model on kepler but not on the other two domains i would expect most of the advantage of the hamiltonian model would come from the stability when integrating over longer trajectories but not so noticeable on the predictions of gradients at single points in fact in table 1 from the original hnns paper they precisely show that the differences between the baseline and the hnn are mostly wrt preservation of energy but that the losses on the derivatives are nearly identical it would be nice to see some experiments where the functional form of the hamiltonian itself rather than just hamiltonian parameters change across time the title introduction and related work somehow makes it look like different physical laws will be inferred during the meta learning stage however in practice the metalearning task covers the same physical laws than those seen during training with different parameters otherwise the title is highly misleading since it does not identify physical laws but to identify specific instances of the same physical law how were the trajectories from figure 3 chosen fully at random minor commentstypos have not affected my decision table 1 possibly bold font for best model or even better some bar plots would make it easier to visualize the data weird grammar with sufficient data to discover hamiltonian missing space partial differentiationsrudy are the numbers in the last row of the results table truly exactly the same for point dynamics and trajectories 033019 033018 033019 033018 or is this an editing error when observing 25shot point dynamics and 5shot trajectories the number of given samples in the phase space is the same and the same is true for observing 50shot point dynamics and 10shot trajectories would be good to refer to l5 again at this point fig 2 bc i would recommend to add some sort of labels indicating what each of the 4 rows on the left of the plot one model seems to be missing from figure 3a figure 3a caption mentions after 50 gradient steps but other places say 10 gradient steps not sure if this was changed for fig 3 or this is a typo
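Since the reviews describe the method as essentially HNN plus MAML/ANIL without spelling out either piece, here is a minimal sketch of the two ingredients: a Hamiltonian network whose vector field is obtained from a learned scalar H(q, p) via Hamilton's equations, and an ANIL-style inner loop that adapts only the readout head on a few observations of a new system. This is illustrative code, not the authors' implementation; layer sizes, step counts, and the spring-mass toy targets are assumptions.

```python
import torch
import torch.nn as nn

class HNN(nn.Module):
    def __init__(self, dim=1, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(2 * dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, hidden), nn.Tanh())
        self.head = nn.Linear(hidden, 1)                 # scalar Hamiltonian H(q, p)

    def forward(self, q, p):
        H = self.head(self.body(torch.cat([q, p], dim=-1)))
        dH_dq, dH_dp = torch.autograd.grad(H.sum(), (q, p), create_graph=True)
        return dH_dp, -dH_dq                             # Hamilton's equations: dq/dt, dp/dt

def adapt_head(model, q, p, dq_true, dp_true, steps=10, lr=1e-2):
    # ANIL-style adaptation on a new system: only the head is updated.
    opt = torch.optim.SGD(model.head.parameters(), lr=lr)
    for _ in range(steps):
        dq_pred, dp_pred = model(q, p)
        loss = ((dq_pred - dq_true) ** 2 + (dp_pred - dp_true) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# toy usage: a few (q, p) observations from a unit spring-mass system (dq/dt = p, dp/dt = -q)
q = torch.randn(25, 1, requires_grad=True)
p = torch.randn(25, 1, requires_grad=True)
adapt_head(HNN(), q, p, dq_true=p.detach(), dp_true=-q.detach())
```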
### Summary:
|
this paper proposes a metalearning approach for inferring the hamiltonian governing the dynamics of physical systems from observational data and using it to adapt to new systems from the same class of dynamics quickly the paper does this by effectively combining the previously published hamiltonian neural networks and mamlanil the reviewers agree that the paper is well written and the experiments are comprehensive however they also have reservations about the technical novelty of the proposed solution given that it appears to be combination of preexisting models saying this the authors were able to address a lot of the reviewers concerns during the discussion period hence i recommend this paper for acceptance
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper presents new training schemes and experiments for a matrixmultiplicative variant of cbow this variant is called a cmsm yessenalina and cardie 2011 asaadi and rudolph 2017 which swaps the bag of vectors to a product of square matrices for encoding context to incorporate word ordering it seems this model has not been trained successfully before at least with a simple approach due to the vanishing gradient problem the papers main contributions are an initialization scheme for context matrices to i n001 to counter the vanishing gradient problem and a modification of the cbow objective so that the target word is drawn uniformly at random from the context window rather than the center word both are shown to improve the quality of learned representations when evaluated as sentence embeddings concatenating cbow and cmsm architectures is additionally helpful i was not aware of the matrixmultiplicative variant of cbow previously so its possible that i dont have the expertise to judge the novelty of the approach but the idea is certainly sensible and the proposed strategies seem to work the main downside is that for all this work the improvements seem a little weak the averaged fasttext embeddings are clearly superior across the board though as the authors say its probably unfair to compare based on different training settings but this doesnt hurt the simplicity and effectiveness of the proposed method when compared against cbow baselines docsep the authors propose cmow an extension of the cbow model that allows the model to capture word order instead of each word being represented as a vector words are represented by matrices they extend the cbow objective to take into account word order by replacing the averaging of vectors to create the context with matrix multiplication a noncommutative operation this is the first time this model has been applied in a large scale unsupervised setting they are able to do this using their objective and an initialization strategy where the matrix embeddings are set to the identity matrix with some gaussian noise added the results of this paper are its main weakness i did enjoy reading the paper and it is nice to see some results using matrices as embeddings and matrix multiplication as a compositional function they include a nice analysis of how word order is captured by these cmow embeddings while cbow embeddings capture the word content but it doesnt seem to make much of a difference on the downstream tasks where cbow is better than cmow and close to the performance of the hybrid combination of cbow and cmow i think its clear that their model is able to capture word information to some extent but other models rnns etc can do this as well that admittedly are more expensive but also have better performance on downstream tasks i think a stronger motivation for their method besides an analysis of some phenomena it captures and a slight improvement on some downstream tasks when combined with cbow is needed though for acceptance could it be used in other settings besides these downstream transfer tasks pros introduced an efficient and stable approach for training cmsm models show that their model cmow is able to capture word order information show that cmow compliments cbow and a hybrid model leads to improved results on downstream tasks cons the results on the hybrid model are only slightly better than cbow cmow alone is mostly worse than cbowdocsepthe main contribution of this paper in practice seems to be a way to initialize the continuous matrix space 
model so that training actually converges followed by a slightly different contrastive loss function used to train these models the paper explores the pure matrix model and a mixed matrix vector model showing that both together improve on simpler methods on many benchmark tasks my main concern is that the chained matrix multiplication involved in this method is not substantially simpler than an rnn or lstm sentence encoding model and there are no comparisons of training and inference cost between the models proposed in this paper and conceptually simpler rnns and lstms the fastsent paper used here as a baseline does compare against some deep models but they choose far more complex baselines such as the nmt encoding which is trained on a very different loss function indeed the models proposed here do not seem to outperform fasttext and fastsent despite having fairly similar computational costs i think this paper could use a little more justification for when its appropriate to use the method proposed here versus more straightforward baselines
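For readers unfamiliar with the CMOW/CMSM construction these reviews discuss, the sketch below contrasts the order-invariant CBOW average with the order-sensitive matrix product, including the near-identity initialization (identity plus small Gaussian noise) credited with making training converge. It is an illustrative toy, not the authors' training code; the vocabulary size, dimensions, and noise scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 1000, 20
vec_emb = rng.normal(scale=0.1, size=(vocab, d))                   # CBOW: one vector per word
mat_emb = np.eye(d) + rng.normal(scale=0.01, size=(vocab, d, d))   # CMOW: near-identity matrices

def encode_cbow(word_ids):
    return vec_emb[word_ids].mean(axis=0)        # order-invariant average

def encode_cmow(word_ids):
    m = np.eye(d)
    for i in word_ids:                           # order-sensitive matrix product
        m = m @ mat_emb[i]
    return m.flatten()                           # d*d-dimensional sentence embedding

s1, s2 = [3, 7, 42], [42, 7, 3]
assert np.allclose(encode_cbow(s1), encode_cbow(s2))       # same bag of words, same code
assert not np.allclose(encode_cmow(s1), encode_cmow(s2))   # reordering changes the code
```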
### Summary:
|
this paper presents cmowan unsupervised sentence representation learning method that treats sentences as the product of their word matrices this method is not entirely novel as the authors acknowledge but it has not been successfully applied to downstream tasks before this paper presents methods for successfully training it and shows results on the senteval benchmark suite for sentence representations and an associated set of analysis tasks all three reviewers agree that the results are unimpressive cmow is no better than the faster cbow baseline on most tasks and the combination of the two is only marginally better than cbow however cmow does show some real advantages on the analysis tasks no reviewer has any major correctness concerns that i can see as i see it this paper is borderline but narrowly worth accepting as a methods paper it presents weak results and its not likely that many practitioners will leap to use the method however the method is so appealingly simple and well known that there is some value in seeing this as an analysis paper that thoroughly evaluates it because it is so simple it will likely be of interest to researchers beyond just the nlp domain in which it is tested as cbowstyle models have been so iclr seems like an appropriate venue it seems like its in the communitys best interest to see a method like this be evaluated and since this paper appears to offer a thorough and sound evaluation i recommend acceptance
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
review of iclr paper 1928 group equivariant generative adversarial networks this manuscript addresses the problem of artificially generating images for which the label should be unaffected by certain symmetries andor translations to address this problem the authors use the group equivariant convolutional neural networks of cohen welling 2016 as the generator andor discriminator in a generative adversarial network they conduct a sequence of experiments on smaller data setsfour real and one syntheticexhibiting varying levels of symmetry which together suggest that group equivariant gans may be more effective than traditional convolutional networks in the low data setting my assessment of this manuscript is that it sits right on the threshold separating acceptance and rejection on one hand the writing is remarkably clear of grammatical errors and the authors are evidently deeply familiar with gan literature on the other hand the papers novelty is minimal and the mathematical descriptions of some important parts of the manuscript are insufficient group equivariant networks were introduced in 2016 and in the original paper those authors demonstrated the efficacy and training efficiency of these networks in this manuscript the authors contribution can be summarized as placing this architecture in a gan and observing similar improvements which is unsurprising and in my assessment a marginal contribution ive summarized further concerns below section 21 the preliminaries section makes the fundamental concepts extremely hard to understand equivariant and invariant are not defined please provide a citation for the plane symmetry groups p4 and p4m the phrase an image is a function f on a rectangular grid mathbbz2 is ambiguous use the notation f mathbbz2 to mathbbr however you later mention that f has k channels in which case f k times mathbbz times mathbbz to mathbbr regardless im confused which operation defines your group you have sums and product and compositions so this should be clarified define a filter psi the g1 y y g on page 3 is confusing how can this work when g in p4 is a rotation and not a translation on the whole i find this section inadequate the cohen paper introduces these concepts well so i encourage you to stick closer to the mathematics introduced there we finally maxpool over the set of transformations to obtain the generated image x in mathbbz2 an image is a function not a pair of integers the groupequivariant discriminator receives an inputs x in mathbbz2 again an image is not a pair of integers table two is problematic for me your groupequivariant networks have less parameters and you are operating in a selfprofessed limited data environment shouldnt i expect that the networks with less parameters exhibit less overtraining what are the details of how you conducted the ablation study what layers andor features did you remove i am not convinced that in table 2 the potential for augmentation leaking means that the cnn with standard augmentation is inapplicable why dont you just let the fid evaluation measure its performance which should be higher if the network has captured symmetries incorrectly docsepthis paper presents a method for incorporating inductive symmetry priors into the network architectures of gans the authors propose replacing standard cnns with groupequivariant cnns in either the generator or discriminator or both the method is evaluated by comparing generated images to originals using frechet distance for a number of datasets the paper is well written and 
comprehensive its main contribution seems to be the adaptation of some common gan techniques to the groupequivariant case the experiments are fairly convincing although i would like to see more discussion about when to use the group convolution in the discriminator vs generator vs both as this seems to make a difference for different datasets i am not an expert and am unsure if frechet distance is the right metric here but it seems to be common at least the authors say they compare their method to one of the two main stateoftheart gan designs it would be interesting to see a comparison with other flavours of gan also given the empirical nature of the paper i think this would be a more convincing argument ie groupconvolutions can improve gans not just biggan it would also be interesting to see how the different methods perform given different amounts of training time in general i like this paper though i think the experiments could be more comprehensive as noted above that being said it is not immediately clear to me that this is enough of a contribution to be accepted since it is just applying existing methods in the groupequivariant case as ive said though i am not an expert in gans and i would be open to increasing my rating if more knowledgeable reviewers disagree on this point docsepshort summary of the paper this paper proposes a new type of architecture that incorporates groupequivarient layers into the discriminator and the generator of a gan this group shall be a group of symmetry of the signals to generate and in this paper the authors focus on the group generated by translations pi2rotations and reflexions they also employ linear upsampling and they show that with no specific adhoc trickshere classconditional batch norm spectral normalisation it is possible to train competitive gans with such priors the experiments are conducted on datasets which have exactly two symmetries translation or rototranslation like rotatedmnist or approximatively one like food101 or cifar10 the major contribution of this paper is experimental showing better perclass synthesis and claiming one needs less data thanks to those priors pros this is an interesting attempt to incorporate group structure in gans which can only benefit to the community various experiments are conducted which positively support the insights of the paper measured thanks to a visual fidelity metric cons i think the formalism could be more complete and rich several points are not addressed like dealing with more complicated groups that imply an inexact covariance but are still groups of variability currently i find it too elementary from the experimental points of view i thought small data meant to use at least an order of magnitude less samples than the original dataset for training gans at least it would have been nice to show a more complete plot in order to understand the tendency is a really limited sample setting rather than 3 data points for 33 66 and 100 of the data i would have appreciated to understand the methodology to pick the hyper parameters in a supervised context i understand that one performs a grid search on a validation set which is a subset of the training set here the way to pick an architecture is unclear to me for instance and in one word it should be clear if modifying an architecture to make it gconvolutional friendly specific remarksquestions i find quite strange to see no mallats citation whereas the first works on invariance in image processingcnns is from his group and since at least 2010 if not older 
httpswwwdiensfrdatapublicationspaperseusipco2010interconfpappdf furthermore i noticed some generative models based on a scattering transform httpsarxivorgabs180506621 httpsarxivorgabs180906367 yet they do not work substantially better than other architectures whereas the scattering transform employed there leads to no loss of information it indicates that group a framework might not be the ultimate key to generating images consequently i would substantially lower the claim to our knowledge we are the first to introduce group equivariance to gans and use geometric considerations in both generator and discriminator this leads to a second remark the groups of equivariance which are used correspond roughly to horizontal and vertical reflexions they are not natural groups to consider for image variabilities like small rotations or small deformations consequently it is natural that applying a data augmentation strategy based on those groups would hurt the performance again the groupa priori is quite limited its not a generic group which is used but a very small subset subgroup of the rotation this is probably due to aliasing but one can wonder how to extend this method to real group of variabilities deformations rotation scaling this should be discuss or at least tried even if it doesnt work it is written that the authors tried to learn a transformation from g to z2 that could be learned i think the formalism here is important it is a representation from g to l2z2 that one would like to design i think it would be also nice to develop more this aspect in the paper or not to mention it at all the number of samples used isnt really impressive 33 of the data isnt order of magnitude smaller than the original dataset thus i find the small data claims strange what happens if less data is used for instance why isnt the linear averaging along the group in the gequivariant architectures and why is it only wrt translations im slightly confuse by the combination of the maxpool and global average pool i was wondering if the authors tried to manipulate the orbits of the generated signal could one use the same displacement or rotation in the latent space to sample two rotated images from each other with the generator suggestions for improving the paper i believe a significant improvement could be obtain from a more careful formalism to handle complex groups of variability of images i would add slightly more elements about the experimental process for being more reproducible i think itd be okay to show some negative results postrebuttal ive read the rebuttal which answers to many points i raised ive reflected it in my scoredocsepsummary the submission concerns an application of group convolutions cohen welling 2016 to the image synthesis setting where images are produced by the generator of a gan the two gan components are augmented mainly by a straightforward replacement of regular convolutions by group convolutions in addition to some other training tricks of the trade gradient penalty spectral normalization experiments indicate somewhat lower fid scores on both synthetic and real settings the method is seen as useful especially for the low data regime case review despite the conceptual simplicity of the presented approach replace convolutions by group convolutions i found the method itself a novel combination of existing concepts i can also imagine that getting the such modified architectures to work in practice can be quite tricky hence the additional gan training measures that were adopted in the training 
setup overall however i see novelty and impact to be fairly limited as all insights come from empirical evaluation which is notoriously difficult for visual synthesis applications the authors provide both visual results as well as quantitative results mostly on the basis of fid measurements which aim to evaluate the quality of the images experiments seem sound and results are clearly presented although the authors claim in the discussion section that visual fidelity and sample complexity are meaningfully improved i miss an attempt at quantitative analysis of this claim either through algorithmic metrics or user studies there are also no statements on the tradeoff between sample fidelity training difficult and amount of training or inference time compute since one of the strengths of the approach may be in the limited data regime i would have liked to see stronger evidence of a major impact there i cant quite see such a trend in the numbers of table 1 overall i see the work performed and results achieved by the submission as good but a stronger verification of the main claims would make the submission even stronger for the main conference track update postrebuttal i thank the authors for addressing some of the concerns raised by the reviewers my stance remains also given the outcome of eg the 10 experiment verification of the main claims remains difficult my score already reflects that id be happy to see the submission accepted
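The first review complains that "equivariant" and "invariant" and the groups p4/p4m are never defined in the submission. For reference, the standard definitions from Cohen & Welling (2016) are stated below; the notation is theirs and may differ from the submission's.

```latex
% Equivariance and invariance of a map \Phi under a group G acting on inputs
% by T_g and on outputs by T'_g:
\[
  \Phi(T_g x) = T'_g\, \Phi(x) \;\; \forall g \in G \quad \text{(equivariance)},
  \qquad
  \Phi(T_g x) = \Phi(x) \;\; \forall g \in G \quad \text{(invariance)}.
\]
% p4 is the group generated by integer translations and rotations by multiples
% of 90 degrees about grid points; p4m additionally contains mirror reflections.
% The first-layer group correlation that replaces ordinary convolution is
\[
  [f \star \psi](g) \;=\; \sum_{y \in \mathbb{Z}^2} \sum_{k=1}^{K} f_k(y)\, \psi_k\!\left(g^{-1} y\right),
  \qquad g \in G .
\]
```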
### Summary:
|
this paper use group convolutional neural networks in both generators and discriminator of gans and demonstrates advantages of this approach when training with a relatively small sample size while the novelty is limited in the work as it simply applies gcnn for gans i believe this application is interesting and the authors have applied it to many gan image synthesis applications conditional generation pix2pix on various benchmarks which gives evidence of the potential of gcnns in generative modeling accept
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
strong points 1 this paper is smooth and wellmotivated label smoothing is a wellknown trick to improve the multiclass neural network this is the first work to theoretically understand the effect of label smoothing by analyzing the convergence speed 2 based on some reasonable assumption this paper proves that sgd with lsr can have faster convergence speed than sgd without lsr when delta is small and it will converge to a worse level of odelta when delta is large 3 by observing that lsr may have adverse effects during the later training epochs a simple twostage algorithm is proposed to take advantage of i fast convergence speed of sgd with lsr ii easy training and better convergence guarantee of sgd without lsr negative points 1 experiment is not enough i the authors mention several times that lsr may not work well for training teacher models however there is no evidence or analysis of whether the tsla method can help solve this problem ii in theorem 3 theta is chosen to be 11delta however it is not clear whether different theta will influence performance iii the experiments are tested based on sgd it is better to show some results on the other optimizers frequently used in deep learning such as adam or adagrad meanwhile it will be better to show that the bound in sgd can be generalized to the other stochastic optimizers 2 even though the convergence speed is theoretically analyzed it still does not explain why lsr can improve over sgd namely the better convergence point in other words these bounds do not explain the gap among baseline lsr and tsla in figure 1 3 lack of experiment setup little information is provided about the training techniques such as dropout batch normalization these techniques may have an influence on lsrdocsepsummarizing the paper label smoothing is a popular regularization method that is used for deep learning models with classification problems but has not been studied from a theoretical point of view extensively this paper studies label smoothing from a optimization view and shows the convergence behavior when sgd is used showing the improvement over the onehot label learning under certain conditions the paper then proposes a new method called twostage label smoothing algorithm tsla which simply turns off the smoother after a certain number of epochs according to the analysis tsla has improved convergence under certain conditions over the original label smoothing baseline the experiments show how the proposed tsla performs better than the label smoothing baseline for 3 image datasets strengths of the paper the papers motivation is clear to study a well known regularizer label smoothing from an optimization point of view it compares the convergence properties of sgd without label smoothing and sgd with label smoothing it further proposes an original method that first uses label smoothing but later turns it off which is shown to be empirically better with some convergence properties weaknesses and issues of the paper in the beginning of section 5 it discusses several papers that show the harm of label smoothing under problem settings such as fewshot learning knowledge distillation and transfer learning as the motivation for thinking of a two step strategy however this paper focuses on the original ordinary learning no transfer learning and not fewshot learning setup and the connection between these were not so clear the intuition behind why label smoothing may be harmful is given in page 5 but it would be nice to have some experiments to support these claims for 
example if optimizing all k loss functions with label smoothing is harder having more classes should lead to more harm when label smoothing is used for stanford dogs and cub2011 90 epochs looks like it is quite small the learning curves for accuracyloss still seems to be going updown respectively for 60 70 and 80 other comments in the experiments it would be better to have a validation dataset to choose the best hyperparameter the epoch number when lbr is dropped and report the test performance based on the chosen one im guessing it will still be better than the baselines since in most hyperparameter candidates the proposed method seems to be working better in page 8 op5 top5 in page 4 different difference after response thank you for providing the response i would like to add some comments to the response about q1 i agree that it is important to work on the original ordinary learning as the first paper in this direction but it makes the story of the paper inconsistent the adverse affect of lsr might only show up in other problem settings distillationfewshottransfer learning and not in ordinary learning and if that is the case it is unclear why the paper focuses on ordinary learning this is just a comment on the storymotivation of the paper about q2 although fig1 shows the benefits of the onehot label over label smoothed version in latter stages of training i believe it does not show how the number of k plays a role in the performance and the difficulty about q3 i agree that 90 epochs is sufficient for the lsr baseline and the current figures demonstrate the benefits of the proposed method but i was more interested in the difference of s in tslas from fig1 it still seems like tsla607080 are improving at the end of 90 epochs while tsla 304050 seem to have converged docsepthis paper analyzes the convergence of sgd with a biased gradient where the bias comes from label smoothing the paper positions itself as a theorectical work towards understanding the success of such smoothing trick but i feel the analysis does not add too much to the literature and its main results are somewhat trivialmisleading in particular given that the analysis hinges on a specific loss function ie eq 2 which is linear in terms of yi label smoothing quickly translates to additive terms that can be wellcontrolled separately from the unbiased gradient it is thus not surprising that standard analysis of nonconvex sgd goes through here with a new additive term appearing in the final gradient upper bound as shown in the paper for this reason i do not see quite technical novelty in this paper my another concern is that the role of delta is unclear by eq 4 this quantity is dataindependent and should be treated as only depending on the distribution of x haty now if we take a close look at the main result ie theorem 3 it is fairly a weak result saying that sgd converges to a point with constant gradient this is because the first case in theorem 3 is essentially ensuring convergence to epsilonstationary point only when epsilon geq omegasqrtdelta geq omega1 ie a point with constant gradient likewise the second case also boils down to the same guarantee as such the comparison in table 1 seems problematic since epsilon should be treated as a variable arbitrarily close to 0 in words only the first row of table 1 will make sense yet there are still two issues here i believe that the infinite iteration complexity of lsr is due to the drawback of theorem 3 as i just pointed out namely running lsr only gives you a point with constant 
gradient although tsla does converge to epsilonstationary point such performance guarantee is not due to your design of tsla but follows from standard sgd in fact simply running sgd from the very beginning already gives such guarantee thus the first row to compare tsla and lsr is somewhat misleading now the question boils down to why not simply running sgd in the first row of the table and main text it is argued that tsla ie lsr sgd has improved iteration complexity of odelta epsilon4 which is better than the o1epsilon4 of vanilla sgd i do not really agree with the conclusion because delta is independent of epsilon in fact it can be a dimensiondependent quantity and eg even blow up to od or so making the iteration complexity of tsla worse than sgd by a dimensiondependent factor to make the work qualify for a toptier venue authors need to either present a new algorithmanalysis with vanishing gradient ie an upper bound of the form odelta cdot epsilon or show hardness result say any algorithm that takes the label smoothing must incur a stationary point with gradient geq omegaepsilon delta updates after author response the authors basically posted their response at the last minute of the window which eliminates the possibility for further discussion while it is lengthy and pointtopoint i found it failed to clear up any of my concerns i am very disappointed that even after they recognized the misleading arguments in convergence analysis in theorem 3 of the revised version they are still claiming nabla fwr leq epsilon when delta oepsilon2 as convergence such conclusion will be extremely misleading if readers missed or did not carefully think about the condition on epsilon authors may want to refer to a calculus textbook for the rigorous definition of convergence regarding technical contribution authors acknowledged that their analysis does not guarantee convergence to stationary point since there is an additive odelta term in the gradient upper bound this immediately diminishes their theoretical contribution authors did not justify their technical novelty in fact if we carefully look at their analysis in the appendix it follows from standard sgd with a slight adjustment to the biased gradient induced by label smoothing which is acknowledged by the authors this is an additive gradient and thus is easily controlled it is also the main reason why in their main theorem the gradient upper bound suffers a nonvanishing odelta term authors argued that compared to running sgd from scratch the benefit of lsrsgd is the introduction of delta however this quantity itself is out of control note that even they were able to show deltao1 it is not strong enough here since this can easily be obtained by running sgd the only way that i see will save the paper is to show under distributional assumption of the data that delta o1d for example however i did not see how to make it happen and authors completely ignored such analysis in their response overall this is a paper playing tricks on its technical parts it is decorated with bunch of mathmatical analysis most of which is known and standard and the introduction of new insights is minimal i will be shocked if it gets accepted in iclr or equivalent conferencesdocsepoverview this paper investigates the effects of label smoothing regularization and proposes two contributions improved theoretical convergence guarantees for sgd with labelsmoothing and a practical twostage training strategy tsla that uses label smoothing during the first several epochs and disables it 
thereafter the performance gains from tsla are demonstrated both theoretically and in three image classification experiments all three use resnet18 momentum against reasonably tuned baselines the overall readability of the paper is decent there are numerous minor issues see below they are all trivial to fix score justification while this paper definitely has scientific merit i cannot recommend acceptance in its current form my main issue with the paper is with the claim that tsla can work with any stochastic gradient optimizer momentum adam etc see p5 second to last paragraph while i have no doubts that one can plug in any optimizer the theoretical guarantees were provided for sgd and more importantly the experimental results only cover sgdmomentum not to mention a single model architecture more complex optimizers eg adam and architectures eg transformer may introduce problems when switching between training phases therefore i suggest further supporting authors claim by conducting additional experiments in other popular lsr use cases for example one such use case is machine translation most recent models for this task use label smoothing to improve bleu eg 1 2 one can train a transformerbase model with adam optimizer for iwslt14 deen3 task in around the same time that it takes to train resnet18 on cifar see fairseq4 of course any other task would also support the claim provided it doesnt use the same model andor optimizer resolved another issue i have is a subjective one so it didnt affect my score it feels that the paper contains two independent lines of research with limited synergy for instance the theoretical study estimates label smoothing at convergence while tsla seemingly disregards this by disabling label smoothing during the final stages of training questions page 8 we use two different labels to smooth the onehot label the uniform distribution over all labels and the distribution predicted by an imagenet pretrained model which downloaded directly from pytorch resolved please clarify how exactly do you adapt the predictions from 1000 imagenet classes to cifar100 classes as far as i understand there are no pretrained models in pytorch itself did you mean torchvision or torch hub if so it would be great to state the exact model and library version to facilitate reproducibility alternatively publishing the source code of the proposed solution should also do the trick as long as it clearly describes all the requirements page 6 experiments the momentum parameter is fixed as 09 resolved momentum sgd and other popular algorithms maintain a set of statistics accumulated over training batches please clarify do these statistics carry over from stage 1 to stage 2 of tsla if not what happens to them at t1 page 78 the value of theta resolved as far as i understand you select theta by maximizing the performance of lsr baseline and then use the same theta for tsla if not please clarify please elaborate on how exactly you tuned theta is there some intuition why optimal theta for lsr would generalize to tsla if not how does tsla perform under exaggerated theta math definition 1 resolved the formulae use lowercase f suggesting that the gradientjacobian of the model output should be zero did you mean capital f assumption 2 resolved q1 neural networks typically have many global optima due to inherent invariances in the model does this assumption need to hold for every f equation 4 hat y vs yls i was wrong as i understood hat y is the common smoothing vector that is used to construct yls did you mean 
to use yls in the objective function note the same expression with hat y is also used later on pages 34 typos presentation page 2 last paragraph the examplelabel pairs are draw page 3 after remark 1 label y is draw from id recommend changing draw to drawn page 3 after remark 1 let y be a label introduced for smoothing label while the sentence is technically correct i would suggest paraphrasing page 3 remark 2 we do not require the stochastic gradient is unbiased suggestion require that is unbiased or require to be unbiased page 3 assumption 2 ie there is no very bad local optimum on the surface of objective function again technically correct but id recommend paraphrasing page 4 remark sample complexity for training a learning model from did you mean training a model wo learning or training a machine learning model page 4 the different between sgd with lsr and sgd without lsr did you mean the difference page 4 bottom miniimagenet this issue is of negligible importance but id recommend using one of miniimagenet miniimagenet or minimagenet page 4 bottom between input example and output logit maybe theres a missing the algorithm 2 should t1 be listed as an input along with theta eta1 eta2 page 5 it seems that training smoothed label in the late epochs training with smoothed labels page 5 it runs a stochastic algorithm aeg sgd with lsr in t1 iterations runs for t1 iterations page 6 lsr dose not converge lsr does not converge page 7 bottom cifra100 did you mean cifar100 page 7 tsla may not converges if it drops may not converge without s page 7 and divide them by 10 every 60 epochs suggested in as suggested in page 8 pretrained model which downloaded which iswas downloaded page 8 conclusion that tsla benefits by lsr in the first stage benefits from lsr page 8 which verifies our theoretical findings in sections 5 section 4 sectionsections formatting references 1 attention is all you need vaswani et al httpsarxivorgpdf170603762pdf 2 joint sourcetarget self attention with locality constraints iyer et al httpsarxivorgpdf190506596v1pdf 3 httpworkshop2014iwsltorgdownloadsproceedingpdf 4 see fairseq library examplestranslation is a tutorial for iwslt14 githubcompytorchfairseqtreemasterexamplestranslation edit my original score was 5 i have read the authors response and updated my score to 6 my score represents the experimental side of the paper i do not feel confident judging the impact of theoretical contributions
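To make the label-smoothing mechanics discussed in this review concrete, here is a minimal PyTorch-style sketch of a smoothed cross-entropy loss and the two-stage switch that TSLA is described as performing; it is not the authors' code, and the function names, the default theta, and the uniform (or teacher-predicted) smoothing distribution are illustrative assumptions taken only from the review's description.

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, theta=0.1, smooth_dist=None):
    """Cross-entropy against a label-smoothed target:
    y_ls = (1 - theta) * one_hot(y) + theta * smooth_dist (uniform by default)."""
    n_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    one_hot = F.one_hot(targets, n_classes).float()
    if smooth_dist is None:  # uniform smoothing; a teacher model's softmax could be used instead
        smooth_dist = torch.full_like(one_hot, 1.0 / n_classes)
    y_ls = (1.0 - theta) * one_hot + theta * smooth_dist
    return -(y_ls * log_probs).sum(dim=-1).mean()

def tsla_loss(logits, targets, step, t1, theta=0.1, smooth_dist=None):
    """Two-stage schedule: label smoothing for the first t1 iterations, plain CE afterwards."""
    if step < t1:
        return smoothed_cross_entropy(logits, targets, theta, smooth_dist)
    return F.cross_entropy(logits, targets)
```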
### Summary:
|
the paper considers ways to understand label smoothing methods which are widely used in many applications there is some theory on the performance of sgd with and without the methods of the paper but there is a significant gap in terms of how the theory offers insight into label smoothing there are some empirical results but they are insufficient and there is not much description of the experimental setup there was a diversity of reviews but after a discussion among reviewers it was felt that overall another iteration on improving the coherence and presentation of the paper will make it much better for the community
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors propose to formulate the neural network architecture as a collection of multivariate categorical distributions they further derive samplebased gradient estimators for both the stochastic architecture and the deterministic parameters which leads to a simple alternating algorithm for architecture search pros intuitions and formulations are easy to comprehend simpler to implement than most prior methods appealing results on cifar10 as compared to the stateoftheart cons limited technical novelty the approach is a straightforward extension of shirakawa et al 2018 the main algorithm is essentially the same except minor differences in gradient derivations lack of theoretical justifications it seems all the derivations at the beginning of section 2 assume the architecture is optimized wrt the training set however the authors ended up splitting the dataset into two parts in the experiments and optimize the architecture wrt a separate validation set instead this would invalidate all the previous derivations the method is a degenerated version of enas a closer look at eq 2 and 3 suggests the resulting iterative algorithm is almost the same as that in enas where the weights are optimized using gd wrt the training set and the architecture is optimized using the logderivative trick wrt the validation set the only distinction are i using a degenerated controllerpolicy formulated as categorical distributions ii using the validation loss instead of the validation accuracy as the reward according to eq 3 this is also empirically reflected in table 1 which shows the proposed pdas is similar to enas both in terms of efficiency and performance the mathematical resemblance with enas is not necessarily bad but the authors need to make it more explicit in the paper minor issues im not sure whether its a good practice to report the best test error among multiple runs in table 1 the method is not really parameterless as claimed in the introduction for example a suitable learning rate adaptation rule can be taskspecific thus requires manual tuningdesign the method also consists of some additional hyperparameters like the lambda in the utility transformdocsepthis paper proposes an architecture search technique in which the hyperparameters are modeled as categorical distribution and learned jointly with the nn the paper is written well i am not an expert of the literature in this domain so will not be able to judge the paper regarding where it is located in the related work field pros this is a very important line of research direction that aims to make dnns practical easy to deploy and costeffective for production pipelines the categorical distribution for hyperparameters makes sense and the derivation of the joint training seems original idea i liked the fact that you need to train the nn just twice the second one only to fine tune with optimized parameters two very different problems inpaintingencodingdecoding cnnclassification have been demonstrated existing experiments have been explained with enough detail except for minor points cons i speculate that there is a tradeoff between the number of different parameters and whether one training is good enough to learn the architecture distribution ie when you have huge networks and many parameters how well this method works i think the authors could provide some experimental study suggesting their users what a good use case of this algorithm is compared to other techniques in the literature in what type of network and complexity this search method works 
better than others ecae for inpainting seems to be working significantly better than the proposed technique regarding results i was expecting more insights into why this is the case as above at what type of a problem one should pick which algorithm if the 7hours vs 3days gpu difference negligible for a client should one pick ecae in theory there has been shown lambda samples equation 2 and 3 however the algorithm seems to be using just 2 if i didnt miss this is not discussed thoroughly i speculate that this parameter is essential as the categorical distribution gets a bigger search space also the reliability of the model and final performance how does it change concerning this parameterdocsepthis paper presents a joint optimization approach for the continuous weights and categorical structures of neural networks the idea is the standard stochastic relaxation of introducing a parametrised distribution over the categorical parameters and marginalising it the method then follows by alternating gradient descent on the weights and the parameters of the categorical distribution this exact approach was proposed in httpsarxivorgabs180107650 by shirakawa et al the only innovation in this work is that it uses categorical distributions with more than two values this is a minor innovation the experiments are however interesting as the paper compares to the latest hyperparameters optimization strategies for neural nets on simple tasks eg cifar10 and gets comparable results however given that this is the biggest contribution of the paper it would have been nice to see results in more complex tasks eg imagenet or translation i very much enjoyed the simplicity of the approach but the question of innovation is making wonder whether this paper makes the iclr bar of acceptance the paper is also hard to read because of many english typos
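To illustrate the mechanism this review compares to ENAS — architecture choices modelled as categorical distributions and updated with the log-derivative (REINFORCE) trick on a validation loss — here is a small NumPy sketch; the softmax parameterisation, the single-sample estimator, and the absence of the paper's utility transform are simplifying assumptions, so this is not the paper's implementation.

```python
import numpy as np

def sample_architecture(logits):
    """Sample one option per architectural decision from softmax(logits); logits: (decisions, options)."""
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    choices = np.array([np.random.choice(p.size, p=p) for p in probs])
    return choices, probs

def log_derivative_grad(logits, choices, probs, val_loss):
    """REINFORCE estimate of d E[val_loss] / d logits for a single sample.
    For a softmax-parameterised categorical, grad log p(c) = one_hot(c) - probs,
    so the single-sample estimator is val_loss * (one_hot(c) - probs)."""
    grad = np.zeros_like(logits)
    for i, c in enumerate(choices):
        one_hot = np.zeros(logits.shape[1])
        one_hot[c] = 1.0
        grad[i] = val_loss * (one_hot - probs[i])
    return grad

# alternating loop (sketch): sample an architecture, take a gradient step on the network
# weights with training data, then step the logits with -lr * log_derivative_grad(...)
```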
### Summary:
|
the paper presents an architecture search method which jointly optimises the architecture and its weights as noted by reviewers the method is very close to shirakawa et al with the main innovation being the use of categorical distributions to model the architecture this is a minor innovation and while the results are promising they are not strong enough to justify acceptance based on the results alone
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
in this work the authors use evolutionary strategies to train recurrent neural networks with hebbian plasticity rules they test the system on two tasks sequence prediction and a simple rl tasks that involve robot navigation the approach is compared against previous work that uses plasticity but without recurrent connections and other approaches such as lstms for the problems presented in this paper the proposed approach outperforms most methods used in the comparison the ideas behind the paper are interesting and investigating more biologicalinspired methods that could potentially solve problems without gradient information are exciting however the contributions of this particular paper could be further elaborated on and the particular test domains are all rather simple beyond the approach introduced by najarrorisi 2020 and soltoggio et al 2018 the main addition is to also use a recurrent network this by itself would be fine but then the approach is only applied to a simpler rl domain than what najarrorisi investigated previously did the authors also try the approach on a more complex domain with higher input space how did the approach perform there that would be a more convincing demonstration of the power of rnnsplasticity rules additional comments the visualization of what the weights learn are interesting what did the recurrent hidden states learn why is the robot in figure 3c navigating in these circles it would be great to show trajectories before and after learning is the distance signal used as input to the neural network or only used for the reward as in the point nagiation task in the original maml paper if used as a network input how well would it do without it in some cases it even produces better results compared with lstm despite the simpler model architecture im not sure i would agree that the architecture is necessarily simpler why is only wd mulitplied with the modulation signal in equation 5 recursionbased learning employs recurrent neural networks rnn lstm hochreiter schmidhuber 1997 and selfattention mishra et al 2018 chen et al 2021 layers as learners it would be great to have an exact definition of what the authors refer to as recursion figure 3 should mention what the different colors mean an interesting paper but currently the approach would need to be tested on more complex problems to more fully demonstrate the advantages of using an rnnplasticity architecture docsepthe authors applied evolutionary algorithms to metalearn plasticity rules for recurrent neural networks they show this approach performs better than alternative metalearning approaches on two artificial tasks sequence prediction and wheeled robot navigation strength the model appears to perform better than alternative models in the two tasks tested weakness in my opinion the main weakness of this work is the lack of intellectual insights or impressive empirical results the main contribution is to apply evolutionary algorithm to metalearn plasticity rules for rnns given that there are quite a few papers in this area that involve two of the above elements evolutionary algorithm plasticity rules rnns it is not too hard to combine the three given the somehow arbitrary nature of the artificial tasks studied by the authors it is difficult to tell whether the empirical results are impressive minor concerns intro plastic rules plasticity rules intro plastic rules aka hebbs rule hebbs rule is only one kind of plasticity rule in general the text can be edited to improve clarity should cite confavreux neurips 
2020 the main challenge to this manuscript is the lack of intellectual insights except a new combination of several existing techniques and a lack of strong empirical results docsepthe authors explore the interesting although not novel topic of using derivativefree optimization evolutionary algorithms to optimize learning rules the authors propose a method eprnn evolutionary plastic recurrent neural networks that uses evolution strategies to learn a hebbian learning rule for a recurrent neural network strenghts optimizing learning rules is interesting and its a promising approach weaknesses the proposed method is not new the combination with recurrent neural networks may be but it is still of very limited novelty the language of the paper has many mistakes and is a bit informal in some parts researchers are increasingly obsessed etc the parallel biological neural networks is interesting but very weak the main issues with the paper are language and the minimal novelty of the proposed methods docsepthe authors put together their own flavor of metalearning using offtheshelf plasticity learning evolutionary computation and recurrent network experiments are conducted on two overlysimplistic customdesigned tasks and the results lack impact this paper proposes a new flavor of metalearning which is an interesting and promising research avenue currently trending the proposed approach particularly aims at joining plasticity learning evolutionary computation and recurrent neural networks more trending topics in a blend yet unpublished the state of the paper however is pretty rough and it would require considerable work before it can be considered for publication in a major venue the experimental setup particularly is inconclusive the tasks are overly simplistic custom but unmotivated and addressed with an unnecessarily complex system leaving to much room as to where positive or negative performances come from a specific component i would like to start by pointing out a few words or sentences taken from the paper that the authors should necessarily address just find them on the paper i am confident the improvement can be evinced easily from the context as a suggestion focus on quantitative rather than qualitative remarks and avoid claims unsustained in the literature if not straight wrong researchers are increasingly obsessed ann have achieved great success due to the strong capacity of handling large datasets gradient oracles anns usually need overconsumption of datasets in a mimic manner learning takes place in the feedforward pass recursion are found to be extremely sampleefficient the outer learning loop optimize the meta hint meta is an adjective metalearning is a type of learning not learning a meta model to specific task by utilizing the meta yep it was not a typo there are plenty more but i am confident that the authors can take it from here i strongly suggest to involve an english native speaker for general proofreading and edit as this work could gain significantly from an exposition targeted to an international audience it is found that evolution can be more efficient in cases of very long horizon in reinforcement learning citing salimans this conclusion is not in the paper as far as i remember would be more likely linked to the work by stanley on openendedness evolution meta learning aims at building learning machines that gain experience there are obviously bunches of more sophisticated neural structure each result is concluded from independent 3 runs here is some feedback on the 
exposition also these are all requirements it is unnecessary to italicize and underline the already title case words for the eprnn acronym nor to bold it 7 times in the abstract nor in the rest of the paper some vocabulary which is uncommon for a ml audience and should be explained before usage such as memristance and biological plasticity finally approaching the technical perspective there are serious claims that need to be backed by a scientific process figure 1 illustrates a simplification of natural evolution as learning loop the external loop is the hereditary passing of genetic material left the internal loop depicts the lifetime of an individual the implication in this doubleloop setup is that experience obtained by an individual through interacting with the environment is then passed genetically to its offspring which incidentally would require selfmodification of the dna to incorporate the changes during the individual lifetime please confirm whether you stand by this interpretation and provide citation to work done in verification and support to this hypothesis the list of contributions shows 3 points the first is a study that i could not find in he actual paper the second describes the proposed metalearning framework without clearly showing the contribution which as far as i understand is limited to having put these three pieces together without even supporting the case as to why it should be advantageous the third point is the fact that you run experiments unless it is meant to highlight how the tasks were custommade though the paper should highlight the relevance and importance of these two tasks as it is unclear in conclusion the list of contribution needs rewriting to correctly represent a list of actual contributions sustained in the paper reference to past work is incomplete if not even biased from a ml perspective as a consequence the arguments on this paper are often unnecessarily complex or simply incorrect again from the perspective of ml literature this is also a requirement for publication the literature review and table 1 should mention at the very least the work of quoc le and esteban real in automl as it is the most active and recognized work in metalearning of artificial neural networks evolutionary algorithms are referenced from zhang et al 2011 pointing to a lesser known paper with a dubious title the only evolutionary computation work that is utilized in multiple places is the openai es a single paper published on arxiv without peer review which itself lacks a correct representation of the longstanding field of evolutionary computation a few names to build a better awareness of the field include foundation work from koza and banzhaf then miikkulainen into stanley and absolutely hansen or even glasmachers even in something as fundamental as anns the concepts of network architecture recurrent connections sample efficiency gradientbased vs gradientfree etc are not as mysterious and vague as presented in this paper i believe a more deep understanding of the underlying concepts can help the authors constructing a more compelling and correct argument as a concluding note the authors should learn about neuroevolution of recurrent networks for continuous control tasks as the literature has been going strong for over 20 years adding plasticity rules is not original per se and wrapping into a metalearning framework only automates hyperparameter optimization itself minimal in modern evolutionary algorithms the proposed tasks are incredibly simple as a requirement to 
change my mind on my recommendation and to study the effective complexity of the tasks i propose the authors to motivate extensively their need to construct their own custom tasks as a baseline i also see it necessary to include results from random weight guessing of minimalsized networks for example the first task regression of a rescaled and translated sinusoidal with noise can likely be solved by literally guessing through subsequent random initializations the weights of a neural network composed of only one layer of one recurrent neuron or at worse one layer of 10 recurrent units followed by one feedforward predictor the architecture proposed in figure 6 of two layers of 64 relu sandwiching one layer of recurrentplastic neurons is overly complex highly inefficient and not motivated the second task is even easier a wheeled robot with power control on the wheels and a sensor estimating the distance from the goal this class of problems is typically solved by a linear controller as it requires no memory nor nonlinearities so i would expect to see the results of rwg on a network with just two feedforward neurons the ouput layer again with no hidden layer once again the proposed structure identical to the previous but for adding one extra 64neurons fullyconnected layer is entirely out of scale for the problem and unmotivated in the text while the introduction of yet another metalearning framework should be supported as the area is extremely promising and the combination of plasticity learning rules plus evolutionary computation plus recurrent networks shows great potential this paper is not in a shape proper for publication in a major conference and the results on the overlysimplistic custom benchmarks are far from conclusive
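For readers who want to see what a plasticity rule of the kind discussed here looks like, below is a generic sketch of a neuromodulated Hebbian (ABCD-style) update applied to a recurrent weight matrix, in the spirit of Soltoggio et al. and Najarro & Risi; the exact rule in the paper, which weights receive the modulation signal (the review's question about equation 5), and the coefficient layout are not specified here, so everything in this sketch is an illustrative assumption.

```python
import numpy as np

def hebbian_update(W, pre, post, coeffs, modulation, eta=0.01):
    """Generalised Hebbian step: dW = m * (A * outer(post, pre) + B * pre + C * post + D).
    coeffs = (A, B, C, D) are plasticity parameters searched by the outer evolutionary
    loop; this local rule is applied during the agent's lifetime (the inner loop)."""
    A, B, C, D = coeffs
    dW = (A * np.outer(post, pre)
          + B * pre[None, :]
          + C * post[:, None]
          + D)
    return W + eta * modulation * dW

def plastic_rnn_step(W_in, W_rec, h, x):
    """Plain recurrent step; W_rec is the matrix the Hebbian rule adapts at every step."""
    return np.tanh(W_in @ x + W_rec @ h)
```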
### Summary:
|
in this paper the authors demonstrate the use of metalearning in plastic recurrent neural networks with an evolutionary approach avoiding gradients they show that this approach can be used to develop networks that can solve problems like sequence prediction and simple navigation the reviews for this paper all had scores below the acceptance threshold (3, 5, 3, 3) the principal concerns were 1 the lack of novelty other papers have taken very similar approaches eg najarro & risi 2020 or miconi et al 2019 and fundamentally this paper simply ties together different elements in one package 2 lack of demonstration of the approach beyond some very simple tasks 3 lack of connection to the related literature on neuroevolution and ml 4 general clarity and style of writing issues the authors responded to the reviewers but the responses did not convince the reviewers enough to increase their scores past threshold given this a reject decision was reached
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
summary this paper presents an improved version of qtran 1 the design is based on new loss function design as well as new actionvalue estimator designs the paper claims superior perfromance gains compared to previous methods on the starcraft multiagent challenge smac environment strengths the ideas proposed to improve the previous qtran or might be applied to other marl algorithms as well seems novel and general the authors perform comprehensive ablation studies for different components they proposed the empirical performance on the smac benchmark is better and more stable across different runs the writing is clear and easy to follow weaknesses only one environment is evaluated which might not be that convincing it would be good to see more results on different benchmarks in figure 3 some results seem to be not converged yet since the metric is the win rate which is bounded it would be interesting to see given enough training steps whether all methods can actually converge to similar winning rate or inherently the proposed scheme can lead to better results minor issues typo page 6 for being selfish this for being selfish this considering all the aspects i tend to accept the paper in the current stage reference 1 qtran learning to factorize with transformation for cooperative multiagent reinforcement learning 2019 updates after reading the other reviews and the rebuttal i still maintain my current score the additional experiments on the converged results are good to me as im not very familiar with the performance in marl literatures i have decreased my confidence from 3 to 2 to reflect some of the concerns of other reviewers involving the performance for the baselinesdocsep summary this paper addresses the domain of cooperative multiagent learning with centralised learning and decentralised execution specifically it improves on the qtran algorithm a theoretically justified algorithm which previously had not produced strong learning performance with these improvements qtran outperforms baselines on the smac environments i recommend accepting this paper it delivers strong performance on a popular benchmark for complex cooperative multiagent learning smac while the algorithmic contribution is incremental it still delivers insight into how to improve the empirical performance of a theoretically interesting and welljustified algorithm positives the problem addressed cooperative multiagent environments with ctde is a widely studied and important one it is well set up in the paper including discussion of related algorithms the base algorithm qtran should theoretically perform well in a wider variety of environments than other algorithms for these problems so improving its performance is particularly valuable the improvements made to the algorithm are clear and well motivated section 31 in particular explains clearly the difference the modified loss is intended to make the empirical studies in the paper are strong they show that qtran outperforms several baselines in data efficiency and final performance across a variety of domains further a comprehensive ablation study shows that each of the improvements made to qtran is independently important in at least some domains negatives the algorithmic contribution of the paper is relatively minor since it provides fairly simple modifications to an existing algorithm experimentally it would be good to see experiments on similar domains to those addressed in the original qtran paper which are designed to probe the advantages qtran has over related 
algorithms this would demonstrate that qtran retains the benefits of qtran in nonmonotonic factorisable environments sources of reviewer uncertainty i am not knowledgeable enough in this domain to be certain of the coverage of the baselines and domains in the paper since the empirical performance of the algorithm is central to the paper this is importantdocsep summary and claims this work proposes a marl multiagent reinforcement learning algorithm in the marl setting multiple agents have to make choices based on independent information to maximize a common objective an existing algorithm in this space is qtran the authors propose several modifications to qtran changing the architecture adding two additional constraints to the loss function and also allowing gradients to flow from the qtran objective into the true actionvalue estimator the claims of the paper are 1 qtran achieves better performance than qtran 2 the modifications introduced stabilize training compared to qtran it took me some time to fully understand all of the components that go into qtran and i find the complexity of the overall algorithm pretty surprising especially considering that other algorithms in this space are as simple as add up all the q values of the individual agents i think the proposed changes consist of rather than directly training actionvalue networks a qmixlike hypernet approach is used the network that estimates the true combined actionvalue is implemented through what the authors call a semimonotonic mixing network which is the sum of a nonmonotonic regular hypernet and a monotonic hypernet as used in qmix this seems pretty arbitrary isnt the original idea behind qtran that this would accurately track the true values in qtran the separate network that aggregates the q values of the individual agents is trained to track the true actionvalue in qtran this is done through multiple hypernetworks the authors call these heads the loss function is modified to impose two additional constraints on the transforming value function gradients are now also backpropagated from the tracking loss into the true actionvalue estimator which makes it somewhat unclear what it is actually representing relation to prior work the paper is positioned sufficiently with respect to prior work ive noticed that there is a larger section on related work in the appendix im not sure what the purpose of moving the related work into the appendix is especially if some of the papers mentioned there are not actually related to the work presented in this paper i think it would be good to try to move as much as possible of that section into the main text leaving out prior work that is not sufficiently related some of the additions in qtran seem very similar to ideas proposed in qmix but this is not directly acknowledged as far as i can tell it would be good to point out which parts of the architecture come from qmix are the claims supported the experiments presented in the paper are reasonably thorough and show that qtran consistently outperforms qtran on the tasks that were tested smac starcraft multiagent challenge is a nontrivial benchmark so i would agree that claim 1 has been shown sufficiently but im not sure whether the ablation studies are thorough enough to really demonstrate that all of the components of the rather complex proposed algorithm are really needed the authors often make claims about improved stability and other properties of the algorithm throughout the paper but these are not supported by any empirical evidence if the 
authors want to claim that qtran outperforms qtran because of a specific mechanism then it would be good to provide some sort of empirical evidence or proof the proof in appendix a doesnt count since it doesnt make any statements about stability therefore i think that claim 2 is currently not well supported and it would be good to either support it better or soften the statements in the paper to make statements in the form of we believe that the algorithm has improved stability presentation and clarity the paper is reasonably clear and understandable there are some cases where incorrect grammar or word choice made a sentence difficult to understand for example the choice of affluent to describe a class of estimators it would be good to address cases like this to improve the clarity of the paper conclusions the main claim of the paper that qtran is an improvement over qtran has been demonstrated sufficiently but the high complexity of the approach with several additions to the algorithm feeling somewhat arbitrary and the fact that the paper merely presents an upgrade to qtran could be potential arguments against accepting it edit after author comments i have read the author comments and the latest paper revision the authors have noticeably improved the clarity of the paper in several places and adjusted their claims about the stability of the algorithm and the improved ablation studies are appreciated unfortunately after thinking it through very carefully and despite the author comments i have not been able to understand some aspects of the model for example why gradients from the tracking loss are backpropagated into the value function that is supposed to track the true actionvalues several parts of the architecture seem to have a complicated dual purpose which makes it difficult to understand what is going on and why the model is performing better i suspect that other readers might also encounter similar issues which makes it difficult for me to raise my rating ive decided to leave the rating at 6 marginal acceptdocsepthis paper provides good improvements that make qtran more practical and can be applied to problems other than matrix games since qtran is a significant improvement for valuebased multiagent reinforcement learning after qmix the practical implementation of qtran is expected for a long time in the community however after the publication of qtran especially in 2020 some other works have explored the question of how to extend qmix to the full igm function class due to these works qplex is the major concern given that weightedqmix has been compared in the experiments the expectation of the advanced version of qtran is higher than before i have several concerns regarding several core contributions of qtran qtran relies heavily on the true joint actionvalue function 11 however learning joint actionvalue functions is not an adorable choice in multiagent problems 12 to ease the training and representation of joint actionvalue functions the authors condition qjt on individual q values and use a semimonotonic structure however it is difficult to tell the contribution of the monotonic part it has been shown that monotonic functions can not represent some qvalues why should this part be included i expect that i can find the answer from ablation studies but on two out of three scenarios fcqtran is very similar to qtran the authors can provide a more serious discussion of this part to make their paper stronger about the training of qtrani i have two questions about the training of this 
value function 21 when training qtrani whether local utility function of agent j qj using the notation from the paper is updated 22 the training scheme is a midpoint between vdn and qmix which is similar to an attention mechanism that has been explored in multiagent value decomposition settings the formulation is quite different from previous papers dop wang et al 2020 and refil iqbal et al 2020 but based on the results from these previous work i think the multihead structure may not improve the performance although the authors use a matrix game to illustrate their idea which i appreciate i can hardly tell whether this example is specially designed i was expecting a convincing ablation study on smac but i do not find them sufficient 1 the authors did not record how many random seeds did they test and smac tasks are typically sensitive to random seeds 2 the gap between qtran and qtran is not significant if the authors can provide results with more random seeds on more maps i will consider revising my rating my last concern is about qplex as cited by the authors similar to qtran qplex provides full expressivity for the igm function class nevertheless the implementation of qplex seems to be much more lightweight than qtran since qplex has provided codes that can be freely tested on the smac benchmark i was wondering why the authors cited this paper but did not compare to it at least a detailed discussion of the differences can make the contribution of qtran clearer a minor concern about experiments it seems that the authors are using an older version of qmix in the latest version qmix can achieve a win rate pf 80 on mmm2 this fact is unknown for many because the journal version of qmix reports the same win rate as in this paper wang et al 2020 wang y han b wang t dong h and zhang c 2020 offpolicy multiagent decomposed policy gradients arxiv preprint arxiv200712322 iqbal et al 2020 iqbal s de witt cas peng b bhmer w whiteson s and sha f 2020 aiqmix attention and imagination for dynamic multiagent reinforcement learning arxiv preprint arxiv200604222 update thanks for the authors clarifications after a careful reevaluation of the paper i have many concerns about the performance of baselines on the starcraft ii benchmark tasks the reported performance is not consistent with those reported in the smac benchmark paper see figure 456 in 1 and qplex paper figure 5819 in 2 moreover i also evaluate the available github codes of baselines on my own which is consistent with 12 using results in 12 qtran significantly underperforms the baselines on the starcraft ii benchmark tasks moreover the paper claims that it uses the standard starcraft ii benchmark the latest version of sc2 and the default baseline codes due to these concerns i tend to lower my rating 1 samvelyan m rashid t de witt c s et al the starcraft multiagent challenge arxiv preprint arxiv190204043 2019 2 jianhao wang zhizhou ren terry liu yu yang and chongjie zhang qplex duplex dueling multiagent qlearning iclr submission httpsopenreviewnetforumidrcmk0xxiqv
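To ground the review's contrast between "adding up all the q values of the individual agents" (VDN) and monotonic hypernetwork mixing (QMIX), here is a compact PyTorch sketch of those two baseline mixers; it is not QTRAN+'s semi-monotonic mixing network, and the embedding size and layer shapes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

def vdn_mix(per_agent_q):
    """VDN: the joint action-value is simply the sum of the individual utilities."""
    return per_agent_q.sum(dim=-1, keepdim=True)

class MonotonicMixer(nn.Module):
    """QMIX-style mixer: hypernetworks map the global state to non-negative mixing
    weights, so dQ_tot/dQ_i >= 0 (the monotonic subset of the IGM function class)."""
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed_dim), nn.ReLU(),
                                      nn.Linear(embed_dim, 1))

    def forward(self, per_agent_q, state):
        # per_agent_q: (batch, n_agents), state: (batch, state_dim)
        w1 = torch.abs(self.hyper_w1(state)).view(-1, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(-1, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(per_agent_q.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(-1, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(-1, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(-1, 1)  # Q_tot: (batch, 1)
```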
### Summary:
|
this paper proposes practical improvements to theoretically well founded qtran which is a stateoftheart technique of cooperative multiagent reinforcement learning the improvements include new designs of loss function and actionvalue estimator which might be widely applicable beyond qtran however it is not obvious if the proposed improvements actually improve the performance of qtran and experimental evaluation is essential to this work after the discussion there remain some major concerns about the experimental results in particular the performance of baselines in the experiments is not consistent with those reported in the prior work
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper describes an adversarial training approach that in addition to the commonly used robustness loss requires the network to extract similar representation distributions for clean and attacked data the proposed method is inspired by domain adaptation approaches that require a model to extract domain invariantagnostic features from two domains in the context of this paper the two domains are the clean and adversarially perturbed images and the network is required to extract domain invariant representation to achieve domain invariance the authors propose a domain classifier ie an adversarial network that discriminates the representations from clean and attacked images the feature extractor is then required to generate features that fool the domain classifier the authors then provide extensive experiments on smallscale benchmark datasets svhn cifar10 cifar 100 and mnist in the supplementary material to show the robustness of their proposed approach against the stateoftheart robustness methods under whitebox and blackbox attacks the authors show that their proposed method provides 1 higher accuracy on attacked data more robustness and 2 higher accuracy on clean data closing the gap between the performance on clean and attacked data in addition the paper provides insightful experiments on robustness to unforeseen adversaries robustness to unforeseen corruptions transfer learning and ablation studies strengths the idea is simple yet it leads to significantly more robust networks the paper is well written and it is easy to follow while the experiments are only carried out on smaller scale datasets they are thorough and they support the claims of the authors weaknesses i dont see major weaknesses in the paper below are some minor points dialawp comes out of the blue in table 3 for the sake of consistency i suggest adding it to tables 1 and 2 as well and providing the formulation for selfsufficiency the tsne plots in figure 5 for clean and perturbed distributions seem to have been calculated separately which means that we are effectively looking at two different embedding spaces when we look at a and b i suggest that the author append the clean and perturbed representations calculate the tsne embedding jointly and then plot them into their corresponding plots additional commentsquestions in your kl robustness loss you have $\mathcal{L}_{rob}^{KL} = \frac{1}{n}\sum_i KL\left(g_f(x_i;\theta_f) \,\|\, g_f(x'_i;\theta_f)\right)$ my understanding is that $g_f$ is your feature extractor and $g_f(x_i), g_f(x'_i) \in \mathbb{R}^d$ are not probability vectors this is while $KL(\cdot, \cdot)$ is a dissimilarity measure defined only for probability distributions could you comment on this also wouldnt a simple mse work fine here this might be a matter of style but it could be helpful to add equation numbers to your equations typos page 3 second paragraph belongs to the the family page 5 second to the last paragraph the initial learinnig overall assessment the paper is wellwritten and easy to follow while the main idea of learning domain invariant features is simple its use in the context of robustness against adversarial attacks seems to lead to a significant performance boost the experiments and in particular the ablation study section is insightful and aligned with the papers claims i think the paper is above average and therefore i would like to vote for its acceptance docsepin this paper dann is leveraged to generate domain invariant and robust feature representation the authors claim that the proposed method outperforms other methods when the target domain is the adversarial examples
the paper is easy to follow and the idea is straightforward the experiment section is not comprehensive only a few methods are included in the comparison more recent sota methods are missing the paper is easy to follow and the idea is straightforward the experiment section is not comprehensive only a few methods are included in the comparison more recent sota methods are missing docsepthis paper proposes dial to learn domaininvariant representations for clean and adversarial examples to improve model robustness and clean accuracy the main idea is to treat the problem as a domain adaptation problem by considering the data shift between adversarial and clean distributions and then use the generative adversarial network gan principle to tackle this data shift pros 1 the paper is clearly written and easy to follow 2 the motivation behind is very intuitive 3 the paper conducts extensive experiments including multipleellpnorm adversarial perturbations and unseen corruptions cons 1 my biggest concern is the novelty of this paper though showing promising performance the idea of learning a feature extractor to minimize the distance between adversarial and clean distributionsdomains has been widely studied and adopted before in the domain adaptation da literature in this paper the author just simply introduced several da loss terms and used the gan framework to learn a more robust model the experimental results are persuasive however the approach is too simple and not novel enough 2 some minor problems i cannot find your paper in the autoattack leaderboard as you mentioned at the first line in page 7 i think the paper conducts extensive experiments to demonstrate the effectiveness of the proposed method including some interesting ones eg robustness against unseen perturbations transfer learning i like them however the novelty of this paper is insufficient and using the domain adaptation principle and learning invariant representation has been widely studied therefore i vote for rejection docsepthis paper proposes a domain invariant adversarial training dial method which learns the feature representation that is both robust and domain invariant apart from the label classifier the model is equipped with a domain classifier that constrains the model not to discriminate between natural examples and adversarial examples thus achieving a more robust feature representation extensive experiments on image classification benchmark the robustness compared to other stateoftheart methods this paper proposes a simple and effective adversarial learning method dial which brings the idea from domain adaptation for robust representation strengths 1 this paper is wellwritten and easy to follow 2 it conducts various experiments to demonstrate the effectiveness of the proposed method ranging from robustness to whitebox attacks blackbox attacks unforeseen adversaries unforeseen corruptions and transfer learning the experimental results are solid and technically sound weaknesses 1 from my point of view the novelty of the methodology is not enough as the domain classifier and the gradient reversal layer are the same with those methods in domain adaptation such as 1 2 to better understanding the reversalratio hyperparameter r can the authors provides the robustness under different values of r 1 yaroslav ganin and victor lempitsky unsupervised domain adaptation by backpropagation in international conference on machine learning pp 11801189 pmlr 2015 overall this paper proposes a simple and effective adversarial learning 
method dial for robust representation learning extensive experiments are conducted to demonstrate the effectiveness of the proposed method and provide solid results however the novelty of the paper is not significant as similar methodology exists in domain adaptation
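Since the reviews repeatedly point to the DANN-style domain classifier and gradient reversal layer of Ganin & Lempitsky as the core mechanism, here is a minimal PyTorch sketch of that building block as it is conventionally implemented; the reversal strength lamb is only assumed to correspond to the paper's reversal-ratio hyperparameter, and the usage comment is illustrative rather than the authors' code.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lamb on the
    backward pass, so the feature extractor learns to fool the domain classifier."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

# usage sketch: features of clean and adversarial images pass through the reversal
# layer before a small domain classifier (labels: 0 = clean, 1 = adversarial);
#   domain_logits = domain_classifier(grad_reverse(features, lamb))
#   domain_loss = torch.nn.functional.cross_entropy(domain_logits, domain_labels)
```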
### Summary:
|
the paper describes an adversarial training approach that in addition to the commonly used robustness loss requires the network to extract similar representation distributions for clean and attacked data the proposed method is inspired by domain adaptation approaches that require a model to extract domain invariantagnostic features from two domains although the experimental results are solid and technically sound the novelty of the methodology is not enough as the domain classifier and the gradient reversal layer are the same as those methods in domain adaptation such as unsupervised domain adaptation by backpropagation on the other hand more recent sota methods are missing and only smaller scale datasets are used for evaluation during the discussions the major concerns from three reviewers are novelty i totally agree that the simplicity of the method should be a virtue however the idea of domaininvariant representation learning is already well established and its application to adversarial training is quite intuitive to the community also the similar methodology already exists in domain adaptation according to the toptier conference culture in the ml community what is most valuable is the novelty and insight not the performance in the end i think that this paper may not be ready for publication at iclr but the next version must be a strong paper
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
authors developed a method and training procedure for manifold clustering problems the proposed solution inspired from information theoretic methods namely maximum rate reduction authors also support their claims with empirical results manifold clustering is a very hard problem authors propose a solution for nonlinear manifold clustering the aim of the paper is to learn a representation space in which the manifolds are colinear the problem is quite interesting and there are practical applications however i have several concerns i believe two highly relevant papers are missing in the paper and these papers are 1 clusteringfriendly representation learning via instance discrimination and feature decorrelation iclr 2021 2 representation learning for clustering via building consensus arxiv may 2021 i am completely aware these papers dont consider manifold clustering however they can be used in this setting as they are i will lay out my concerns by referring to these two missing papers authors rightfully highlights the importance of assumptions and constraints in their paper one of the proposed constrained is orthogonality section 31 both a and b utilizes orthogonality 1 directly use originality in their loss and 2 uses random projections authors are trying to learn one representation space for different manifolds similar objective can be achieved by consensus loss of 2 the objective given in eq 3 is interesting in my understanding objective wants to condense each manifold as much as possible summation term and all manifolds should cover the largest available space first term i think these conditions will be satisfied if all manifolds are equidistant from each other this can be desired property however how realistic it is for example lets consider the following classes car bus motorbike bicycle lion mountain lion whale lion why should each manifold be equidistant from each other one would like to see semantic structure in the learned representation would you please explain how will eq 3 type loss will handle this issue the proposed loss depends on logdet please see eq 2 as far as i know to calculate such a quantity one needs to calculate the eigen values of the covz i am assuming that the authors are calculating this quantity for each batch since eigen values needs to get calculated by another algorithm eg gaussian elimination o does this mean the proposed loss cannot generalize to large batch sizes some of the selfsupervised learning methods are trained by using 8192 batch size o how can one compute logdet in multiple gpus does one need to do this computation on one gpu experimental results would you please compare your cifar10 cifar20 and stl10 results with 1 and 2 please note that the results shown in 1 and 2 may be using different architectures as far as i remember 1 use resnet34 and 2 use resnet18 both 1 and 2 are endtoend training methods especially 2 supplies better results for resnet18 setting would you please elaborate the advantage of multistage training given that endtoend are suppling similar results would you please extend your empirical studt to imagenet10 and imagenetdogs would you please supply some ablation studies eg impact of lambda parameter i am looking forward to author responses and i am willing to change my assesmentscore if the reponses isare satisfactory although authors proposed an interesting solution to a very hard problem i believe proposed method and experimental work needs to be improved especially taking in to account missing references and supplying a detailed 
discussion docsepthis work proposed a general manifold clustering algorithm called neural manifold clustering and embedding nmce which utilize maximum coding rate reduction mcr2 as the objective function and data augmentation to enforce constrains in the implementation stage given that even the toy experiment is difficult to optimize with the full nmce objective a multistage training procedure is applied with the first stage actually trying to optimize the total coding rate tcr which is another kind of selfsupervised learning objective claimed in this paper on synthetic and realworld datasets coil20 coil100 cifar10 cifar20 and stl10 nmce achieved comparable and sometimes better results compared to baseline methods or some alternative manifold clustering methods listed in the paper strengths originality the idea of combing maximum coding rate reduction with data argumentation techniques is interesting and seems reasonable in the experimental parts this work have results on a number of real data sets which can be considerate a fairly enough set up to support the effectiveness of a newly proposed algorithm weakness overall it seems nmce is like a relative less significant changes from the work from recent proposed maximum coding rate reduction mcr2 yu et al 2020 with some data argument techniques and there is no interesting theoretical analysis behind the proposed nmce work too from this paper alternative methods in the experimental results section 4 there are a number works are cited and compared to nmce but still lack some important references for example in table 1 for coil data set results from linear methods like ssc and some deep clustering methods are listed but there is a whole field of nonlinear manifold clustering is not mentioned here reference 1 and 2 listed below are two methods in the field of nonlinear manifold clustering and some indepth discussion or comparison to them are quite helpful for reviewer to evaluate the contribution from nmce indeed there is one survey paper about nonlinear manifold clustering is cited in section 2 but this is not enough to the toy example double spiral it seems nmce can get 100 accuracy as shown in appendix a however only one 100 number is kind of less clear as we can imagine that if keep increasing the scale of gaussian noise in the data argumentation step we should except this 100 to go down or maybe we can say the key is how to define a small amount of noise as stated in the paper and should be nice to see the performance from alternative methods on toy examples too to gave people more insights ie multilinear subspace clustering methods clearly not work well for this toy example as we know reference 1 souvenir r pless r manifold clustering the10th international conference on computer vision iccv 2005 2 dian gong xuemei zhao grard g medioni robust multiple manifold structure learning icml 2012 overall i feel this is a reasonable submission but seems not good enough for iclr publication so i gave marginally below the acceptance threshold docsepthe paper considers the problem of clustering a dataset corresponding to a sampling of a union of nonlinear manifolds where each cluster shall correspond to an individual manifold the proposed method leverages previous results on subspace clustering via neural networks and a penalty function based on concordant classification of samples and their augmented versions the paper provides an intuitive extension of the mcr2 embedding approach by regularizing it with a concordant data augmentation embedding 
constraint the authors reason that such a constraint is needed to restrict the large flexibility of neural networks however the paper is imprecise in many of the statements made a reference to additional geometric constraints that make the manifold clusters identifiable essentially reduces to proximity of samples and their noisy versions in the manifold embedding lclst is described as some objective function that will force f to cluster the dataset but again there is no practical example of such a function the concept of a constraint functional df is also never specified for a practical setting in the appendix the authors say the performance of mcr2 is rather poor which is expected based on our understanding this does not illuminate a reason for the reader or make it clear what ctrl brings in it is not clear why only a partial subset of the code that implements the experiments is provided with thee rest to be held for release upon publication also there are no descriptions of the computation cost of the different approaches compared including the proposed one the fact that the augmentationregularization used in the numerical experiments is tuned by hand raises further questions about the robustness of the proposed approach the authors should provide some discussion on this aspect of the implementation this can be done in the appendix if needed the training procedures used in section 42 appear to be adhoc can the authors comment on their reasoning additionally some exclusions in the numerical comparison seem arbitrary as well if a method in the literature addressed the same problem considered here why not compare against it if there are differences in approaches they can still be mentioned to provide a contrast why is there no numerical comparison in the experiments shown on the appendix there is a repeated typo eg in fig 2 and fig a3 captions principle components principal components while the extension of a supervised multimanifold embedding to the manifold clustering problem is intuitive and interesting there are several imprecise concepts underlying its description and the carefully crafted implementation muddles whether the proposed approach can have wider applicability docsepthe paper neural manifold clustering and embedding proposes methods for utilizing a deep neural network for simultaneously learning the clustering of samples and a nonlinear embedding into a lowdimensional latent space where each cluster representation is simple the main contributions of this paper are 1 proposing a novel loss function that combines the maximal codingrate reduction principle and selfsupervised view of how to use dataaugmentation in clustering 2 proposing an unsupervised method for solving the problem from scratch using a deep neural network as embedding operation with parameters learned to maximize codingrate reduction while preserving class membership across augmented samples 3 proposing a supervised method for finetuning the results of the above method by minimizing the codingrate reduction of each class while maximizing the overhaul codingrate reduction and preserving class membership across augmented samples paper strengths the paper suggests a loss function for learning manifold clustering and embedding by combining two ideas from the literature first building on a technical paper introducing the maximal codingrate reduction principle which suggests that to learn a subspacestructured representation one needs to maximize the difference between the coding rate of all clusters pooled and the sum 
of coding rates for each cluster second building on the insight shared with related works detailed in the clustering with data augmentation section that dataaugmentation provides an important signal for manifold embedding as preservation of class membership across augmented samples replaces preservation of local geometry from classic manifold learning while both ideas are not new per se it is interesting that the current paper outperforms both lines of work by using a combined loss function named nmce as the paper initials suggesting a fruitful synergy between those two ideas the optimization target allows using any deep network as a manifold embedding function in either an unsupervised or supervised manner the unsupervised method using a simpler loss function named tce nicely resembles recent selfsupervised methods while the supervised method is used as finetuning of the unsupervised method and seems to work well paper weaknesses the paper structure does not follow the logical structure presented in my summary while the combined loss function is well described and justified the two algorithms are presented as an afterthought it is not clear why the unsupervised method is needed while the supervised method is not presented at all restate the paper contribution to make the distinction between the loss function the unsupervised and the supervised methods present the unsupervised method by itself rather than as an empirical note in section 36 and elaborate on the comparison to selfsupervised methods currently in additional results appendix explain why the unsupervised method is needed my guess is this is due to zeroing out of the nmce objective if class assignments are randomly initialized the supervised method is not adequately described which seems like a major omission if i correctly filled in the blanks a training set was used for minimizing the nmce objective function using the groundtruth class assignment ie only the embedding is learned then clustering success is measured on a testset with some unknown initialization of class assignment this initialization may be important for the results if it is not random eg by using the training set and finding the nearest neighbor no motivation is given for the role of such work in the ml landscape the paper starts with a technical assumption a union of lowdimensional nonlinear manifolds rather than from an intuitive statement that 1 each sample comes from some class 2 classes representation has a meaningful structure conceptualized as a lowdimensional nonlinear geometry 3 it is natural to ask if classmembership and a lowdimensional geometry can be learned simultaneously important principles are hidden in technical definitions and the authors should aim to provide more intuition most notable are the mysterious references to mcr2 if gaussian coding rate is a quantification of variance maximum coding rate reduction mcr2 supports maximizing the total variance while minimizing variance within clusters similarly the constraint functional df is presented in section 32 as a completely generic method and only in section 35 do we understand how to implement it by enforcing similar clustering of dataaugmented samples i find the generic presentation unhelpful while the use of dataaugmentation to uncover manifold structure central to the paper the main method presented in the paper does not seem to work well on synthetic data sets and the authors do not share the details summarize the difference between ensc and the full loss functions and explain why for 
simple tasks the features can be directly clustered with standard linear sc techniques such as ensc also provide the poor result of the full algorithm smaller issues 1 the vogue statement this work shows that many constraints can be implemented by data augmentation should be restated through the lens of selfsupervised learning eg manifold local structure can be learned by enforcing constant classassignment on dataaugmented samples 2 highlight the difference from the related work in the clustering with data augmentation section and state in advance that current work outperforms them 3 the statement we need augmentations that perturbs style information but preserves content information so that the clustering will be based on content but not style seems weird as this is almost the definition of data augmentation 4 the statement this indicates that nmce can truly leverage the nonlinear processing capability of deep networks seems false in the context of section 41 where the tcr objective is used and not the full nmce objective 5 in section 421 the role of stage 2 reinitialize the last linear projection layer and add a cluster assignment layer freeze parameters in the backbone network and train the two new layers with full nmce objective is not clear if i understand correctly you dont show results of stage 2 without stage 3 and you dont justify that stage 2 is needed you should provide some evidence stage 2 is helpful or if not skip it for the sake of simplicity 6 i dont understand the discussion which makes most of section 421 about using representations at different levels of the embedding network and with or without averaging it seems the authors had several options and made their choice so this section and table 2 can go to the additional results appendix to justify the choice the paper combines two previous lines of work into a novel loss function which is then used in both supervised and unsupervised manners and achieves stateoftheart results it is currently below the acceptance threshold because the presented algorithms description and justification are lacking docsepthis paper enhances the performance of maximum coding rate reduction mcr2 objective in subspace feature learning by incorporating a strategy of adding a manifold learning inducing constraint aiming at a novel method for general purpose manifold clustering they claim that the new model significantly outperforms autoencoderbased deep subspace clustering and achieves stateoftheart performance on several important benchmarks fundamentally the work is based on the existing maximum coding rate reduction mcr2 objective yu et al 2020 through their analysis to achieve the goal of manifold learning in unsupervised fashion in principle the authors emphasize the importance and necessity of geometryawareness constraint to be added upon the mcr2 objective the constraint is surprisingly implemented by the simple data augmentation for example of adding small noises to the given data and then apply the intuition that the learnt feature embedding of augmented data points should be similar if they are generated from the same point as the geometric constraint i think this is a very interesting and useful way to enforce the local structures in the data the authors show a toy example with their code for the performance of the proposed principle although no code for reproducing large experiments is provided it is hard to say what performance could be given the high complexity of data while a simple dataaugmentation strategy is used although the 
past contrastive learning objective works well in practice of course there is not much theoretical guarantee for the introduced constraint strategy the paper can be further improved by increasing readability for example i dont think definition 1 is a good way to introduce the manifold clustering and embedding task a definition should be used for the rigorous concise and accurate mathematical concepts for example the perpendicular may be written as a definition can you give some discussion on the dimensions d1d2dn what is their role in the algorithm should they be known or can be learned from the algorithm otherwise there is no need to present them it seems the algorithm has nothing to do with them you may add how covz is handled in the algorithm what is the algorithm complexity given that the model parameters are inside latent variable z the proposed method builds upon the existing maximum coding rate reduction mcr2 objective with data augmentation as a constraint to enforce manifold learning it seems that the performance is satisfactory compared with other stateoftheart methods
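for readers unfamiliar with the coding rate quantities discussed in the reviews above the following is a minimal numpy sketch of a gaussian coding rate and an mcr2 style objective computed from a feature matrix and soft cluster assignments the function names the epsilon parameter and the toy data are illustrative assumptions and not the authors implementation

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Gaussian coding rate R(Z) = 1/2 logdet(I_d + d/(n*eps^2) Z^T Z) for row-wise features Z of shape (n, d)."""
    n, d = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + d / (n * eps ** 2) * Z.T @ Z)[1]

def mcr2_objective(Z, Pi, eps=0.5):
    """MCR2-style objective: rate of all features pooled minus the
    assignment-weighted sum of per-cluster rates (to be maximized)."""
    n, d = Z.shape
    total = coding_rate(Z, eps)
    per_cluster = 0.0
    for j in range(Pi.shape[1]):
        w = Pi[:, j]                      # soft membership of each sample in cluster j
        nj = w.sum() + 1e-8
        Zj = Z * np.sqrt(w)[:, None]      # weight feature rows by membership
        rate_j = 0.5 * np.linalg.slogdet(np.eye(d) + d / (nj * eps ** 2) * Zj.T @ Zj)[1]
        per_cluster += (nj / n) * rate_j
    return total - per_cluster

# toy usage: 200 unit-norm features in 8 dimensions, soft assignments over 4 clusters
Z = np.random.randn(200, 8)
Z /= np.linalg.norm(Z, axis=1, keepdims=True)
logits = np.random.randn(200, 4)
Pi = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
print(mcr2_objective(Z, Pi))
```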
### Summary:
|
the paper received a majority vote of rejection although the author response successfully convinced one reviewer to increase his or her score from 5 to 6 i have read all the materials of this paper including manuscript appendix comments and response based on collected information from all reviewers and my personal judgement my recommendation on this paper is rejection here are the comments that i summarized which include my opinion and evidence presentation the presentation of this paper needs huge efforts to further improve several reviewers and i suffered from difficulties to understand the motivation and challenges of this paper it seems that section 35 is the novelty part of this paper but i failed to catch their points contribution two contribution points were claimed in this paper 1 the combination of data augmentation and mcr2 without knowing the challenges in this paper it is difficult to evaluate this point based on my current understanding the presentation heavily affects my understanding this point is very incremental 2 the proposed method achieved stateoftheart performance this point is problematic i will explain below related work the authors failed to notice a huge body of manifold learning work and contrastive clustering work some stateoftheart methods are not included for comparisons experimental evaluation 1 lack of stateoftheart methods 2 no standard deviation 3 the experimental results are incomplete and 4 it seems that the proposed method only achieved high performance on cifar10 and cifar20 i am not the person who requests the authors achieve the best performance on all the datasets everyone knows no algorithm always wins but the authors should provide some analyses on the inferior performance for better understanding the model no objection from reviewers was raised against this recommendation
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
1 the new setting is potentially very useful since the computation cost of retraining a big model is big and how to reuse these published big models is an interesting problem 2 the results are better than the baseline methods on real and synthetic data 3 the method to generalize the task embedding is interesting the method learns to handle the worstcase of perturbed task embeddings 1 the finetuning in section 51 according to my understanding finetuning should be done on pretrained models rather than randomly initialized models some clarifications are needed here authors may consider doing experiments with finetuning baseline as such best performance with single pretrained model and average performance of all pretrained models 2 the improvement brought by the proposed method is kind of minor with 13 and 15 which seems to suggest that the vanilla fusion model is enough can you provide results on the synthetic dataset without dro 1 the evaluation dataset as far as im concerned the proposed method needs lots of pretrained models since it needs to use distribution information if the number of tasks is very small the distribution information can be very weak and can be difficult to use some published big models eg gpt or bert do you have some possible solutions or do we need metalearning here since the number of pretrained models can be very small eg 1 or 2 2 the hyperparameter of the ball selecting delta can be challenging could you please provide more details docsepthe idea of using dro to fuse multiple pretrained models is novel my main concern is how dro helps with the generalization and adaptation 1 specifying the constraints and setting the parameters is usually essential in dro which would affect how tight the adversarial player is constrained the delta parameter in the constraints seems to be missing in 7 and 8 which may be due to the primal dual transformation but it is not intuitive that the delta term does not affect the dual formulation at all 2 in general how dro encourages the diversity is not clear dro in theory looks at the worst case whether the worst case indicates better performance depends on the constraints and inductive bias but according to the experiments the gamma parameter does not affect the result a lot this is kind of contradictory with the motivation of introducing dro since the main contribution is the application of dro in meta learning the paper probably should focus more on the intuition behind this idea and why it would work the paper kind of assumes the readers to have knowledge about dro and only refers to previous work about the details but the details are essential to justification of the proposed method one important ablation study is probably varying different target task difficulties compared to the source tasks this would be useful to further test whether parameters in dro makes a difference docsep1 the ideas in this paper are novel and meaningful dfl2l meta learns from pretraining models of multiple tasks without using their data and labels this is different from and more challenging than the general meta learning setting existing works pay little attention to this setting on the other hand it is good that dfl2l inherits offline and online settings from taskbased data meta learning 2 the empirical study shows the advantages of the proposed method compared with other meta learning methods and model fusion methods the results of regression and fewshot classification task show the effectiveness of the proposed method in addition the authors show ablation 
experiments on the hyperparameter tuning and the usage of distributionally robust optimization 3 the paper is well written and all the figures are well illustrated 1 the reason for using the distributionally robust optimization dro framework needs to be further clarified we suggest adding an appropriate exploration of dro application to classical metalearning methods or evidence that dro is more suitable to dfl2l than to databased meta learning 2 the method is still a strong baseline worth reporting after removing the dro module as an effective solution to a new problem the method without dro module deserves more empirical research 3 the task embedding method is somewhat ambiguous for example how to obtain the embedding vector from the task in a principled way some descriptions in the experiment are not clear either 4 the usage of other baseline methods under dfl2l setting is not clear many methods for comparison are designed under different settings so it is not trivial to adopt them under the setting of dfl2l 5 more visualization results on model embedding and hard modelembedding mined by dro module should be added please refer to q4 docsep1 this paper proposes a new setting of datafree metalearning which is important for realworld applications 2 for the datafree metalearning tasks this paper proposes a distributionally robust optimization framework to learn a blackbox model to fuse and compress all the pretrained models into a single network 3 the proposed method is sufficient the paper provides offline online and testing algorithms 4 the proposed methods are evaluated on cifarfs and miniimagenet and achieve very good improvement no obvious weakness please see q3 for details
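as a rough illustration of the worst case objective over perturbed task embeddings that the reviews refer to here is a minimal numpy sketch that approximates the inner maximization of a dro style objective by sampling perturbations inside an l2 ball the loss function the radius and all names are hypothetical placeholders rather than the papers formulation

```python
import numpy as np

def task_loss(theta, embedding):
    """Placeholder per-task loss; stands in for the fused model evaluated
    under a given task embedding (both arguments are toy vectors here)."""
    return np.sum((theta - embedding) ** 2)

def dro_loss(theta, embedding, radius=0.5, n_samples=64, seed=None):
    """Sample-based approximation of a robust objective: the worst-case loss
    over task embeddings within an L2 ball around the observed embedding
    (a crude stand-in for the inner maximization of a DRO formulation)."""
    rng = np.random.default_rng(seed)
    worst = task_loss(theta, embedding)
    for _ in range(n_samples):
        delta = rng.normal(size=embedding.shape)
        delta *= radius / (np.linalg.norm(delta) + 1e-12)   # scale onto the ball surface
        worst = max(worst, task_loss(theta, embedding + delta))
    return worst

theta = np.zeros(16)
emb = np.random.randn(16)
print("nominal:", task_loss(theta, emb), "robust:", dro_loss(theta, emb))
```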
### Summary:
|
meta review dfl2l meta learns from pretraining models of multiple tasks without using their data and labels all reviewers think that the ideas in this paper are novel and meaningful the setting is different from and more challenging than the general meta learning setting existing works pay little attention to this setting on the other hand it is good that dfl2l inherits offline and online settings from taskbased data meta learning the main concern of r2 is how dro helps with the generalization and adaptation the authors dispel the concerns of r2 through rebuttal and r2 has improved the score i recommend acceptance
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
in the paper the authors study stable architectures for rnns on the theoretical side the authors present a series of conditions such that a weight matrix of an rnn is contractive on the modeling side the authors propose rnn architectures that have contractive weight matrices the proposed methods are evaluated on benchmark datasets including sequential mnist permuted mnist and sequential cifar10 the theoretical results seem interesting although not very surprising however i think there is disconnection between the theoretical results and the proposed model not all theorems are relevant to the proposed model for the proposed model section 31 i might have missed something but i fail to fully understand the model the presentation could be improved to help the readers for example the mentioning of subnetworks at the beginning of section 31 is not definedexplained its unclear to me how the subnetworks are combined i can only infer from the recursive construction in the title and figure 2 that the resulting weight matrix is a block matrix furthermore the idea of parametrizing orthogonal weight matrices by exponentiating skewsymmetric matrices is not novel and has been explored in exprnn 1 the writing of the introduction section could also be improved instead of discussing alphago and modules in evolution the reader might benefit from a more thorough literature review of the rnn trainability and longterm dependence 1 lezcanocasado mario and david martnezrubio cheap orthogonal constraints in neural networks a simple parametrization of the orthogonal and unitary group international conference on machine learning pmlr 2019 the theoretical results in the paper seem interesting however the presentation of the proposed model is not clear and the model itself does not seem novel overall i think the paper needs improvement to meet the acceptance threshold docsepthe authors studied contraction properties of continuoustime recurrent neural networks they further showed that a network of provably stable rnns net of nets can be trained to reach competitive performance on several benchmarks including sequential cifar10 even when only connections between modules are trained strength how to assemble a network of rnns is an interesting problem the theorems on contraction properties are helpful to people thinking about provably stable rnns weaknesses 1 section 321 is pretty dry to read reporting results from many individual axb networks seem unnecessarily 2 the authors showed performance comparison with other types of networks in table 1 i think it would be quite informative to show performance of networks where everything is kept the same except that the rnns are no longer provably stable 3 it would also be good to know what happens if all connection weights are trained not just the connections between modules does the performance actually decrease despite having more parameters 4 the provably stable part is kind of separated from the training modular network part how closely are they related is having stable rnn modules particularly important for sparsely connected modular networks overall this is an interesting paper that takes a less common approach to rnns provable stability and netofnets the results are at places more difficult to read but overall it is clear i want to add that i cannot evaluate whether the mathematical derivations are correct docsepthis paper is primarily a theoretical contribution to the construction of assemblies of recurrent neural networks we know that combinations of learned modular 
components can be powerful and far more tractable than learning bespoke models from scratch particularly in applied domains eg alphago yet so far we have no theoretical guarantees that these combinations will actually remain stable this paper develops the theory behind provablystable combinations of rnns using weight constraints and feedback mechanisms then using fixed rnns generated according to these constraints leaving the connections between them as antisymmetric learnable parameters the authors show that their sparse combination network is able to achieve sota performance on sequential image classification benchmarks with far fewer learned parameters and the previous stability guarantee strengths i thought that the empirical results were rather convincing for what is primarily a theoretical contribution the authors first thoroughly investigate various permutations of their modular sparse combination network framework rnns vs size of each using absolute value weight constraints and do another investigation of their alternative svd weight constraint network which doesnt perform as well or train as quickly most importantly they then show that they can best sota algorithms on some of the common albeit easier benchmarks in the field even under and perhaps because of these constraints the theoretical contribution is quite powerful there has been a lot of recent work in networks with many individual recurrent components such as the aforementioned alphago or the more general recurrent independent mechanisms rims framework but for the most part they rely on intuitive explanations and empirical results over theoretical guarantees clearly specialized rnn modules can be quite powerful but rnns are notoriously unstable and difficult to learn and learning such models endtoend is tricky if we can apply these constraint conditions and still achieve good performance which seems like it could be realistic particularly in the absolute value constraint case then we can develop sets of useful modules and mixandmatch to the task in question this paper doesnt answer all of the intermediate questions but the stability analysis is a key step the proofs in the appendix are welldone and easytofollow given a sufficient math background weaknesses this paper is very dense and difficult to follow it took me a few reads to really understand the value of network stability and how its achieved in this case the appendix is a mandatory read as are some of the references none of the use cases are particularly intuitive i think i would have liked to see a graphical representation of the sparse combo network rather than the weight matrices in figure 2 some pseudocode for the algorithms tossed in the appendix and maybe an example case of an unstable network assembly diverging i also feel like my familiarity with alphago and other methods gave me more of an insight into how this would help in practice than the actual paper did as much as i liked the empirical results that were provided theyre all of a kind sequential image prediction i would have liked to see at least one application in a different domain nlp rl continuous control etc overall i would accept this paper although it was difficult to follow and required a lot of consultation with the literature i do ultimately think that this is a direction that dl algorithms are going in and that the theoretical and practical results from this work could be quite powerful to make the paper better i would like to see some results in a different domain and more effort towards 
improving the readability too often valuable theoretical works go underutilized because theyre difficult to understand or dont seem relevant to the empiricists and engineers who could build on them docsepthe submission proposes new theorems showing the stability of a class of rnns further by combining these rnns into hierarchical and feedback superstructures the submission achieve sota performance on a number of tasks theoretical results the theorems 15 constitute an evolutionary step in the understanding the conditions of stability the authors also show a counterexample for the common belief that linear contraction leads a sufficient condition for nonlinear stability these results are then partly used to construct provably stable rnn combinations experimental claims the experimental section while interesting seems to lack a main takeaway also some very dubious choice of reporting in the table the authors claim that for the result shown on the tab 1 they run the permmnist trial 4 times and the results fall between 9665 and 9694 i was frankly shocked to find that they choose to only report 9694 on tab1 running the same experiment multiple times and only reporting the best case scenario is not good practice and leads to misunderstanding at best i would recommend that the authors report mean variance if one were to only report the best case one could get much better performance than is achievable on average by running the experiment many many times some claims are backed up by only a single data point for example the claim that increased modularity benefits performance to some point is only backed up by the fact that 44x8 performs better than 22x16 in sec 321 to draw a significant conclusion and demonstrate a trend the authors can perhaps look at 50x7 and 39x9 in general more data points and error bars would help convince the reader that the conclusions are real and not flukes the results of performance vs network size and performance vs modularity for sec 31 and 32 are opposite each other in 31 increased size makes the network better monotonically but in 32 it is inverted u shape similarly in 32 modularity makes the performance better monotonically but it is inverted u in 31 what is the conclusion to be drawn here in general the experiment section is a little hard to read and can use a summary of the main conclusions and clearly demarcated paragraphs and sections of the experiment that demonstrate each point clarity and other minor points the paper is at times very clear and at times very confusing for example the discussion of stability and contraction are clear but then in the paragraph at the top of page 3 the authors use the symbol g for two different thing in the same paragraph theorem 7 should be slightly reworded so that it is clear that the first inequality is a condition and not a statement this is rather obvious in hindsight but for a new reader it is very confusing for denoting multiplication i would suggest using times instead of x eg 22x16 etc why is there a section 321 instead of just 32 i would suggest an overall reread of the paper to maximize readability the paper constitutes an evolutionary step in understanding and designing stable rnns the theoretical results are novel and noteworthy unfortunately the experimental results lack a clear conclusion and at times do not follow best practices ie reporting only the best run out of many
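the exprnn style parametrization mentioned in the first review can be sketched in a few lines of numpy and scipy an unconstrained matrix is mapped to an orthogonal recurrent weight by exponentiating its skew symmetric part this is a generic illustration of that known technique under assumed names not the submissions exact construction of stable subnetworks

```python
import numpy as np
from scipy.linalg import expm

def orthogonal_from_skew(A):
    """Map an unconstrained square matrix to an orthogonal one by
    exponentiating its skew-symmetric part: W = exp(A - A^T).
    Orthogonality of W keeps the linear part of the recurrence non-expansive."""
    S = A - A.T            # skew-symmetric: S^T = -S
    return expm(S)         # matrix exponential of a skew-symmetric matrix is orthogonal

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
W = orthogonal_from_skew(A)
print(np.allclose(W.T @ W, np.eye(6), atol=1e-8))   # True: W is orthogonal
print(np.max(np.abs(np.linalg.eigvals(W))))          # eigenvalue moduli are all 1
```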
### Summary:
|
in the context of recurrent neural networks the motivation of the paper is to explore the space between fully trained models and almost not trained models eg echo state networks using a formal approach in fact a modular approach has proven to be very successful in many practical applications and in addition brain seems to adopt this strategy as well the addressed theoretical issue is stability of the network ie the network implements a contraction map specifically it is assumed that a network is composed of a set of subnetworks that meet by construction some stability condition and the problem is to design a mixing weight matrix interconnecting the latent spaces of the subnetworks able to give stability guarantees during and after training some novel stability conditions are proposed as well as two different approaches to design a successful mixing weight matrix the original submitted paper was not easy to read and after revision major problems with presentation have been resolved although the current version looks more like an ordered collection of resultsstatements than a smooth and integrated flow of discourse the revision has also addressed some concerns by reviewers on the role of size and sparsity of the modules as well as the sensitivity of the stabilization condition on the mixing weight matrix has been experimentally assessed obtaining interesting results overall the paper reports interesting results however the novelty of the contribution seems to be a bit weak eg stability conditions on recurrent networks although different from the reported ones were already presented in literature also the idea of exploiting in one of the proposed models the fact that the matrix exponential of a skewsymmetric matrix is orthogonal to maintain the convergence condition during training is not novel moreover the experimental assessment does not provide a direct comparison under the same architecturallearning setting of the novel stability results versus the ones already presented in literature empirical results are obtained on simple tasks using datasets with sequences of identical length and relatively small networks which limits a bit the scope of the assessment as well as it is not clear if the observed improvements where obtained are statistically significant especially when compared with results obtained by networks with the same order of parameters the quality of the assessment would increase significantly by considering datasets with sequences of different lengths and involving more challenging tasks that do require larger networks
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper augments the css and ova loss functions used for learning to defer such that they no longer underfit the base model particularly in problems with many possible labels this is done through the introduction of two posthoc estimators which can be used to take a calibrated model where there is zero cost in consulting an expert and adapt it for the setting where there is a positive cost for deferring to an expert there are both theoretical and experimental results to support the contributions originality the particular algorithms introduced in the paper appear to be novel posthoc procedures such as this one do occur for uncertainty calibration but this is a novel application of that approach technical quality the technical quality is very good with the experiments wellthought out and providing very compelling results since this work is meant to be used on top of css or ova it really needed to be compared with those methods for accuracy or loss the proofs for calibration of the loss function were also clear some of the results are left a little openended when calibrating with a rejector loss there are many to choose from and there is not much insight as to why one loss should be preferred over another clarity the entire presentation of this paper is written very clearly and easy to follow both in the main presentation and in the appendix it was easy to understand what was being done in each experiment and the proofs were easy to follow as well significance this work is very similar to the loss of verma and nalisnick and while it is presented as a generalisation of that work its not immediately clear that its a significant contribution by itself i nevertheless suspect the posthoc methods introduced in the paper would be useful for others to build on top of the authors do not adequately address the limitations and societal impact of the work while i dont think there is significant ethical or negative societal risk with this work the paper would have been strengthened by some discussion of the limitations of the presented methods docsepthis paper tackles the problem of jointly learning a classifier and a rejector where the rejector can choose whether the classifier or the human should predict this problem is denoted as learning to defer and the authors tackle the problem when the cost of deferring to the expert is a constant c0 plus the misclassification error of the expert the authors show that previous stateoftheart approaches fail when c0 is positive by making a connection to label smoothing they then propose two training strategies for the rejector that combine previous surrogate approaches with new machinery they evaluate their method on synthetic and semisynthetic data cifar10 with simulated experts and show that they outperform previous methods originality the paper has novel analysis of previous surrogate approaches for learning to defer l2d when the cost of deferral has an additional penalty c0 the paper proposes a new method to handle this case which leverages previous methods namely ova and adds an extra term in the loss function to form it this new surrogate has limited novelty and doesnt address the case where c0 is potentially a function of x a more realistic extension the related work is very well cited quality i checked the proofs for the theoretical results and the claims in the paper and they are sound the empirical results seem reasonable but no code was provided to verify the results furthermore there are more baselines in the literature that should have been performed most notably
the confidence baseline where one trains 1 a classifier using cross entropy 2 a model for expert error and then 3 defers if classifier error is higher than expert error plus c0 one important concern is that the baselines of mozannar and sontag and that of ova optimize for the cost sensitive loss while the results shown in the experiments report accuracy without the penalty c0 furthermore the paper only has experiments with synthetic human experts which is not sufficient to back up claims for humanai performance clarity please make section titles and the title have each word start with an uppercase letter the paper is well written and easy to follow significance i think the paper makes an important contribution in showing that previous surrogate approaches may underfit when c0 is positive however one concern is that the paper doesnt show that the method optimizes for the objective in equation 1 but only that the baselines underfit the target i think if the authors can answer the questions below in the positive then my overall evaluation may be raised substantially the authors address the limitations adequately docsep the paper addresses the problem of learning to defer in which a machine learning model learns when to defer the decision on an instance to an expert based on an estimate of the experts cost existing stateoftheart solutions are conceived for settings in which the expert cost is simply her probability of error while realworld settings typically involve a fixed cost for querying the expert in the latter setting existing solutions underperform because of an oversmoothing of the prediction probabilities the proposed solution is a couple of posthoc strategies that prevent oversmoothing and achieve high performance in the setting involving fixed costs pros the paper identifies a structural limitation of recent solutions that seriously affects their applicability to realworld scenarios the proposed solution is theoretically grounded and easy to implement a careful experimental evaluation confirms the substantial advantage of the proposed posthoc approaches wrt sota solutions cons the solution is a posthoc strategy in which the base model and the expert cost estimator in the thresholding case is trained in a simpler setting zero fixed cost and the aggregate model is adjusted posthoc on the desired fixed cost c0 this twostep strategy is usually suboptimal as the model is not optimized for its working conditions there is not much discussion on the limitations of the proposed solution my suggestion is to add something on the limitations of a twostep strategy possibly hinting at future directions or open problems for a fully joint learning setting that also prevents oversmoothing after rebuttal the authors better highlighted how the suboptimality of the two step procedure is for threshold correction or could be for rejector training quantified and possibly addressed
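the confidence baseline requested above can be written down directly the sketch below assumes access to classifier probabilities and an estimate of the expert error probability and defers whenever the estimated classifier error exceeds the expert error plus the fixed cost c0 all names and the toy data are illustrative rather than the papers estimators

```python
import numpy as np

def defer_decision(clf_probs, expert_error_prob, c0):
    """Confidence-style deferral rule: predict with the classifier unless its
    estimated error exceeds the expert's estimated error plus the fixed
    deferral cost c0 (a sketch of the baseline described above)."""
    clf_error = 1.0 - clf_probs.max(axis=1)          # estimated misclassification probability
    expert_cost = expert_error_prob + c0              # expert error plus query cost
    defer = clf_error > expert_cost
    predictions = clf_probs.argmax(axis=1)
    return predictions, defer

# toy usage: 5 samples, 3 classes, a moderately reliable expert, c0 = 0.1
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],
                  [0.60, 0.30, 0.10],
                  [0.34, 0.33, 0.33],
                  [0.80, 0.10, 0.10]])
expert_err = np.full(5, 0.2)
print(defer_decision(probs, expert_err, c0=0.1))
```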
### Summary:
|
this paper proposed a novel framework for learning to defer l2d which decides when to defer the decision on an instance to an expert based on an estimate of the experts cost the key results include identifying the failure mode of existing l2d approaches a novel postdoc estimation procedure for model calibration and thorough experiments showing that the proposed algorithm works well when compared to sota during the rebuttal phase the authors included additional experiments which make the empirical performance more convincing eg baselines posthoc thresholding and additional baseline results as suggested by reviewer xmuy other than several clarity issues there were no critical concerns in the reviews there are valuable suggestions in the reviews including improving the clarity when introducing key concepts and notations in the main text and providing details of the experimental setting and results the authors are strongly encouraged to address the concerns raised in the reviews when preparing a revision of this paper
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this work is progressing towards solving the universal domain adaptation unida problem where we dont assume any prior knowledge about the target classes the proposed algorithm named uniot utilizes optimal transport ot to handle two main issues i common class detection and ii private class discovery the common class detection is run by establishing partial alignment between the target features and the source prototypes while the private class discovery is conducted through aligning the target features with the learnable target prototypes both are achieved by solving the ot optimization problem from the empirical evaluations on the office officehome visda and domainnet the proposed algorithm uniot performs significantly better than the existing unida methods indicated by the hscores under varying ratios of common classes uniot shows a certain level of robustness compared to dance and uan strength the ot optimization approach to addressing the unida problem by encouraging common class detection and private class discovery is novel and technically sound the performance gain over the existing methods is substantial along the line of unida the manuscript is wellwritten weakness would be great if information of both training and inference elapsed time is provided especially at the inference stage as it requires to run an additional ot optimization it is implicitly stated in the conclusion that the current approach requires relatively high memory usage and will find a more memoryefficient way docsepthis paper proposes an optimal transport framework to jointly consider common class detection and private class discovery for unida the key point is automatically discovering the intrinsic difference between common and private classes based on the statistical information of the assignment matrix strengths this paper addresses universal domain adaptation the partial alignment for common class detection and representation learning for private class discovery based on uot and ot models have achieved relatively high classification accuracy the method of experimental comparison is comprehensive weaknesses the proposition of this method to automatically discover private specific classes lacks meaning in practice since we do not have the labels of these classes we cannot use these data but it brings more complex computing resource consumption the fundamental goal of unida is to improve the precision and recall rate of common classes identifying private classes is only a process so it can be regarded as a class the new metric h3score is the same problem yes docsepin this work the authors proposed a unified domain adaptation framework supported by optimal transport in order to transfer among those domains having different categories specifically the authors detect those common classes by predicting the statistics of an unbalanced ot objective next the private class discovery is achieved by doing local alignments in the target domain pros overall this paper is well written the authors provided detailed information about how they designed the training process step by step the authors have provided enough quantitative results in order to demonstrate the effectiveness of their method cons the proposed method is relatively adhoc the proposed method serves as discriminative losses for supervised learning problems and improves performance in the target domain missing discussion on batch optimal transport the authors mainly implemented their framework using batch samples however it is known that minibatch optimal 
transport leads to nonoptimal coupling so there should be some discussions on how batch optimal transport affects this method qualitative illustrations will be very helpful in illustrating this methods performance for example the authors did a great job in creating figure 2 to give a highlevel illustration of the framework it would be great if the authors could give an example of a batch of data not only the ot couplings but also the corresponding source and target samplespolytopes no negative societal impact docsepthis paper proposes a novel framework named uniot which jointly considers common class detection and private class discovery in universal domain adaptation unida uniot leverages unbalanced optimal transport uot to detect common classes in the target domain without any predefined threshold values in addition uniot learns an otbased representation learning for private class discovery by considering both global discrimination of clusters and local consistency of samples furthermore the authors introduce a new evaluation metric for the unida problem finally experiments show that uniot outperforms several baselines on unida in this paper the authors raise two interesting problems of unida that previous works do not consider carefully their method to tackle the aforementioned issues is novel to the best of my knowledge experimental results back the efficiency of the proposal in unida the authors also conducted adequate ablation studies to verify the importance of different components in their framework overall this paper is wellstructured and easy to follow the main concerns include the clarity of the paper in both methodology and experiment parts the authors specified one limitation of their method in section 5
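to make the ot machinery discussed above concrete here is a minimal numpy sketch of balanced entropic ot solved with sinkhorn iterations between a batch of target features and source prototypes it is only an assumed illustration and does not reproduce the unbalanced ot used for common class detection or the minibatch corrections the reviewer asks about

```python
import numpy as np

def sinkhorn(cost, a, b, reg=0.1, n_iters=200):
    """Entropic OT between histograms a (target samples) and b (source
    prototypes) given a cost matrix; returns the transport plan.
    Balanced formulation only."""
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u + 1e-12)
        u = a / (K @ v + 1e-12)
    return u[:, None] * K * v[None, :]

# toy usage: 32 target features vs 10 source prototypes, cosine-distance cost
rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 64)); feats /= np.linalg.norm(feats, axis=1, keepdims=True)
protos = rng.normal(size=(10, 64)); protos /= np.linalg.norm(protos, axis=1, keepdims=True)
cost = 1.0 - feats @ protos.T
plan = sinkhorn(cost, np.full(32, 1 / 32), np.full(10, 1 / 10))
print(plan.sum(), plan.shape)   # total mass ~1, shape (32, 10)
```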
### Summary:
|
this work tackles universal domain adaptation a challenging problem that is usually encountered in real practice the proposed uniot algorithm utilizes optimal transport to enable common class detection and private class discovery it is interesting to see that ot can be extended to simultaneously solve these two problems reviewers were at the borderline in their preliminary opinions but after rebuttal and reconsideration most reviewers acknowledged that their concerns were addressed and improved their final rating substantially ac considered the paper itself as well as all reviewing threads and concluded that the paper has put forward a nice unified ot framework for the challenging universal domain adaptation problem yielding promising empirical performance while introducing some nice technical benefits such as the removal of the tedious weight thresholding for outlier class discovery thus the paper is recommended for acceptance
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
paper summary this paper addresses a problem of modelagnostic metalearning maml and most of its variationsextensions learning only a single parameter initialization for the entire task distribution which might not be effective when the task distribution is too diverse inspired by gradient boosting which aims to train a new learner to predict the residuals of the previously predicted result for each step this paper proposes a meta gradient boosting framework specifically the proposed framework represents the metalearned global initialization as a base learner consisting of the first few weak learners which is responsible for acquiring sharable transferable knowledge by learning across all tasks or task modes then many gradientboosting modules learn to capture taskspecific information to fit diverse distributions more effectively this paper evaluates the proposed framework and the baselines including maml multimodal maml and leo on both the fewshot regression task and the fewshot image classification task to create diverse task distributions it combines different function families linear sinusoidal etc for the regression and merges multiple datasets omniglot miniimagenet etc for the classification the experiments show that the proposed metagradient boosting framework with 5 gradient boosting modules achieves better or competitive results compared to the baselines ablation studies justify some design choices of the proposed framework including the architecture of the weak learner the number of gradientboosting modules the updating strategy for the gradient boosting modules the boosting rate etc i believe this work studies an important problem and proposes an interesting framework yet i have a few concerns regarding experimental setup and results which prevent me from accepting this paper see below for details paper strengths motivation the motivation for addressing the issue of modelagnostic metalearning maml learning only a single parameter initialization for the entire task distribution which might not be effective when the task distribution is too diverse is convincing novelty as far as i am concerned the idea of utilizing the intuition of gradient boosting is novel and interesting this paper presents a reasonable way to implement this idea technical contribution ablation study ablation studies provide insights that help understand the proposed framework and justify many design choices this includes the architecture of the weak learner the number of gradient boosting modules the updating strategy for the gradient boosting modules the boosting rate etc clarity the writing is very clear and the figures illustrate the ideas well also the organization of the paper is easy to follow experimental results the description of the experimental setup is comprehensive the presentation of the experimental results is clear the proposed metagradient boosting framework outperforms or at least performs competitively compared to the representative baselines this paper studies a variety ie different numbers of task modes of settings which provides great insights paper weaknesses baselines i believe this paper ignores many important baselines as a result the experimental conclusions are less convincing i list some of the baselines and brief reasons why i believe they should be included as below hierarchically structured metalearning hsml is designed to perform soft clustering on tasks it would be interesting to see if hsml can handle the task distributions considered in this paper also it shows superior 
performance compared to maml and a workshop version of multimodal maml so it is a stronger baseline to be included probalistic maml bayesian maml both of these two methods consider outperforming maml intuitively this probabilistic schema should deal with multimodal task distributions better since it inherently learns to handle uncertainty it would be great to compare against at least one of them protomaml presented in the metadataset paper triantafillou et al in iclr 2020 is designed to deal with multiple datasets combined and therefore should be able to perform well on the setup considered in this paper protomaml shows the strongest performance compared to maml based methods and even outperforms some metricbased metalearning methods showing the proposed framework can outperform or perform competitively compared to protomaml would make this paper much stronger metricbased metalearning methods this paper does not include comparisons against any metricbased metalearning methods such as matching networks prototypical networks relation networks tadam etc yet the stateoftheart results on the fewshot image classification are mostly achieved by metricbased metalearning methods therefore i believe it would be essential to include representative metricbased metalearning methods maml with a comparable number of parameters since the proposed metagradient boosting framework has more parameters than the vanilla maml it is possible that the performance gain comes from the larger capacity it would important to include a maml baseline that has a comparable number of parameters to justify this rl experiments as far as i am concerned the fewshot regression task is more a task for detailed analysis for research rather than a task with a wide range of applications and the stateoftheart results of the fewshot image classification task have been mostly achieved by metricbased metalearning methods therefore i am mainly interested in the modelagnostic metalearning line of work because of its potential in reinforcement learning where the ability to adapt to unseen scenarios is crucial yet this paper does not include any experiments on rl without any reasons which makes the paper less convincing to me mgb1 vs maml it seems that the proposed metagradient boosting model with one gradientboosting module outperforms the vanilla maml by a significant margin on the regression task yet only performs similarly to the vanilla maml on the classification can the authors give some intuition about why this is the case metadataset to evaluate if the proposed framework and the baselines can deal with diverse task distributions on the fewshot image classification task this paper combines four different fewshot learning datasets to produce a 4mode classification task yet the metadataset has been proposed for this purpose also the metadataset paper provides a comprehensive comparison of recent metalearning methods therefore i believe this paper should evaluate the proposed framework on the metadataset and see if it outperforms the baselines stddev of the classification task it seems that the performance gap between the baselines and the proposed framework is insignificant on the image classification task in this case it would be important to also provide the standard deviation of each task after rebuttal i am not satisfied with the response from the authors i can only hardly recognize the effort made by the authors during the rebuttal most of my points were only briefly discussed in the response without revising the paper the 
suggested baselines were merely briefly discussed but not added to the comparison the results of the metadataset which in my opinion is the most suitable dataset for the purpose are still not included in the revised paper the stddev of the classification task is not still not provided making it hard to justify the performance gain why is rl left to future work i have read the reviews from other reviewers with the little revision from the authors i have decided to keep my original rating and would not recommend this paper to be accepteddocsepthe paper proposed a method to incorporate neural networks into gradient boosting for metalearning with a limited number of samples in each task the simulated experiments where the generated tasks contain four continuous function demonstrated that the proposed model outperformed maml and its related variants and on the classification task it performed similarity to a previously proposed method mmaml in general i am very interested in seeing neural networks being combined with boosting algorithms but i do have quite a few questions 1 the paper claimed that the learning and updating strategy proposed in the method ensured a weak base learner i have to say that i am not satisfied by the claim one can simply construct a linearlyseparable dataset which doesnt contribute to the overall tasks and it only takes few updates for a twolayer neural network to converge then the learner itself is not weak anymore in addition it adds noise to other tasks in the task set 2 normally when a weak learner is learnt a 1d line search is conducted to find the optimal contribution of this particular learner although shrinkage is usually applied in practice to anneal the contribution of new learners as a regularisation method to avoid overfitting the paper seemed to have ignored the linesearch part which can be done efficiently in 1d whilst only the shrinkage was applied in the modelling and i was wondering if there was a specific reason for that 3 in terms of the application of the proposed algorithm i think it is rather limited my understanding is that although the proposed algorithm worked better than maml and its variants on simulated data the algorithm at the same time involves many hyperparameters including 1 the design of the base feature extractor 2 the design of gradient boost modules 3 the boosting rates and the annealing of them 4 number of local update steps 5 number of global update steps and many other hyperparams regarding to the training of neural networks the first two hyperparameters or designs are critical in the success of the gradient boosting algorithm as weak learners are needed for learning so that the gradient is informative for the subsequent learners tuning those hyperparameters could be a timeconsuming and also nontrivial task itself already compared to the maml algorithm itself or generally multitask learning approaches for meta learning this algorithm doesnt seem to be outstanding 4 a fair comparison imo could be to use the same feature extractor for producing vector representations of samples and then directly apply gradient boosting with trees for meta learning as mentioned in the paper the authors also agreed that the feature extractors themselves have a huge impact on the final performance therefore it might be a good practice to check how well gradient boosting is able to handle metalearning on top of extracted featuresdocsepthis study is presented clearly and the core idea is interesting however the presented novelty is limited to a globally 
for all tasks and locally taskspecific learning paradigm using a framework inspired by badirli et al 2020 the authors have presented experimental results for both regression and classification setups which are interesting in my opinion the paper has relatively high quality and can be interesting for the iclr community one question regarding figure 4 a you have mentioned that adding more weak learners causes difficulties in capturing multimode patterns but from this figure one can see that it is not completely true for the early epochs before 400 comparing the two weak learners versus one weak learner case and it seems more like fluctuations how would you explain it docsepthe authors propose a metagradient boosting framework that uses a base learner to learn shared information across tasks and gradientboosted modules to capture taskspecific information the proposed approach is applied to various regression and classification tasks to me the motivation of using a gradient boosting framework remained unclear throughout the paper it is stated in the last paragraph of the introduction that recent research with achieving low training errors however why this is important to the metalearning problem remains unclear in section 3 the model is not explained clearly what is a weak learner in the introduction it is mentioned that gradient boosting aims to build a new learner towardswe call the learner for each step as weak learner what is a step here the first few learners are regarded as the base learner for learning the shared information across tasks this was also not evident how it is designed to achieve this given that i have no prior on working with gradient boosting directly i found the motivation and the method quite hard to follow section 31 second sentence where k is the number of adaptionswhat is adaptions what is the definition of a gradient boost how is thetak defined in equation 1 sec 32 the first sentence what does it mean it is mentioned here that a firstorder maml type approach is taken to learn the base learner but how the two goal of using gradient boost modules to ensure taskspecific information is achieved is not clear figure 23 and 4 the axes and fonts are unreadable in results the authors state that the results show that incorporating task identities can significantly improve the performance of multimode learning it was not clear to me how this is true first how is task identity even abstracted or assigned here what features of the approach are able to get this information is not clear the gain classification tasks also seems marginal compared to the other approaches the key emphasis of the authors is on the importance of the approach on multimode distributions it is not made clear why the approach would be suited to this setting this is supported numerically only in the case of regression experiments
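since several reviews ask what a gradient boost step means in this framework the following is a minimal numpy sketch of the generic boosting recursion with a shrinkage style boosting rate where each weak learner is fit to the current residuals a ridge regressor stands in for the papers neural modules and all names are assumptions

```python
import numpy as np

def fit_weak_learner(X, residual, lam=1.0):
    """Ridge-regression weak learner fit to the current residuals
    (a stand-in for the neural gradient-boost modules discussed above)."""
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ residual)
    return lambda Xq, w=w: Xq @ w

def gradient_boost(X, y, n_modules=5, rate=0.3):
    """F_k(x) = F_{k-1}(x) + rate * h_k(x), where h_k is fit to the residual
    y - F_{k-1}(x) (squared loss, so residuals equal negative gradients)."""
    pred = np.zeros_like(y)
    learners = []
    for _ in range(n_modules):
        h = fit_weak_learner(X, y - pred)
        learners.append(h)
        pred = pred + rate * h(X)
    return learners, pred

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
_, pred = gradient_boost(X, y)
print(np.mean((y - pred) ** 2))   # training error after 5 boosting modules
```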
### Summary:
|
the paper proposes a metagradient boosting framework to tackle the modelagnostic metalearning problem the idea is to use a base learner that learns shared information across tasks and gradient boosted modules to capture taskspecific information the experiments show that the proposed metagradient boosting framework with 5 gradient boosting modules achieves better or competitive results compared to the baselines however there were several issues that the author feedback did not address properly for instance r2 was not satisfied that the suggested baselines were only briefly discussed without adding the comparison and r1 pointed out that the claim that the learning and updating strategy proposed in the method ensures a weak base learner does not hold because clearly separable datasets could converge quickly so the learner is not weak anymore besides these two specific concerns the reviewers expected a large revision of the paper due to several cons about the paper all reviewers agreed a major revision is needed before acceptance therefore i recommend rejection
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper mainly experimented on go it leverages fast approximate nearest neighbor techniques in order to retrieve relevant data from a set of tens of millions of expert demonstration states this information provides a significant boost to prediction accuracy and gameplay performance over simply using these demonstrations as training trajectories providing a compelling demonstration of the value of largescale retrieval in reinforcement learning agents 1 the paper is well organized and easy to follow 2 the motivation of this paper is really interesting it show that the retrieval can help rl for training it also shows that retrieval can be effectively combined with modelbased search the benefits from retrieval and search are synergistic with increasing numbers of retrieved neighbors and increasing simulations both leading to performance increases none docsepthe goal of the paper is to enable rl agent to retrieve large amount of information from sources using nearest neighbor queries over experience datasets as testbench the authors rely on the game go where the agent can retrieve information from an auxiliary source of 50m boardstate observations the work studies the tradeoff between leveraging such external information repositories and storing all necessary information into the network as weight to optimize a policy in an offline rl setting an extension for online rl is left open for future work to retrieve information the model first precomputes embeddings for all possible observations based on hidden activations of a surrogate model which are compressed by pca the embeddings are used as query and key for nn search over an external dataset lastly the model output is then generated using the nn results and the current observation strengths in my opinion augmentation of rl agents with any kind of external data is a great idea since it can deliver benefits for realworld use cases for instance by altering the external data an operator can have direct and immediate influence on a deployed rl policy it would be interesting to further study how the auxiliary data influences the behavior of the rl policy evaluation against a human competitor and analysis over different retrieval data size demonstrate a strong benefit main concern evaluation does not support some of the claims of the paper the authors make the claims that potential auxiliary sources can be diverse eg text video and that the approach is versatile regarding the type of auxiliary data for training however in the experiments they only use trajectories as auxiliary data to be fair other kinds of external data should be included in the evaluation to assess if this versatility claim really and how general the approach is for estimating the impact of the retrieval the paper only compares a retrievalenabled model against retrievaldisabled model with the same architecture however how do we know that the improvements are not only due to an increased parameter space the evaluation would benefit from a comparison where only the number of parameters in the retrievaldisabled model are increased to study the claim that a retrievalaugmented network can utilise more of its capacity for computation instead of having to amortise all relevant information into its network weights minor concerns a lot is left for future experiments such as the online rl setup or learning of query vectors indicating that the paper is still preliminary yes docsepthis work investigates the impact of integrating a largescale nearestneighbors retrieval mechanism into a 
muzeroinspired modelbased rl agent with application to playing the game of go in particular a large dataset of 50m go board states is first embedded by a pretrained network during training an input board state representing the current state of the game is itself embedded by the same network and used to perform a fast approximate nearestneighbor nn lookup across the retrieval dataset the topk nns are along with relevant metadata given as input to the modelbased agent who may use the context of these games to both improve its prior policy as well as inform its rollouts during montecarlo tree search a number of experiments show that this retrievalbased agent rba outperforms related baselines trained without retrieval and that interestingly the rba agent can even adapt without additional training to new retrieval datasets strengths research on semiparametric approaches that enable agents to reuse training data beyond what they can distill into their model weights is an exciting direction that is underdeveloped in the reinforcement learning area differing from past work in this direction this work integrates retrieved experience using a simple but highly generic mechanism that can in principle be extended to retrieving information beyond training data eg an embodied agent might retrieve youtube videos related to completing some household task the empirical results presented are limited in scope but are convincing in that there is clear evidence that the rba agent makes effective use of the retrieved data and that this leads to measurable performance improvements finally the paper is wellwritten and all details are well explained weaknesses while i am generally positive about this work there are some weaknesses detailed below that keep me from a more enthusiastic recommendation why not use alphazero as noted on line 189 the training dataset used in this work is generated by an alphazerostyle agent which raises a somewhat unfair question if alphazerolevel performance is what were attempting to achieve why not just use the alphazero agent ie what is to be gained from training a model to imitate a model that we already have i assume the rebuttal to this is that this dataset choice was purely practical and that anyone looking to apply these ideas to a new domain where no such expert is available should instead use the likely humangenerated expert demonstrations they have available my worry is then that the use of this alphazerogenerated dataset may not be a fair standin for multimodal errorrich humangenerated expert datasets because of this the performance of the retrievalbased agent using this artificially generated dataset may be a bestcase and others may find achieving such gains more difficult while the results from sec 34 with the augmented retrieval dataset provides some evidence that the above is not the case i do believe that this work would be much stronger if it had used a humangenerated expert dataset or if there were additional results applying this method to other domains grand aims but limited scope this paper especially the title and abstract suggests a grand scope reinforcement learning augmented with retrieval towards a vision in which an agent can flexibly draw on diverse and largescale information sources as one reads deeper into the paper however the scope is iteratively narrowed until we are left with an agent that is trained to play 9x9 go with expert supervision in an offpolicy manner with a unimodal dataset being used for retrieval namely the training dataset this is a 
failure in expectation management while i still find the work interesting it is difficult to not feel let down going from the grand vision to the reality this work would be significantly stronger if any step was made along 1 using onpolicy training data 2 using multimodal data or at the very least as noted above not artificially generated expert trajectories 3 showing applicability to other games environments linebyline comments fig 1 i find the use of lines to denote gradient flow to be a little confusing i would suggest instead only showing the forward computation and using a symbol along these lines to denote a stopgradient often represented as a line break eg lines 142143 endtoend learning of the query vector seems challenging can you expand on how you intend to accomplish this lines 198199 to confirm does this mean that all observations from a particular game will be in the same portion of the dataset split ie if ot is in split 1 then ot1 will also be lines 224225 is it possible that the gap between the retrievalbased method and the baseline can be closed simply by doing more mcts rollouts if so how many are needed to do so i believe fig 10 begins to get at this question but not completely it would be interesting to know how many rollouts approximately equal one retrieved neighbor fig 4 if you evaluate the alphazero agent on this dataset what is its top1 accuracy lines 271273 this is quite surprising to me doesnt this suggest that the majority of your gains over the baseline are actually not related to retrieval at testtime i would be interested in seeing more analysis of this lines 275291 while i am not an expert in this domain it seems as though this related work is missing a discussion of how people use retrievalbased ideas outside of rl eg a quick search on google scholar shows a large number of works related to building nlp models with access to largescale knowledge bases there is some note of limitations throughout the paper but no dedicated section for their discussion i believe there is room to more emphasize some of the weaknesses ive addressed above assuming i have not identified them incorrectly i dont see any direct potential for negative societal impact from this work docsepthe authors propose a method for retrieving relevant training experience and then making decisions using that experience as context they run experiments in the game of go showing that when they equalize model sizes their model looking at retrieval information performs better than a baseline method that doesnt see retrieval information specifically the retrieval information helps predict values figure 4 and helps win the game figure 5 strengths the motivation of this work conditioning on prior experience is creative i am not aware of any stateoftheart algorithms that condition on prior experience so it would be neat to see that this helps in offline rl as far as i can tell the authors propose a method that can easily be generalized to other domains if the authors method helps in diverse settings that would be significant weaknesses my major issue with this paper is that i dont see the significance of the authors experimental setup the authors motivate their work via offline rl but then they study the game of go go is precisely the type of domain where we dont need offline rl because we can easily simulate the environment dynamics if the authors want to demonstrate the relevance of their method for offline rl they should use an offline rl benchmark such as d4rl or rl unplugged this would allow them to compare against
other offline rl algorithms in suitable offline rl environments as it stands the authors only compare against a baseline that they themselves came up with however there is little analysis to justify that this is a good baseline why did the authors not directly use muzero unplugged 36 in the authors list of references as a baseline is this not possible i am also unconvinced that the proposed method does that much better than the baseline figure 5 would be the most important figure in the paper showing that the proposed method helps win the game however if i understand correctly the model size on the xaxis of figure 5 is an unfair comparison because it is leaving out the parameter count of gphi which is needed for retrieval it also seems that the largest model size only shows a 5 increase in winrate over the baseline this doesnt seem very impressive to me as the opponent is a fixed program with strong amateur level of play and im also not confident in the baseline finally i think the title of the work is overclaiming because it is too broad since the authors only evaluate in go i think the title should make this clear eg largescale retrieval for reinforcement learning in the game of go minor issues typos figure 2 shows dr and describes it as having keys k i believe the keys k are generated using gphi but there is no indication of this in the figure this confuses me i request title and figure changes update after author response my rating remains borderline reject if this paper gets accepted anyway then i request that 1 the title be changed to be more modest eg retrieval for reinforcement learning in 9x9 go and 2 figures 4 and 5 be modified to include the parameter count of the retrieval network while the title of the work makes it seem like a generalpurpose rl study the experiments are only in the game of go the authors motivate the work via offline rl but go is a setting where it is very easy to simulate environment experience so we dont need offline rl in go
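To make the retrieval mechanism discussed in the reviews above concrete, the following sketch shows the generic pattern they describe: state embeddings are compressed (the reviews mention PCA over hidden activations of a surrogate model) and indexed for fast nearest-neighbour lookup, and the top-k neighbours are returned as extra context for the agent. This is a minimal illustration under assumed tooling (scikit-learn PCA and a FAISS flat index), not the authors' implementation, and the array sizes are toy placeholders.

```python
import numpy as np
import faiss
from sklearn.decomposition import PCA

# Toy stand-ins for embeddings of the retrieval dataset and of the current observation.
retrieval_embeddings = np.random.randn(100_000, 512).astype("float32")
query_embedding = np.random.randn(1, 512).astype("float32")

# 1) Compress embeddings before indexing (the reviews mention PCA compression).
pca = PCA(n_components=64).fit(retrieval_embeddings)
keys = pca.transform(retrieval_embeddings).astype("float32")
query = pca.transform(query_embedding).astype("float32")

# 2) Index the compressed keys and fetch the top-k neighbours for the current state.
index = faiss.IndexFlatL2(keys.shape[1])  # exact search; a large-scale system would use an approximate index
index.add(keys)
distances, neighbour_ids = index.search(query, 16)

# 3) The retrieved states (and their metadata) are then fed to the agent as additional context.
retrieved_context = retrieval_embeddings[neighbour_ids[0]]
print(retrieved_context.shape)  # (16, 512)
```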
### Summary:
this paper uses nearest neighbor methods to retrieve and exploit information from similar games during planning whilst playing the game of go although the method is extensible to other environments which support muzerostyle agents the reviewers found this approach interesting and ultimately worth publishing although there was a range of scores however the emerging consensus during the review phase seemed to lean towards acceptance a recommendation i am happy to support from having read the discussion
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the benchmark provides the first two datasets to examine multimodal robustness in videotext settings through algorithmic perturbations to youcook2 and msrvtt the work offers a lens to understand how both textual and visual alterations affect retrieval after introducing the benchmark the work continues to explain how existing multimodal models perform on the task and analyzes early trends that may be caused by the modeling choices this work provides a formulaic and wellreasoned approach to creating a suite of tests to benchmark multimodal videotext models the perturbations are both numerous and common sense offering for some very granular analysis if desired i can easily see this becoming a commonplace task to benchmark future models similar to the spirit of imagenetv2 for classification robustness the broader community would definitely enjoy the utility provided by the dataset since it is built from popular existing datasets the framework is extensible and likely to be maintained for the foreseeable future it poses no ethical risks major while the paper gives a helpful early analysis of the existing multimodal models it struggles to give the proper nuance to some of its conclusions although the appendix includes a section on limitations it focuses more on future directions that are imperative to multimodal understanding rather than inherent limitations of the work and of its methods the exploration gives some very interesting initial findings and trends that could have important impact on how we design models in the future however the limited number of models studied n4 unique architectures should strongly bound the claims made minor what does the pm mean in the tables standard deviation 95 confidence interval while absolute robustness gammaa is defined relative robustness gammar is not what does it signify while helpful the figures are challenging to read at times in figure 2 the pairing of black on purple makes the textures nearly impossible to read are there alternative texturescolorings that can be used what are the floating dashes in figure 5 what is the yaxis representing in figures 5 and 7 could it be so that the text on the right side of figure 5 matches the ordering of that on the xaxis in line 138 swaptext and is any should become swaptext is a in line 266 if significantly is not referring to the statistical usage i recommend a different word docsepthe paper proposes two texttovideo retrieval datasets and conducts comprehensive robustness experiments to study the robustness of multimodal models against distribution shifts the authors consider 90 visual and 35 textual perturbations in total the paper also draws some interesting conclusions based on the experiments under different settings 1 the paper proposes a multimodal benchmark with two largescale datasets which is useful for research in both modalities as well as multimodality 1 the perturbation category is clear and comprehensive all types of perturbation strategies are concisely summarized 1 the paper points out the weakness in several multimodal models and draws some conclusions useful for model design 1 the evaluation metrics are not clearly explained many terms used in the paper are not clear for example in line 196 absolute retrieval drop and relative retrieval drop are not introduced in tables 2 3 and 4 the meaning of the relative robustness score is not explained its better to introduce the metrics before using them in the experiments 1 some conclusions do not match the experimental results correct me if im wrong for 
example i dont think the conclusion on line 228 finetuned models are slightly more robust on machinebased and natural test perturbation while being 68 more robust on synthetic perturbations is correct based on the results 1 some captions do not match the numbers in the table for example in table 4 twobranch encoders are only more robust on youcook2 dataset while being less robust on msrvtt dataset 1 the figure does not match the introductions in the paper in appendix figure 1b some perturbations from changechar should belong to machinebased text perturbations also in the paper we have the synthetic type while in the figure we only have natural and machinebased types docsepthis paper proposed two largescale benchmark datasets by applying 90 visual and 35 different textual perturbations to the existing retrieval datasets msrvtt and youcook2 the benchmark is evaluated using several sota baselines of different input features and training strategies and two proposed metrics the evaluation is followed by a comprehensive and careful analysis of the robustness of multimodal models under realworld distribution shifts and several interesting observations on how pretraining architecture etc affect multimodal robustness 1 the paper provides organized and detailed descriptions of different visual and text perturbations applied in creating the benchmark and the types of perturbations are relatively comprehensive 2 the choice of sota baselines is properly justified with detailed descriptions and comparisons of model architectures objectives and training strategies 3 empirical results are carefully analyzed with highlighted conclusions which are easy to follow major concerns from the reviewer include 1 the scope of multimodal robustness evaluation provided by the two proposed datasets is fairly limited only evaluated on the texttovideo retrieval task with very little mention of videoqa towards the end of the paper furthermore given this evaluation scope either the contribution is limited or the generalization of the conclusions becomes less convincing 2 the proposed metrics are not novel and the reviewer believes they are highly similar to the proposed metrics in the following paper liang et al multibench multiscale benchmarks for multimodal representation learning neurips 2021 datasets and benchmarks track which does not appear in the related work and the paper should clarify the different contribution note that the absolute robustness metric is the same while the reviewer does not find any formal definition of the relative robustness metric throughout the paper 3 only observations but their interpretations are provided in the paper it will be more interesting to provide more insights into the key observations that are less intuitive for example how do crossattention and pretraining relate such that crossattention and the twobranch approach are equally robust in the pretrained case moreover since both crossattention and pretraining affect model robustness does the comparison experiment on the claim that pretraining helps robustness have any confounding factor 4 what realworld distribution shifts will the temporal perturbations in particular box jumbling correspond to the drop in performance caused by random shuffling of frame segments sounds reasonable to models that utilize more temporal dependencies instead of a robustness issue 5 please consider fixing typos eg line 176 and confusing notations what is i in the subscript line 197 docsepthe submission show a fact that the robustness of some approaches 
of multimodal models against realworld perturbations has not been studied and they perform the first robustness study about this aspect after experiments they found some models and encoders are more robust under some conditions i think these contributions will act as a benchmark for future research in robust multimodal learning 1 originality the submission is of full originality it points out a novel but important and realistic idea to find out the robustness against realistic distribution shifts the submission performs lots of experiments and creates two new datasets and some perturbations 2 quality the submission is of high quality the team performed enough experiments and adopted reasonable metrics to evaluate the results 3 clarity the organization of the paper is nice for example the abstract and introduction are simple but clearly clarify the shortcomings of current multimodal research and the significance of their work and the other parts of the paper represent their workload very well 4 significance as mentioned in the originality section the submission not only performs the first extensive experiment on the robustness of multimodal approaches against realistic distribution shifts but also creates two related datasets and defines several perturbations which will contribute to future related research 1 there are some problems with the figures in figure 4 as far as i am concerned i prefer dark color representing big values to plain color representing big values because i think the former is more intuitive 2 figure 6 looks like it is embedded into the text and it influences the overall layout of the paper 3 table 5 has the same problems as figure 6 and table 5 lacks a description docsep the paper brings up a study on the robustness of multimodal models which uses text and video it proposes a benchmark of videocaption pairs with different types and intensities of perturbations for each of the modalities this benchmark is built using two well known videocaption datasets by adding perturbations on the two modalities and evaluating on the perturbed sets the paper compares four different models and uses the texttovideo retrieval score to calculate the robustness of these models depending on the perturbation this work proposes a metric to measure the robustness based on the retrieval scores the paper approaches an important topic on the robustness of multimodal models this is the first paper that tests the robustness on videotext cases this is an interesting topic for the community since it can also help understand the relevance of each modality and the robustness of methods when the train and test dataset distributions are mismatched for either modality the variation of the different perturbations proposed is wide and could potentially give an idea of the robustness of the methods on realworld perturbations the models used for the study are relevant in the field the analysis based on type of perturbation model type and training scheme is an interesting way to compare the robustness of models under different circumstances 1 the paper states 3 different contributions contribution 1 analyze the robustness of multimodal models against different realworld distribution shifts and 3 provide insights including comparison of different model architectures training procedures and effect of various perturbations on model performance corresponds to the same contribution ie provide an analysis of the robustness of 4 different models on two videotext datasets under several perturbations and different training schemes this should be stated
as a single contribution 2 the paper claims to study the effect of realworld perturbations on multimodal models in what real case a video would contain a jumbling or box jumbling perturbation to my intuition these perturbations would show the relevance of the temporal information for selected models but are not found in the realworld the droptext replaces words with unk tokens which creates captions that do not appear in the realworld additionally i question the relevance of removing all nouns and verbs from the text this perturbation destroys all semantic information in the captions leading to a wrong matching with any video 3 there is a typo in line 197 with the subscript of the retrieval score the absolute robustness metric is defined in line 199 while the relative robustness gammar used for most of the tables is not defined in the paper the absolute robustness is only used in table 5 for the videoqa in the main paper given that the relative robustness definition is not presented it is hard to make conclusions from the scores presented in the tables the absolute robustness score does not as the name states show the significance of the drop with respect to the original performance of every method for example if the original clean score is low a small drop in performance high absolute robustness due to perturbations would represent a low relative robustness absolute and relative metrics should be used together when comparing robustness 4 based on figure 2 the overall robustness comparison with respect to visual text and both modalities perturbations why do you think coot is not robust to text perturbations but more robust when both video and text are affected 5 the paper has several claims in the summaries of each section a twobranch approach is typically more robust word2vec is more robust on semanticchanging bert is more robust for nonsemantic changing perturbations however to make these claims it is required to test a bigger sample of methods or more generic and equally trained methods than just one which uses word2vec vs 3 bertbased methods and one crossattention vs 3 twobranch approaches 6 figures 5 6 and 7 are very hard to understand the black lines are not well explained only after line 293 one can know they correspond to the clean scores which are necessary for understanding the figures plots are missing axis titles and explanation of the content as a general remark the presentation of the figures needs to be improved docsepthis paper proposes a new benchmark for videotext robustness evaluation for this the authors propose to reuse the popular msrvtt and youcook2 benchmarks by applying many kinds of noises to and evaluating models by their textvideo retrieval and videoqa performances the paper addresses an important topic of evaluating multimodal models with regards to their robustness against perturbations the paper is wellwritten and easy to understand the paper evaluates a set of common pretrained models and provides some overarching results that are informative for developing multimodal models that are robust temporal perturbations for video models i focus on this temporal aspect because it is in my opinion the most important one that contrasts prior robustness work such as that from geirhos et al to this work therefore getting this right is crucial for providing a contribution to the community effect of perturbation might well be dataset dependent eg if a caption reads a man jumping from left to right changing the temporal order might well change the semantics of the video 
this might be more acute for datasets like somethingsomething but an analysis of these actions which are asymmetric with time would be needed for this to indeed properly measure robustness a good reference here might also be the arrow of time paper by wei et al cvpr 2018 do we instead want temporal unrobustness with methods such as clip working extremely well on solving many tasks simply by looking at a single frame should strong videomodels that truly look at motion instead of scenes be unrobust instead it would therefore be very interesting to analyse the robustness and downstream performances against this temporal robustness to see if there are patterns additionally the work from buch et al cvpr 2022 might provide additional information about where the temporal aspects are indeed necessary finer grained analysis of temporal perturbations as outlined above temporal reversal might very well make perfect sense for many videos eg ball bouncing while not for others eg car crashing therefore mixing the robustness results of this experiments with other perturbations such as frame shuffling might blur some informative signal that could be extracted from the experiments what about more recent multimodal works that are trained on ht100m like gdt patrick et al iccv 2021 fit bain et al iccv 2021 brave recasens et al iccv 2021 for the first two works models are also publicly available i believe and are also relevant related work i believe running further evaluations should not take too much effort but cementing the results with further evidence especially with more recent models the milnce model the authors use is from 2020 would strengthen the paper
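Several reviewers above note that the relative robustness score used throughout the tables is never formally defined. One plausible formalization, consistent with how drop-based robustness metrics are defined in related benchmarks, is sketched below; the exact definitions in the paper may differ, so these formulas should be read as an assumption rather than the benchmark's ground truth.

```python
def absolute_robustness(clean_score: float, perturbed_score: float) -> float:
    """Absolute drop in performance under perturbation (a larger drop means less robust)."""
    return clean_score - perturbed_score

def relative_robustness(clean_score: float, perturbed_score: float) -> float:
    """Drop normalized by the clean score, so a model with weak clean performance
    is not credited for a small absolute drop."""
    return (clean_score - perturbed_score) / clean_score

# Example: text-to-video recall@5 of 60.0 on clean data falling to 45.0 under a perturbation.
print(absolute_robustness(60.0, 45.0))   # 15.0 points
print(relative_robustness(60.0, 45.0))   # 0.25, i.e. a 25% relative drop
```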
### Summary:
this paper proposes new benchmarks for probing videotext models robustness that include a multitude of visualtextual perturbations all of the reviewers have acknowledged the usefulness and effort put into the benchmark construction and presented analysis some of their concerns centered around lacking metric definitions limited scope videotext detachment from realworld scenarios issues with figuretable presentation etc after rather extensive discussions many of the concerns have been resolved presently 4 out of 6 reviewers argue for acceptance 3 of them strongly the remaining 2 reviewers maintain their opinion on the limitations of the presented work most importantly somewhat inconclusive takeaways limited number of models compared initially 5 the authors have added one more mostly one task videotext retrieval i believe having 6 models is acceptable for the proposed study upon carefully examining the claimsarguments i encourage the authors to scale down their claimsnarrative and perhaps even rename the paper to more explicitly acknowledge the emphasis on videotext retrieval but i still think the analysis as such is valuable and thus recommend acceptance
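The temporal perturbations questioned in the reviews above (frame jumbling, box jumbling, reversal) are easy to state precisely even if their real-world plausibility is debatable. The sketch below shows how such perturbations could be applied to a video represented as an array of frames; the function names and segment length are illustrative assumptions, not the benchmark's actual code.

```python
import numpy as np

def jumble_frames(video: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly permute all frames, destroying temporal order entirely."""
    return video[rng.permutation(len(video))]

def box_jumble(video: np.ndarray, segment_len: int, rng: np.random.Generator) -> np.ndarray:
    """Shuffle fixed-length segments while keeping the frame order within each segment."""
    n = len(video) // segment_len * segment_len
    segments = video[:n].reshape(-1, segment_len, *video.shape[1:])
    shuffled = segments[rng.permutation(len(segments))]
    return np.concatenate(list(shuffled) + [video[n:]])

rng = np.random.default_rng(0)
video = np.arange(16)[:, None]  # 16 toy "frames", each a 1-d placeholder
print(box_jumble(video, segment_len=4, rng=rng).ravel())
print(video[::-1].ravel())      # temporal reversal, the case debated for time-asymmetric actions
```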
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposed the first publicly available largescale dataset of imagetext pairs it consists of 58 billion pairs in total and the authors successfully reproduced the clip models of different sizes using the proposed dataset the toxic offensive and pornographic content are tagged rather than removed to encourage research in fields such as dataset curation the data bias and limitations are well discussed and noted in the paper extensive experiments validate the effectiveness of the proposed paper a good paper it is very happy to see such a big dataset 5 billion imagetext pairs opensourced since this can significantly boost the development of languagevision pretraining techniques the imagetext pairs in the dataset are divided according to the languages and each part is on a large scale this provides new opportunities for multilingual and potentially lowresource language researchers the pairs are welltagged with toxic offensive and pornographic this encourages research in fields such as dataset curation several research works have been conducted based on a prereleased subset laion400m which proves the value of the proposed dataset the potential bias caused by such a large dataset is well discussed in the paper 1 figure 3 is missing 2 the data is filtered by vitb32 in the postprocessing part is there any chance that the data would be filtered by a much larger model like vitl besides as the author claimed that the clip model in itself introduces a bias towards laion are there any other techniques in the future to avoid this bias and even measure the effect brought by the bias it would be appreciated if the authors can discuss it docsepthe authors propose a massive 5 billion instance dataset comprised of imagetext pairs this functions as a novel exceptionally large dataset for the research community and is the only opensourced one of its size on top of its large size laion 5b demonstrates novel data aggregation techniques sophisticated curation pipelines and extends several key analyses by providing both a reproducible process and pipeline as well as open sourcing all their data and code laion 5b exhibits a high quality standard of reproducibility and visibility furthermore laion 5b provides access to a large open source implementation of clip by collecting images using clip as a filter and adequately screening for ethical and moral hazard while providing important disclosures about this datasets usage laion continues the work of other papers demonstrating that important data quality analysis can be performed at large scales 1 the dataset describes and utilizes a very well defined aggregation methodology 2 the paper and dataset employ a useful approach of cosine similarity filtering employing clip to compute these scores 3 the authors apply several safeguards to prevent unsavory and biased information from affecting the dataset the application of these safeguards does not in itself resolve all issues but this is a step in the correct direction 4 the authors evaluate the models trained on their dataset using a variety of tasks and provide detailed information in the supplementary material regarding these experiments 5 ethical concerns have discussion that is equal to that of its predecessor papers 6 allowing access to all data enables independent probing and review of the dataset itself 7 technical limitations are well outlined and clearly explained 8 the supplementary materials webpage blog services and figures are a true highlight this meets the highwatermark for excellent documentation and
supplemental information this paper is certainly significant within the field however there are some important weaknesses to mention 1 investigation of other model architectures and their scaling to a dataset of this size would be useful to include 2 the inclusion of fewshot learning or a linear probe for downstream evaluation tasks would allow for more comparisons to other datasets 3 glide and generative works could have been attached in the supplementary or explained in greater detail 4 while there is a reasonable amount of ethical review it would be nice to see a greater probe into the nature of the content throughout laion 5b there could have been more manual analyses performed furthermore while the authors attempt to mitigate the risk of potential harms in their dataset the certainty with which they assert their mitigations could lead to confusion this point is continued in the ethics section docsepthis paper created a new open dataset laion5b that contains over 585 billion imagetext pairs of which 232 billion contain english language besides this work provided a replicable pipeline for the data curation process based on common crawl including data acquisition distributed processing of common crawl distributed downloading of images and postprocessing to validate the usefulness of laion5b in training and analyzing the largescale visionandlanguage models this work trained the clip models of various scales on a subset of the dataset and showed that they match the strong zeroshot and robustness performance of the original clip counterparts trained on closed curated data finally the safety ethical concerns and data biases were thoroughly discussed 1 the proposed laion5b is the largest openly available dataset for training visionandlanguage models 2 the data collection and curation process has been comprehensively discussed to support community investigation and further improvement 3 experiments support that clip models trained on laion400m match the performance of original clip models trained on the private dataset 4 the dataset will add much value to the multimodal learning community regarding 1 being able to train largescale models 2 publishing pretrained models for many downstream tasks and 3 being able to audit and refine a dataset of this magnitude 1 why does the clip model with vitl14 trained on laion400m perform largely worse than the original clip model with vitl14 compared to the cases with vitb32 and vitb16 2 in the abstract the authors claimed that we show successful replication and finetuning of foundational models like clip and glide using the dataset however i didnt see the experiments of replication or finetuning on glide models 3 figure 2 is difficult to interpret maybe add more detailed caption descriptions and also refine the whole diagram to make it more accessible 4 what are the wat files in common crawl 5 in the related work section i suggest putting the reference numbers before the full stop at the end of a sentence docsepthis paper introduces the laion5b dataset an ultralargescale collection of imagetext pairs designed for visionlanguage research the dataset contains additional metadata about each imagetext pair including language information resolution and multiple content flags the metadata enables researchers to filter the dataset into smaller subdatasets for example by language laion2ben or resolution laionhighresolution the contribution of the paper is clear the dataset is significantly larger than any previous imagetext dataset openly available to the public 
from the perspective of this reviewer this is a significant and valuable contribution there is little doubt that over the past five years the increase in dataset scale has been an important factor in the large improvements of visionlanguage models it seems likely that the trend of increasing model size dataset size and training time will continue as it has in nlp regarding explicitquestionable content the authors leaned on the side of tagging and including data rather than removing it from the dataset this decision enables researchers and practitioners to decide upon their own dataset filters giving them the greatest amount of flexibility when constructing their data subset on the other hand the inclusion of a very large amount of explicit content could lead to some users training models on such content without intending to do so many organizations especially those subject to european privacy laws will inevitably have to refilter that data in order to ensure that it complies with relevant policies and regulations further ethicalprivacy issues are discussed in the ethics section below the two primary strengths of this dataset are its size and its openness with regard to size there are no other datasets of similar or even close to similar size that are available to the community collecting a dataset of this scale requires a significant engineering effort as emphasized throughout the paper all aspects of the data downloading processing and tagging pipeline are opensource these could be used to reproduceupdate the dataset and to collect specialized datasets in the future in general the openness of the project is a refreshing change from other largescale visionlanguage accessible only by api this paper also makes a few minor contributions which should not go unmentioned the img2dataset tool is already widely used the watermark detection model is useful to the community the vtab dataset is a slightly larger version of vtab the primary weakness of the paper is that there should be a more indepth analysis of the dataset contents and filtering methodology it is not reasonable to expect the authors to train a huge number of models on the dataset but it would be appreciated if they provided more datasetlevel statistics especially with regard to the nsfw content for example one piece of analysis that could improve the paper would be a human review of a small random subset of images perhaps on the order of 10000 images this human review would help us assess the accuracy of the clipbased nsfw tagging as well as the other models such as the watermark tagger it may also identify some forms of harmful content which are not caught by the current filtering methodology relative to the high cost of data processing and model training this would probably not be a large cost for the project overall a second example would be a more indepth exploration of the different types of images contained in the dataset for example how many of these images are stock photos how many are memes how many are advertisements etc i would not expect exact numbers but a broad estimate would still be valuable a third example would be a linguisticnlp analysis of the captions for example how many of the captions are full sentences how many contain proper nouns how many contain numbers how many contain locations etc some aspects of the data pipeline could also be enhanced to make the data more accessible and more widelyused for example the paper emphasizes in multiple places that it is designed for use primarily in an academic setting 
however as the paper also mentions training models on data of this scale requires extremely large computational resources which are generally not available to academia with the exception of a few supercomputers such as juwels based on this it seems likely that in practice the dataset will be primarily used in industry and industrial research settings to facilitate use of the data in academic settings it may be valuable to provide some tools for researchers with fewer computationalstorage resources for example perhaps one could provide a utility for dataset streaming or a small but very highquality subset of the data for academic usecases docsepthe main contribution is a new largescale opensource 585 billion imagetext pair dataset named laion5b the dataset is a folloup to laion400m both datasets were crawled from the web and filtered with clip with the goal to keep only pairs where images and text are related to each other the authors describe the collection process and properties of the resulting dataset next they explain and show usage examples for laion5b and laion400m both new contributions by the authors subdataset generation and contributions from other works clip reproduction generative visionlanguage tasks the main experiments in this work are reproductions of various clip models on the laion400m and laion2ben datasets and their comparison to the original clip model trained on the private webimagetext dataset clip wit the authors evaluate the models on zeroshot top1 classification accuracy on imagenet and 4 datasets with distribution shifts compared to imagenet eg objectnet was collected to intentionally show objects from new viewpoints on new backgrounds quote from httpsobjectnetdev additionally the authors scale up to vtab benchmark with 19 tasks to vtab with 35 tasks and evaluate on this benchmark vtab and vtab contain natural tasks classical vision like cifar specialized tasks remote sensing like areal images or medical images and structured tasks which require counting objects or predicting their orientation zeroshot classification evaluation is done using predefined text prompts for each class and selecting the class where the prompt has the highest predicted similarity to the image experimental results show 1 it is possible to match the performance of the original clip vit model when training on laion datasets 2 the model trained on 2b images improves over 400m images 3 zeroshot performance of models does not seem to saturate yet when increasing compute so further upscaling may improve results further finally there is an extensive discussion on technical limitations safety ethics and biases the authors choose to flag images that are predicted to be nsfw instead of removing them and remove only illegal images opensourcing a dataset of this scale will enable many researchers to work on visiontext problems this is especially important since the latest sota models in many visiontext fields are all models pretrained on huge amounts of imagetext or videotext pairs in this context another strength is to not only opensource the dataset but also the collection process and trained models the creation of the vtab benchmark is not discussed in detail in the paper but as a superset of vtab it can be assumed that vtab is a better estimate of zeroshot classification performance in various tasks than vtab a single model was pretrained on laion2ben and shows the usefulness of the dataset over laion400m this is an achievement given the scale of model datasets and compute required 400 a100 gpus 
for 88 hours downstream experiments and their analysis are detailed the ethical implications are well discussed and the authors explain their choices on releasing a dataset containing potentially harmful content in a convincing manner the benefits of the work outweigh the risks in table 2 the original clip vit model is compared with the reproduction on laion the authors state in the introduction numerous groups have further increased clips generalization ability my question would be why there is no comparison to these improved clip models in this table 2 but only a comparison to the original model the reasons for choosing only this model and no others should be stated explicitly so that the reader can assert that the comparison and claims are fair and valid in chapter 52 all models except one are trained on laion400m and only one model is trained on the new laion2ben the reasons on why the models are not trained on laion2ben eg too high compute cost should be stated explicitly in table 2 especially the experiment of model l14 pretrained on laion2ben is missing as there the original clip wit model significantly outperforms the clip laion400m model and the best performance of the original l14 clip wit is never reached the new vtab benchmark is mentioned briefly but lacking details what are the reasons for creating it why is it more useful than vtab how are the text prompts exactly created how does the downstream evaluation pipeline look exactly unless i missed it the benchmark and evaluation code is not public yet at least the supplemental should contain these details in a separate chapter as well as an outlook on whenhow the benchmark and evaluation code will be made public since the opensourcedness of this work is its main strength docsep creation and distribution of a dataset composed of 58b pairs of images and texts in several languages in order to propose to the research community an artifact that has not been shared by scientific publications that have shown the usefulness of such a dataset sharing of different features related to this dataset facilitating its use web interface allowing to search in it sharing of several nearest neighbor indices detection scores for watermark nsfw and toxic content detection experiments showing that the constituted dataset allows to reach similar performances as the ones reported by 34 the authors have built and made available to the research community the largest dataset of image and text pairs moreover this dataset is not limited to 232 billion pairs of image and english descriptions but also contains 2 other subsets 226 billion multilingual descriptions and 127 billion salient but nonconcrete language samples the release of such an artifact is particularly important for the research community as the authors indicated in their paper many recent scientific advances have demonstrated the usefulness of datasets of this nature of several hundred thousand image and text pairs nevertheless in these previous research publications these datasets have not been released and the information about their content is limited this dataset whose constitution is technically complicated and costly will allow more research groups to work on 1 models requiring text and image pairs as input and 2 audit and refinement of this dataset the very large size of the dataset allows future searches to look for the best possible cleaning filters or filters a series of experiments showing that the content of the dataset does indeed allow access to similar performances of clip 34 the 
distribution of all the codes used to constitute such a dataset this represents huge engineering work the addition of metadata for each example of the dataset language nsfw similarity watermark i very much appreciate the presence of section 5 experiments validating the utility of laion5b nevertheless after reading figure 4 and table 2 three questions come to my mind 1 why not have carried out the training of b16 b16 and l14 on laion2ben 2 what would have been the performance of b32 b16 b16 and l14 if the training with laion400m had been continued during several epochs in order to reach the same amount of compute as the training on laion2ben 3 why has the lit method been used for training clip models on laion it makes the comparison of their performance with the original clip model less impactful than if the training method had been identical
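The curation step discussed repeatedly in the reviews above, keeping only image-text pairs whose CLIP embeddings are sufficiently similar, reduces to a cosine-similarity threshold. The sketch below assumes the embeddings have already been computed by a CLIP encoder; the threshold value is illustrative, not the exact cut-off used for each language subset.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return (a * b).sum(axis=-1)

# Hypothetical precomputed CLIP embeddings for candidate (image, alt-text) pairs.
image_embeddings = np.random.randn(1000, 512)
text_embeddings = np.random.randn(1000, 512)

similarity = cosine_similarity(image_embeddings, text_embeddings)
keep = similarity >= 0.28   # illustrative threshold; the actual cut-offs differ per subset
print(f"kept {keep.sum()} of {len(keep)} candidate pairs")
```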
### Summary:
the reviewers agree that this dataset represents a huge undertaking that offers potential opportunities for various applications the paper also provides insightful experiments across a series of use cases models and tasks several reviewers also noted the potential risks of the dataset and the potential for misuse although there is also an acknowledgment of these issues from the authors side and active steps to mitigate these risks authors are encouraged to incorporate the reviewer comments especially some potentially factually incorrect statements raised by ethical reviewers surrounding irb practices as it stands this dataset should be of wide interest across various communities including computer vision and natural language understanding given the careful considerations of the reviewers and the ac this paper is recommended for acceptance in the program the ac also recommends a more thorough review of similar resources before coco and visualgenome there is the pascal1k flickr8k flickr30k sbu1m iaprtc12 imageclef which preceded most other works listed here
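For readers unfamiliar with the zero-shot evaluation protocol mentioned in the reviews of this paper, class names are turned into text prompts and an image is assigned to the class whose prompt embedding it matches best. The sketch below assumes a text encoder exposed as a plain function; this is an assumed interface rather than a specific library's API, and the toy encoder exists only to make the snippet runnable.

```python
import numpy as np

def zero_shot_classify(image_embedding, class_names, encode_text, template="a photo of a {}"):
    """Return the class whose prompt embedding is most similar to the image embedding."""
    text_embeddings = np.stack([encode_text(template.format(name)) for name in class_names])
    text_embeddings /= np.linalg.norm(text_embeddings, axis=-1, keepdims=True)
    image_embedding = image_embedding / np.linalg.norm(image_embedding)
    scores = text_embeddings @ image_embedding
    return class_names[int(np.argmax(scores))]

# Toy usage with a fake deterministic text encoder; a real evaluation would use the trained CLIP model.
def fake_encode_text(prompt):
    return np.random.default_rng(abs(hash(prompt)) % (2**32)).standard_normal(512)

print(zero_shot_classify(np.random.randn(512), ["cat", "dog", "plane"], fake_encode_text))
```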
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper discusses a hierarchical iterated reasoning scheme for learning policies conditioned on the other agents policies at that stage maximizing both the mutual information between agent policies and agent rewards the authors leverage cognitive hierarchical theory wherein each agents has multiple levels of abstraction and reasoning about the actions other agents might take and take a gametheoretic approach to policy learning in which each subsequent policy is the best response based on other agents previous policies the authors proceed to show this approach yields results which outperform several baselines in complex environments such as starcraft ii and multiwalker while there are plenty of interesting ideas in this paper and this kind of approach to multiagent learning that relies both on information theory and game theory seems to hold great promise i find there are multiple issues with the paper that must be addressed for it to be of greater utility to the ml and rl communities first and foremost it seems the authors omit from their literature review and subsequently from their consideration of eligible baselines works relating to two heavily relevant bodies of work work on hierarchical learning in decpomdps for instance amato et al 2019 and work on learning to communicate in multiagent rl for instance foerster et al 2016 or sheikh and boloni 2019 at the core what the paper seems to propose ultimately is a framework for jointly learning to communicate and act simultaneously which is both interesting and useful but again certain related work seems to be missing in that regard consider ghavamzadeh and mahadevan 2004 as a relatively early but still related example a potentially critical consideration which the paper does not seem to explicitly address in its theoretical analysis is that its unclear from the setting and the rather muddled more on that later formulation of the algorithmic approach how the proposed framework avoids the plague of decentralized learning in multiagent rl which is nonstationarity in fact it almost seems baked into the solution that if each policy learned is the best response to the policies at the lower level then nothing is necessarily guaranteed to converge theres also the risk that implicitly the extent of communication baked into the framework renders the approach analogous to single agent learning theory of mind taken to its extreme and it is unclear to me from the paper to what extent this might be the case there is a proof in the supplementary material that eq 2 does converge but because the entire framework is so muddled more on that later its not clear how that resolves the stability issue for the larger framework and also one cannot rely on supplementary material to establish key aspects of their approach in practice the communication setup proposed in the paper as much as i can understand it seems rather contrived and unrealistic this may not be a disqualifying aspect of the approach we often need to make certain simplifying assumptions to establish initial results and study novel approaches but it does raise the question of how well this approach will hold up if certain constraints on communication are imposed unfortunately my biggest issue with the paper is lack of clarity terms are introduced without sufficiently elucidating their meaning in the specific context of the paper even as simple an example as rationalizability right in the abstract section 42 which should be the backbone of the paper is very confusing to read section 43 was even 
harder for me to follow and ultimately by the end of section 4 before discussing the empirical evaluation while i retained a high level mental picture of the general outline of the proposed approach i could not technically describe how the actual learning framework was designed how the hierarchical approach works in practice and how the framework is trained the supplementary material helps somewhat but not enough and again authors should not rely on supplementary material to present any key aspect of their work they are called supplementary for a reason this paper has interesting ideas but is lacking in how it anchors its contribution to related work and compares against it more glaringly this papers suffers from substantial lack of clarity both in terms of its theoretical grounding and more critically in terms of how it presents its core framework how it is designed and how it is applied i believe it should be revised substantially before it is suitable for publication at a venue such as iclr docsepthis work introduces infopg for learning coordination in fully decentralized multiagent games infopg models agent policies following klevel reasoning the authors present a theoretical analysis to show that under infopg policy gradient optimizes a form of mutual information related to coordination empirical demonstrations show that infopg enables better performance than conceptually related baselines with some demonstration of how infopg better adapts to a version of a byzantine generals problem strengths the paper is well written for the most part each section makes its points clearly and the paper is easy to follow insofar as it is defensible see weaknesses below the policy architecture seems straightforward and enables pg to implicitly target an mi metric associated with good coordination the authors provide both a theoretical analysis of infopg although again see weaknesses below and empirical demonstration of its effectiveness infopg accounts for the possibility that not all agents provide meaningful communication weaknesses the abstract and introduction talk about bounded rationality but that concept doesnt seem relevant thereafter much of the paper is devoted to demonstrating the implicit connection between mi of agents actions and the infopg policy formulation this comes across as something of a post hoc motivation for the setup it makes good enough sense why mi and coordination are related but as the paper points out treating mi maximization as the objective is potentially short sighted in sum i question if mi is the best lens to motivateunderstand this approach good performance is encouraging but the most relevant analyses for validating the intuitions of the method are largely ignored in the main text from the description in a9 it sounds like the connection between klevel reasoning and the actual architecture implementation is misleading it seems more appropriate to think of the agents policies as one large joint policy please justify this connection andor clarify the mechanisms by which agents actions become correlated from my understanding infopg with this architecture would not cause the mi between the 3rd and 2nd piston to actually change over time in the bgp experiments that is learning would not bring about any reduction in their mi as the main text describes questionscomments it is hard to understand the aspects of infopg that set it apart from prior work it may help to reconsider where moa and pr2 are first detailed in the section 3 paragraph about mi you write in our work x 
and y are policy distributions of two interacting agents this is vague please be more clear about what the policy distributions are im assuming its the action distribution of each policy but the paragraph is not very concrete consider making it a bit more clear what infopg refers to the klevel encodingdecoding policy definition something to do with the objective perhaps section 43 is sufficiently clear but just as a note i got a little bit confused about what infopg is meant to describe for the bottom row of figure 2 how are you estimating mi it would be good to state that in the paper also what makes this an ablation study my starting recommendation is weak reject i hope the authors can use the rebuttal period to correct any misunderstandings that may contribute to my lower score however assuming i have understood things accurately i am primarily concerned with the notion that infopg is a genuinely decentralized approach it seems to me that the policy is centralized because of how the communication policy works i wonder how much the benefit of infopg simply comes from the fact that each agent naturally gets more information about the state of the world more generally i worry that the work misrepresents itself using phrases like communication and klevel reasoning where they may not be appropriate i am interested in how the authors defend these choices and characterizations i also cannot escape the sense that the mi lens is somewhat post hoc it may be possible to show that other marl algorithms implicitly target mi if it is indeed a corollary of coordination bottom line a lot of the paper is spent on the theoretical connections between infopg and an abstract metric i think that space would have been better spent on empirical analysis again if i have overlooked the importance of this form of mi in these task settings i hope the authors will correct me docsepthe paper proposed infopg by reformulating the individual policy conditioned on state or partial observation to conditional on actions of other agents and with klevel rationalizability from cognitive hierarchy theory the policy is conditioned recursively on lower levels of actions from all agents the optimization of the policy is in the same form of policy gradient algorithm the author showed that with reward shaping or applying relu on advantage optimizing the reformulated policy is implicitly maximizing lower bound of mutual information across agents policies in the case of byzantine generals problem where at least one agent is not cooperative which could be detrimental to multiagent learning the author showed that by removing the relu on advantage the policy gradient on the reformulated policy lead to tuning the upper bound of mutual information the mutual information between the agent in question and the fraudulent agent will has its upper bound tuned down in case of negative reward thus the fraudulent agent wont reduce the uncerntainty of policy of the agent in question thus wont poison the multiagent learning the algorithm is evaluated on a variety of multiagent decentralized learning environments with promising results strong points i find it very innovative 1 to reformulate the policy ps to be pajk aik1 aik1 2 to use the klevel bayesian tree to represent the interaction between agents the experiments are abundant issues 1 line 12 of algorithm 1 how the map sample is carried out for discrete actions one can choose the max posterior action by enumeration what about continuous actions 2 line 19 of algorithm 1 which equation does this 
correspond to in the explanation text line 19 is not mentioned 3 what is jtheta exactly in eqn 2 and 15 could you elaborate on that 4 in appendix a5 full proof of theorem 1 page 17 paragraph between equation 35 and eqn 36 for a bayesian tree why sumx sumy pxxypxyypxy in general the paper is innovative and solid there are several issues as in the main review section that need to be addressed
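A recurring question in the reviews above is how the mutual information between the agents' action distributions is actually estimated (for example, for the bottom row of figure 2). As a point of reference only, here is a minimal sketch of a plug-in MI estimate computed from a table of jointly sampled discrete actions. This is not taken from the paper; the 4-action toy setup, the function name, and the way the joint counts are built are illustrative assumptions.

```python
import numpy as np

def plugin_mutual_information(joint_counts):
    """Plug-in MI estimate from a table of joint action counts.

    joint_counts[i, j] = number of times agent 1 took action i
    while agent 2 took action j.
    """
    p_xy = joint_counts / joint_counts.sum()       # empirical joint distribution
    p_x = p_xy.sum(axis=1, keepdims=True)          # marginal of agent 1
    p_y = p_xy.sum(axis=0, keepdims=True)          # marginal of agent 2
    mask = p_xy > 0                                # avoid log(0)
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])))

# Toy usage: actions sampled from two hypothetical, partially coordinated policies.
rng = np.random.default_rng(0)
actions_1 = rng.integers(0, 4, size=10_000)
actions_2 = (actions_1 + rng.integers(0, 2, size=10_000)) % 4   # correlated with agent 1
counts = np.zeros((4, 4))
np.add.at(counts, (actions_1, actions_2), 1)
print(plugin_mutual_information(counts))   # > 0 for coordinated agents
```

In an actual MARL evaluation, the joint counts would be built from actions the two policies take in the same states, which is presumably the detail the reviewer is asking the authors to specify.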
### Summary:
The paper proposes a method for decentralized learning of cooperative games by maximizing the mutual information between the agents. The paper is novel and interesting and well evaluated. Prior to the rebuttal, most of the reviewers saw presentation as the biggest weakness; specifically, it was not clear what InfoPG refers to and how it is related to the mutual information. During the rebuttal, the authors cleaned up the misunderstandings around the presentation and provided a detailed analysis in the appendix. While the author responses provided helpful clarification and analysis, the authors should revise the paper holistically to remove unnecessary terminology and connections and bring the analysis in the main text.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper proposes a new spectral normalization technique that improves the crosslingual mapping of monolingual embeddings by rigid orthogonal transformations this is demonstrated by consistent gains on bilingual lexical induction and two other downstreeam crosslingual tasks strength the goal is clearly presented and the method is clean and clear the paper is wellorganized and clearly written experimental results show consistent improvement weaknesses limited to preprocessing for a specific crosslingual embedding setup no uncertaintyconfidenceerror bars on experimental results or significance testing addressed some experiments may use a weak baseline addressed the proposed normalization improves performance quite consistently overall and on average even if it sometimes slightly degrades performance on few specific settings this is an excellent sign that the spectral normalization does something reasonable and tends to help however in many experimeents the compared baseline is simply no normalization which is really not a very strong opposition it is not surprising that some normalization helps vs no normalization at all in fact bli results like table 3 suggest that there is a much larger difference between any normalization and no normalization than there is between the various normalizations tested this raises the question of how the proposed sn compares to other normalizations especially on downstream tasks in addition the conclusion and appendix e suggest that training joint crosslingual embeddings compares favourably with the normalizationrigid transformation approach yet only the latter is considered in the main paper in figure 1 in addition to the desirable impact on effective condition numbers and single value gap on all languages the iterative spectral normalization seems to have a massive impact on the effective rank for japanese taking it remarkably close to the er for other languages wherease the raw er was much lower according to the log scale any idea where this large effect comes from or whether this is due to the particular language or dataset re stability of language choice experiments are run on different languages at different points in the paper for example hindi and japanese are used in figure 1 but not later the choice of the four particular language pairs used in the crosslingual natural language inference experiments is justified however it is unclear why some bli results use ar de and nl tab 5 others include it tab 6 questions significantly improves over the baseline p7 what kind of significance testing was run normalizing contextual typelevel embedding p8 it is not clear what was done here i understand some details are available in xukoehn2021 but this paper should arguably provide enough information to be selfcontained and understandable in what way does the proposed spectral normalization generalize previous approaches p9 are there specific choices of settings for which it becomes equivalent to centering length normalization or pca removal this is not obvious especially since some of these have guarantees eg on preserving angle or relative distances that sn does not have typos p2l14 euclidean of cosine presumably or p3l2 what is wlog p6l3 publically p7l14 with cl presumably without is meant tab3 xl1 should be xl2 tab467131819 5k 3k 1k should be 5k 3k 1k lowercase thanks for the extensive reply to this and other reviewers comments much appreciated a simple clean preprocessing method to help map word embeddings across languages the article is clear and well 
organized and extensive experiments produce consistent gains in a number of crosslingual tasks the experimental setup raises a few questions regarding the significance of these improvements docsepthis paper describes three new methods for normalizing word embeddings before crosslingual alignment the first method specnorm caps the singular values of the word embeddings to be twice the average original singular values the second method does mean centering specnorm and l2 normalization the third repeats this for a fixed number of iterations 5 experiments on a wide variety of tasks generally show improvements specnorm seems like a very reasonable idea but it takes a long time to finally present it on page 4 out of 9 i also dont think that pseudocode for specnorm is appropriate its hard to understand but the verbal explanation is far clearer are you able to include some intuition for why capping the singular values is helpful i gather that some of this is given by dubossarsky et al 2020 but to make this paper selfcontained it would be helpful to provide some intuition here as well icsnl is a more complicated method how did you decide on these three normalizations and their order why does iterating help it would be nice if the method could be better motivated the five results tables compare four different sets of systems its especially confusing that sometimes sn means specnorm and sometimes it means icsnl besides being difficult to follow it creates a feeling that the results are selectively reported if possible please include the same set of normalizations for all experiments a much more minor suggestion is that table 4 should be transposed so that the compared normalizations are the rows as in all the other tables the core specnorm method seems reasonable but the iterative centering specnorm length normalization seems more ad hoc although the experiments are thorough and show improvements i would like to see more principled motivation for method andor deeper insight into why it works docsepthe paper proposes an iterative spectral normalization method to improve pairwise crosslingual word embedding alignment the work supposes that improvement to spectral properties in monolingual embedding spaces are sufficient to yield improvements to bilingual lexicon induction crosslingual document classification and crosslingual nli the main contributions are a the introduction of a simple portable algorithm to improve preprocessing of embedding spaces for crosslingual alignment and b empirical improvements in multiple domains of crosslingual embedding oriented tasks strengths the simplicity of the approach and combination with existing preprocessing approaches is promising as a utility for future work empirical improvement on several extrinsic tasks across multiple domains identify a benefit to sn reasonable spectral propertybased metrics are shown to improve in the appropriate direction when using this method as opposed to methods that do not consider such properties the breadth of domains of study is commendable as the authors have clearly taken great care to illustrate that their method has wide applicability the appendices show further breadth and the argument for the utility of the method is strongly presented weaknesses this method shows merit for the reasons given above but i can describe my primary concern as this you have proposed solution x for some method a claiming that a is now improved due to a reduction in effect z but you do not demonstrably support that solution x influences effect z or 
highlight if effect z is a critical failure in a therefore while empirical results are strong it remains unclear what solution x does to improve a even if the influence of x on a is not well understood this should be delineated further to support that the empirical claims require further introspection the work does not make a selfcontained case for what spectral normalization is a solution towards the problem is poorly defined which leads to a gap in understanding the motivating strategy of the work for example in section 3 the work states as a result if the spectral properties are extreme it can help regularize them but it is not clear what it means to i regularize such properties or ii what this intends to do with respect to the embedding spaces this could be improved if the problem hypothesis was clearer eg answering questions such as are spectral properties of monolingual embedding spaces malformed can this be demonstrably proved in this work so the problem to be solved is clear are there confounding examples within embedding spaces to motivate this eg unexpected versions of the common kingqueenmanwoman analogies based on lowfrequency words what does spectral normalisation hope to intrinsically accomplish such that the findings can be better interpreted care is not taken to explicitly link together the spectral statistics to additional methods or to explain either what spectral metrics are informing the reader of in 22 it is somewhat of a foregone conclusion that spectrumbased metrics will improve with a spectrumbased method while this section relies on the insight of dubossarsky et al 2020 there is little effort in interrogating the prior work and motivating why prior insight is directly applicable here asis at present this must be taken at face value with little motivation for the succeeding sections this caveat is somewhat improved in appendix b but i consider this insufficient without a clear hypothesis tying the motivation of this approach together overall there is a lack of research questions in the work which leads to a difficulty in scoping the contribution sn is clearly useful but why we observe empirical benefits to several tasks but the work does not source these benefits to specific components or processes within the method this could be improved with intrinsic analysis and some error analysis identifying the effect of sn this is hinted at in section 4 where it is stated that we hypothesize this is because it is somewhat uniformly stretching words along the major modes of variation however this could be expanded to aid in understanding of the mechanics of the method unsupported claims every claim should be supported with evidence in literature or in the work there are several instances of claims without proper argument yet previous work has clearly demonstrated that there exists significant overall structural similarity and alignment seeks to make correspondences between those structures for translation and joint understanding i would not consider the arguments of prior work as sufficient for motivation without a indepth of explanation in this work this work is not an appendix on prior publication effectively removes much of the clumping of words based on properties other than the pairwise similarity which encodes meaning this term is not introduced or explained as a consequence or issue within clwe prior art it is unclear what this means or how it is required to be mitigated for better clwe this buttresses the claim made by xu koehn 2021 that improving the degree of 
isomorphism it is unclear where you demonstrate that you improve isomorphism with your method if your iterative method subsumes methods already maintainingimproving isomorphism then it would be beneficial to demonstrate that such properties are maintained if this is intended to be obvious please state so our approach generalizes previous approaches this is not proven see previous comment and little care is taken to show that sn does not interfere with complementary components like length normalization questions in 21 what does longer vectors refer to here when you are explicitly working with fixedlength vectors it could be clearer when reduction methods like pca are cutting up embedding spaces and when frequency methods are used over others what is the difference between matrix x in 2 and a in 22 it is unclear why these are redefined section 3 what is critical information in reference to it appears some assumption is made here but this is unclear was a stopping condition explored in the iterative algorithm as opposed to a fixed number of iterations it is stated that the vocabularies are trimmed to the top 200k words but it would be interesting to see how snbased methods influence the long tail of infrequent vocabulary was this considered were multilingual l2 alignments considered please address the unsupported claims and issues raised earlier is isomorphism maintained and can you verify that your method subsumes prior methods with absolute certainty what is meant by clumping what are the motivating failings of prior work that has led to the sn method what is the intended intuition and understanding of regularizing an embedding space typosstyle not intended as criticism page 3 why state wlog here sn references both the iterative method and the sn method alone without clear discussion of the difference so should be more clearly separated when you are referring to each method equation numbering gets lost in appendices there is repeated usage of embedded clauses without clear demarcation eg this in some sense allows this in some sense allows this is generally more of a spoken pattern and could be omitted entirely procrustes alignment based methods are inconsistently referenced as a problem solution or alignment with additional unclear italicization these could be streamlined for consistency imprecision in technical descriptions terminate this iterative process after a few steps what is few here this is additionally challenging in section 4 alignment algorithms inline equation labeling for 21pca removal for d is lost figure 1 every key in the figure should be explained in the caption it is not a given to recall what icl means in contrast to cl inconsistent spelling of normalization the recommendation is based on the lack of introspection into the method itself rather than the empirical improvement the work could be improved if it was better understood what the method is doing and how this ties into performance improvements i believe the direction of this work is novel but overall there is a lack of appropriate setting which ultimately makes it unclear what problem sn solves and how this fits into existing preprocessing approaches the case for sn is not fluently wellmotivated and there is little analysis to confidently support that sn is superior to prior art or generalises existing approaches i support the merit of this method with empirical utility and wide applicability however i recommend that significant further revisions are required prior to publication in a later venue docsepthis paper 
studies learning crosslingual word embeddings clwe with alignment mapping two pretrained monolingual word embeddings to a shared space with a linearorthogonal mapping previous work shows that normalization methods and the spectral properties of monolingual word embeddings have a big impact on alignment methods building on these results the paper proposes a normalization method that regularizes the spectral properties of monolingual word embeddings the method is then combined with mean centering and length normalization in an iterative process empirically the proposed normalization method improves alignment and leads to better clwe measured by scores on bilingual lexicon induction crosslingual document classification and natural language inference strengths the proposed method is simple to implement the experiments are extensive and confirms that the proposed normalization method consistently improves crosslingual word embeddings weaknesses the paper may have limited impact while the paper does a good job improving alignmentbased clwe these models are outperformed by multilingual transformers eg multilingual bert and xlmr in many applications for example the xnli results reported in the paper are much worse than that from xlmr therefore it is unclear if the proposed method is useful for practitioners one way to improve the impact is extending the method to contextualized embeddings but this is nontrivial the intuition behind the method is not immediately obvious the paper may be clearer if there are more intuitive explanations of why the spectral statistics in section 22 helps alignment it is also helpful to have a more explicit discussion of why the proposed method improves the spectral properties from section 22 the paper proposes a simple method for improving clwe with good empirical results however the impact may be limited as multilingual transformers are replacing clwe in many applications therefore my recommendation is weak accept docsepthe authors propose a normalisation method for crosslingual text representations the goal is to normalise the monolingual embeddings based on spectral normalisation the main contributions are novel method to normalise word embeddings the proposed method includes different normalization approaches and the proposed method improves performance on intrinsic and extrinsic evaluation tasks the study shows that produced text representations keep their meaning and improve performance on downstream tasks strengths clear description of background knowledge and motivations needed to understand the proposed representation model clear exposition of the proposed method the authors perform a comprehensive comparison across different benchmarks and downstream tasks questions to the authors please address the following questions during the rebuttal please elaborate on how cross validation could affect the results in the cases where the scores between methods are close could you elaborate on the setup for selecting the hyperparameters for the proposed model given that is based on cross validation how the model would behave on a low resource setting could a visualization of the learned representations be useful for the word similarity embedding tasks comparing with and without normalisation a possible extra contribution could be to highlight the singlecell task results on main text i recommend acceptance given how clear the paper describes related work motivations and proposed model the authors perform an exhaustive evaluation of the proposed model with different 
language downstream tasks moreover the model outperforms related work on the downstream tasks
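To make the method under review concrete, here is a minimal sketch of the iterative centering / spectral-normalization / length-normalization loop as the reviews describe it (singular values capped at twice their average, repeated for five iterations). It is an illustration under stated assumptions, not the authors' implementation: the exact cap, the order of the three steps, and whether the cap is computed from the original or the current spectrum may differ in the paper.

```python
import numpy as np

def spectral_normalize(X, cap_factor=2.0):
    """Cap the singular values of an embedding matrix X (n_words x dim)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_capped = np.minimum(s, cap_factor * s.mean())   # cap at twice the average singular value
    return (U * s_capped) @ Vt

def iterative_normalize(X, n_iter=5):
    """Mean-center, spectrally normalize, and length-normalize, repeated n_iter times."""
    for _ in range(n_iter):
        X = X - X.mean(axis=0, keepdims=True)                   # centering
        X = spectral_normalize(X)                               # spectral normalization
        X = X / np.linalg.norm(X, axis=1, keepdims=True)        # unit-length rows
    return X

# Toy usage on random vectors; a real run would load monolingual embeddings for each
# language and normalize them before orthogonal (Procrustes) alignment.
emb = np.random.default_rng(0).normal(size=(1000, 300))
emb_norm = iterative_normalize(emb)
```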
### Summary:
The authors propose a normalization method for cross-lingual text representations. The goal is to normalize the monolingual embeddings based on spectral normalization. The study shows that the produced text representations keep their meaning and improve performance on downstream tasks. There is disagreement among the reviewers; the main concern is whether the main contribution is an empirical study or a novel idea. I think the authors addressed the concerns of most reviewers well, and the idea and empirical study are enough for publication at ICLR 2022.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
summary this paper introduces a model to improve semantic segmentation by using a limited amount of pixellabeled data and unlabeled data or imagelevel labeled data the authors use a selfattention gradcam sgc and segmenter to generate the pseudolabels during training the approach shows good results on pascal voc and coco datasets and is well analyzed reasons for score i do not think the technical contribution is strong enough for iclr the paper is incremental and the proposed approach is a combination of a lot of existing approaches but i also want to highlight that the experimental section is strong and detailed pros the idea of using pseudolabels is interesting because it allows to build larger dataset without increasing the annotation cost the approach is evaluated in the settings of using unlabeled data and using imagelevel labeled data the ablation study section gives a lot of details about the model the authors analyzed a lot of things expected calibration error hypercolumn feature soft vs hard label temperature sharpening color jittering strength backbone architecture the approach shows good results on pascal voc and coco datasets the proposed method achieves good performance in the lowdata regime cons the overall approach seems incremental because it is a combination of a lot of existing approaches and there is not a strong technical contribution for instance the model uses several loss functions and all the losses are jointly optimized i think the related work section should be in the main paper instead of the supplementary i feel some parts are a bit difficult to read because of some misleading information for example the title of section 31 is experiments using unlabeled data but the model still uses some labeled data docsepsummary this paper focuses on the problem of semisupervised semantic segmentation where less pixellevel annotations are used to train the network a new onestage training framework is proposed to include the process of localization cue generation pseudo label refinement and training of semantic segmentation inspire by recent success in the semisupervised learning ssl a novel calibrated fusion strategy is proposed to incorporate the concept of consistency training with data augmentation into the framework experiments on pascal voc and mscoco benchmarks validate the effectiveness of the proposed method pro the proposed onestage training framework is elegant compared with two stage methods in this area which include one step for pseudolabel generation and another step for refinement then semantic segmentation training the new designed calibrated fusion strategy well incorporate the concept of consistency training with data augmentation into the same framework achieve a new stateoftheart on both pascal voc and mscoco benchmarks compared with recent semisupervised semantic segmentation methods questions cct ouali et al 2020 includes the consistency training with perturbations which can be treated as a kind of data augmentation on features im wondering if authors can provide some insights about why the proposed method can achieve better performance than cct when they both include the consistency training and data augmentation in the designs in table 3 i suggest to include the segmentation framework used by each method in the table in early works old version of deeplab is usually treated as the standard i understand using deeplab v3 is a fair comparison with cct it would be good to make this information clear in the table it is also suggested to report the performance 
on pascal voc test set as it is a common practice in this area although cct does not do so sine the unlabeled data training branch does not rely on any pixellevel annotations im wondering if the proposed method can also work under weaklysupervised setting where no pixellevel annotations are available during the training docsep summary this work addresses the task of semisupervised learning ssl in semantic segmentation following recent sotas in ssl this work also advocates for the use of pseudolabels on unlabeled data and heavy data augmentation the main novelty of this work is the novel way to construct higherquality pseudolabels besides the pixelwise classifiers probabilistic outputs the authors leverage as well cambased activation maps named as sgc as an additional pseudolabel source the final set of pseudolabels is determined by linear combining the two soft pseudolabel sources with temperature adjustment the authors conducted extensive experiments with lots of ablation studies to validate the proposed framework strengths the paper is wellwritten easy to follow extensive experiments with adequate discussions improvements over sotas on the addressed benchmarks weaknessconcerns does distribution sharpen operation always use temperature 1 if yes what is the reason how is the temperature t is chosen may the authors produce a performance analysis over t in sec 34 its not clear to me the advantage of proposed method on boundaries cambased activations mostly focus on most discriminative areas usually inner areas so hardly sgc can find pseudolabels on boundaries why does the proposed method have an advantage there more and more segmentation works report results in urban datasets like cityscapes or camvid it would be interesting to see results on those datasets one interesting aspect in urban datasets is the natural longtail class distributions which severely damages performance on minor classes especially in lowdata regime
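As a rough illustration of the pseudo-label construction the reviews discuss (linearly combining the segmenter's per-pixel probabilities with CAM-style activations and sharpening with a temperature), here is a small sketch. The convex-combination weight, the sharpening temperature, and the assumption that the CAM activations are already normalized into a per-pixel class distribution are illustrative choices only; the paper's calibrated fusion is more involved than this.

```python
import numpy as np

def sharpen(p, T=0.5, axis=0):
    """Temperature sharpening of a per-pixel class distribution p of shape (C, H, W)."""
    p_pow = p ** (1.0 / T)
    return p_pow / p_pow.sum(axis=axis, keepdims=True)

def fuse_pseudo_labels(p_seg, p_cam, w=0.5, T=0.5):
    """Fuse segmenter probabilities and CAM-style activations into a soft pseudo-label.

    p_seg, p_cam: arrays of shape (C, H, W) that sum to 1 over the class axis.
    w:            fusion weight (an assumption; the calibrated weighting may differ).
    """
    fused = w * p_seg + (1.0 - w) * p_cam
    return sharpen(fused, T=T)

# Toy example with 21 classes (PASCAL VOC style) on a 4x4 crop.
rng = np.random.default_rng(0)
seg = rng.random((21, 4, 4)); seg /= seg.sum(axis=0, keepdims=True)
cam = rng.random((21, 4, 4)); cam /= cam.sum(axis=0, keepdims=True)
pseudo = fuse_pseudo_labels(seg, cam)
hard = pseudo.argmax(axis=0)   # hard labels, if the training pipeline uses them
```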
### Summary:
The authors introduce an approach for designing pseudo-labels in semi-supervised segmentation. The approach combines the idea of refining pseudo-labels with self-attention GradCAM (SGC) and calibrated prediction fusion, and consistency training that enforces pseudo-labels to be robust to strongly-augmented data. The reviewers overall like the idea and point out the good level of performance obtained by the method in the challenging semi-supervised context; however, they also point out the limited novelty of the approach and the need for better positioning with respect to related work. After the rebuttal, reviewers were satisfied with the authors' answers and paper modifications, and all recommend acceptance. The AC considers that the submission is a nice combination of existing techniques and likes the simplicity of the one-stage approach, which reaches good performance. Therefore, the AC recommends acceptance.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposes a practically feasible algorithm for the average reward setting in rl the key idea is to propose a lower bound policy improvement objective for the average reward setting compared to the more popular and general discounted reward setting this work follows directly from previous impactful works on policy gradients with trust region optimization trpo and shows why previous approaches cannot be directly applied for the average reward criterion this is a significant contribution in rl research for the average reward performance criterion and opens doors for future works in this direction that might be impactful to the community overall this is a good paper there are few major and minor concerns i have about the contribution mostly from an algorithmic point of view which are described in details below overall comments this work provides monotonic performance improvement guarantees by deriving a lower bound policy improvement criterion for the average reward setting it is well known even though less significantly realized that the existing policy improvement theorem for the discounted reward setting leads to a trivial bound for the average performance criterion this work builds from that and derives a novel result which can bound the average reward performance objectives based on the divergence between two policies as mostly done in a lot of policy optimization algorithms eg trpo and many variants building from trpo additionally the authors propose a constrained optimization variant of their algorithm building from the cpo algorithm achiam et al and shows that the average reward criterion can also be used for policy optimization under cost constraints the paper is well written and easy to follow with the key theoretical contributions clearly defined it builds from the well known proposition from achiam et al for the upper and lower bounds for the performance difference lemma proposition 1 lemmas 13 are known results adapted for the average reward setting including the performance difference lemma for the average reward objective the key result is shown in theorem 1 which bounds the performance difference based on the average reward criterion but reduces to the divergence between policies only equations 9 and 10 are extensions from achiam et al adapted for this setting with the typical assumption that the changes in occupancy measures between pi and pi are insignificant this is a standard assumption assuming minimal state distribution shift between the two policies as typically done in trpo cpo and variants experimentally this work nicely demonstrates the significance of the average reward criterion in continuous control tasks which are adapted for a continuing setting instead of the typical episodic setting with reset states under this modification the authors compare to trpo and cpo with different discount factors in the continuing environment and shows the significance of optimizing the average reward lower bound objective the authors do not compare to other well known policy gradient baselines which i believe is fine in this case since the key contribution is to propose an algorithm applicable for continuing environments and modifications on top of their atrpo algorithm can be made as required to compare to other well known existing baselines even though the experimental results are less exhaustive i think this is not too of a problem compared to the significant algorithmic contribution this paper introduces few issues and comments for improvement i think the major 
drawback or clarity this paper requires is to describe their algorithmic steps more clearly at first glance there are few issues which seems unavoidable and it would be better if the authors can clarify on these these are as follows lemma 3 bounds the divergence between stationary distributions in terms of the divergence between policies this result is adapted as shown in appendix following from achiam et al and is known from other papers that such divergence between stationary distributions can be bounded lemma 3 equation 8 shows that this bound depends on the linfinity norm between the transitions under the policies proof of lemma 3 is easy to follow my major concern is how is this term in equation 8 dealt with in the overall algorithm theorem 1 following from the stated lemmas depends on this as highlighted in the epsilon term and in algorithm 1 equation 12 it seems that the assumption that the stationary distributions between pi and pi are close in this case is less justified in the average reward setting compared to the discounted setting can the authors comment on this in the discounted setting the normalized occupancy measures between two policies can perhaps be assumed to remain close as done in trpo however does the same hold for stationary distributions if so why would that be i think this is a major assumption i do see why in the overall algorithm this assumption is made since otherwise finding the sensitivity of stationary distributions wrt changes in policy parameters might be difficult to compute but theoretically it would be better if authors can comment on it section 51 outlines the key algorithm and steps equation 13 shows the overall objective with kl constraints which at first glance is almost equivalent to the discounted reward setting however as pointed out later the advantage function takes account of the average reward and the overall algorithm can be naturally extended from trpo and cpo however i think it would be better if the authors comment on the above issues and how is the overall algorithm implemented in appendix c there are discussions on the critic estimate if this was to be extended in an actorcritic setup with lambda returns however appendix c does not really give anything meaningful as expected it seems that the overall difference is in equation 33 where the target is now modified with the average reward this makes the algorithmic contribution clean but i wonder how is this implemented in practice the target now requires computation of the average reward for every sa pairs this seems to be a bottleneck and comes to the major drawback for the average performance criterion anyway how is this avoidable overall i think the algorithmic implementation of this is not clearly explained it seems there are major steps ommited in the overall description of the algorithm and this is partly also because the authors propose atrpo and acpo algorithms which are two different contributions itself i think it would be better if the key algorithmic idea and implementation details are rather included in the main text so that the significance of the work can be better highlighted and perhaps the extensions with cost constraints can be moved to the appendix summary i think overall this is a good paper with some clarities that are still required from the authors this is a good contribution extending existing impactful works from the discounted reward setting to the average reward setting the authors propose theoretical justifications as well as a practically feasible algorithm 
if the authors can clarify some of the major concerns i have given my understanding is correct i am willing to further increase the score however as it is i would recommend marginal acceptance and open to discussions with other reviewers and authors to clearly understand the significance of the work docsepthis paper advocates for the use of the average reward objective in long horizon continual reinforcement learning settings and it examines whether a limiting argument for the discount is sufficient for extending theoretical results from the discounted setting to the averagereward setting in the case of policy optimization the paper shows the monotonic improvement bound from schulman et al 2015 becomes vacuous as gamma approaches one the rest of the paper presents theory to support a new nonvacuous bound which leads to a new algorithm analogous to trpo atrpo the paper additionally shows how their main theorem can be applied to constrained mdps in section 52 empirical results are provided in three mujoco domains modified to provide nonepisodic experience i found many strengths in the paper for one it was enjoyable to read and fairly well organized the authors presented their ideas clearly and they appeared logicallyconnected throughout the paper the paper is also well motivated continual reinforcement is a broad area of open research where new advancements stand to improve the generality of our current aiml systems in that setting the average reward objective for mdps seems most natural but despite the existence of averagereward rl algorithms many of them are not well theoretically well understood eg r learning and as the authors bring up it is not known if more current theoretical results about policy optimization apply to the averagereward setting there are few minor issues i have with the presentation and some questions i have regarding the experimental results pending a satisfactory response to these issues which i describe below this paper could be ready for publication although the results of section 52 seem useful for those interested in constrained decision making processes they also seem fairly anecdotal in relation to the papers main claims and for what the main paper presents the claims supported with substantial empirical evidence only use atrpo the acpo results have been tucked away in the appendix i suggest the authors remove this material since the atrpo results are sufficient to support your main claims and considering the constrained mdp setting in addition to average reward and discounted is too much for the length of a conference paper section 7 presents empirical data showing the performance of the proposed algorithm atrpo and compares it to trpo with different discount factors the presented results seem positive suggesting that atrpo finds useful policies within the considered data regime however this section is terse and so i have a few questions for the authors im not yet convinced the evaluation is entirely sound the performance of an average reward agent is compared to discounted agents using their respective returns averaged over the last 1000 steps of their 100 most recent trajectories i take issue with this approach for several reasons 1 it is not clear that it has a meaningful connection to the averagereward criterion why not use the average reward which is what the paper advocates for as the natural metric in continual settings 2 restricting the evaluation to the last 1000 steps or trajectories of length 1000 authors please specify biases the outcome against 
discounted agents whose effective horizon is 1000 or greater these agents are trained to maximize cumulative reward on horizons greater than or equal to 1000 but then they are evaluated on horizons less than that 3 in a continual setting there is no separate evaluation phase learning continues indefinitely therefore it seems more appropriate to report the reward averaged over a horizon where sufficient mixing has occurred rather than report offline evaluation performance it is not immediately clear to me that the modifications to mujoco result in a well posed continual learning problem can the authors please clarify if the maximum episode length cap was removed for training can they also clarify exactly how the start states were sampled are there any aspects that change or remain the same between falling down and restarting a complete characterization of the factors of variation would be helpful here i do not have a good sense for what data is being plotted in figures 1 and 2 the data seems to vary between the plot markers so my question is what do the plot markers represent has this data been smoothed at all if so then can you please provide more details so we can understand how the data was processed the paper does a good job of positioning its contribution with respect to prior work throughout the text though it wouldnt hurt to expand the related work section so it reads less like a list of facts the goal would be to help readers understand the contribution by comparing and contrasting it with what others have done it would be helpful to include a video showing the execution of the policy as it is being learned this would allow reviewers to verify if the learned averagereward polices are actually useful and they do not simply achieve better evaluation metrics than the presented baselines the questions identified in the beginning of section 7 could be more specific for example which performance criterion is more suitable for continuing control tasks this question strikes me as something that has been answered in the affirmative throughout the long history of averagereward and infinite horizon discounted mdps this is also mentioned in two references you provide sutton barto and naik et al 2019 could you be more specific about what you are trying to address here how this is different than what others have shown and about how question 2 is different from question 3 many of the theoretical results cite a set of class notes achiam 2017 i suggest the authors remove this citation and reproduce the results needed for their arguments in the appendix this provides a way for those to be peer reviewed and for future work to reliably cite them docsepthis paper identifies an important problem policy optimization in undiscounted continuing tasks this is indeed important since the discounting factor may not be appropriate in certain applications such as health care or robotics i feel the main contribution of this paper is theorem 1 an average reward version of the monotone improvement theorem naively applying the discounted version results in a trivial bound my concern is about the algorithm 13 that is almost the same as trpo with a bit straightforward extension on the estimation for the advantage function in the constrained form the kl radius is viewed as a tuning parameter but in the regulaized version 12 the xi involves a complicated form involving true p to guarantee the monotone improvement this does not make very much senses to do this approximation note that in trpo there is no true p involved a 
large body of policy optimization works that focus on averagereward is perhaps missing 1 online markov decision processes 2009 2 online markov decision processes under bandit feedback 2010 those two assumes the p is known but the average reward performance difference lemma has been explicitly derived in those two more recently politex regret bounds for policy iteration using expert prediction considered a fully rl setting and averagereward setting with guarantee thanks the authors response based on the originality of theorem 1 i increase my score by 1 but still a bit worry about the sufficient algorithmic contribution beyond trpo docsepsummary the paper proposes an extension of the performance improvement bound introduced for the first time by kakade langford 2002 to the case of average reward performance index instead of discounted return the paper starts with a theoretical contribution in which all the steps of the original derivation are adapted for the new setting leading to a new performance improvement bound then this theoretical achievement is employed to derive the corresponding adaptation of trpo called average reward trpo and cpo called average cost cpo finally an experimental evaluation on some mujoco domains is provided major about the fundamental matrix the main difference between the performance improvement bound for the discounted return and the one presented in this paper is the constant that multiplies the expected totalvariation divergence between the policies while for the discounted case this constant is a function of the discount factor and of the maximum advantage in the average reward case it depends on the maximum advantage as well and on the norm of the fundamental matrix of the markov chain the authors clarify that this matrix is guaranteed to exist for regular chains however the value of its norm although finite can be very large from a practical perspective this has limited implications as the objective that is actually optimized translates the penalization into a constraint ignoring this constant nevertheless it would be interesting if the authors could elaborate more on the properties of this matrix maybe providing some bounds of its norm if possible in terms of more manageable quantities organization i have some concerns about section 52 first the change of perspective from rewards to costs might not favor the clarity of the paper second concerning average cost cpo no experiments are reported in the main paper but in the appendix only moving also section 52 to the appendix and reserve more space for the proofs of the theoretical results in the main paper might improve the organization experiments i have some concerns about the interpretation of the experimental results especially on how atrpo is compared to trpo the two algorithms are designed to optimize different objective functions the average reward and the discounted return respectively the plots that are shown in figures 1 and 2 report the return which i assume to be the undiscounted sum of the rewards over 1000 and 10000 steps respectively i am wondering whether drawing conclusions based on this performance index is meaningful indeed optimizing the undiscounted sum of the rewards is closer to optimizing the average reward rather than the discounted return and i am not surprised that atrpo outperforms trpo can the authors elaborate on this point and explain why they think that the comparison is fair minor in section 2 the paper requires that the state and action spaces are finite is this assumption 
really necessary to derive the presented results in the experimental evaluation section the algorithms are tested on the mujoco suite that is made of continuous stateaction environments equations 9 10 12 sometimes the state below the expectation is bold overall i think that the paper can be considered incremental compared to trpo and more in general to the papers that study the performance improvement bounds in the discounted setting moreover i have some concerns about the interpretation of the experimental results therefore my current evaluation is borderline
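The discussion above of how the critic target is "modified with the average reward" can be grounded in the standard differential (average-reward) TD(0) update, where the target uses the reward minus a running average-reward estimate in place of a discount factor. The tabular sketch below only illustrates that standard form, not the paper's deep actor-critic; whether the paper's equation 33 matches it exactly, and the particular step sizes and the use of the TD error to update the average-reward estimate, are assumptions.

```python
import numpy as np

def average_reward_td_step(V, rho, s, r, s_next, lr_v=0.1, lr_rho=0.01):
    """One tabular differential TD(0) step: the target uses r - rho instead of discounting."""
    delta = r - rho + V[s_next] - V[s]   # differential TD error (advantage estimate)
    V[s] += lr_v * delta
    rho += lr_rho * delta                # track the long-run average reward
    return V, rho, delta

# Toy two-state chain: state 0 gives reward 1, state 1 gives reward 0, deterministic cycle.
V, rho = np.zeros(2), 0.0
s = 0
for _ in range(10_000):
    s_next = 1 - s
    r = 1.0 if s == 0 else 0.0
    V, rho, _ = average_reward_td_step(V, rho, s, r, s_next)
    s = s_next
print(rho)   # close to the average reward of 0.5
```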
### Summary:
This paper proposes an extension of the monotonic policy improvement approach to the average-reward case. Although the reviewers acknowledge that this work has merits (well written, clearly organized, well motivated, technically sound), they have raised several concerns which have been only partially addressed by the authors' responses. In particular, Reviewer 4 is still concerned about the discrepancy between the theorem and the implemented algorithm: the proposed simplification used in the implementation boils down to an algorithm that is very similar to TRPO, making the contribution quite incremental, as also stressed by Reviewer 1. Furthermore, I share the concerns raised about the fairness of comparing algorithms that optimize different objective functions. I suggest the authors take the suggestions provided by the reviewers into serious consideration in order to produce an improved version of their work. The paper is borderline, and I think it needs another round of fresh reviews before being ready for publication.
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
summary the authors of this paper make a connection between the tderror from a single unit of experience and various metrics of improvement for agents trained with prioritized experience replay and qlearning or soft qlearning they show that priorities based on tderror are indeed sensible and show that a small adjustment to tderroraspriority for soft qlearning agents is both theoretically sound and can yield improved performance the setup and motivation of the paper is relatively clear and wellexplained one reason i like the paper is that the process is quite straightforward 1 strive to better understand a commonlyused algorithm 2 derive theory with reasonably good intuition behind it 3 show that it empirically holds true on simple environments and 4 the better understood result also yields modest improvements on a test suite there are a few areas i would consider rewriting or rewording for added clarity for instance the thesis of the paper relies on a sensible definition of value of experience and this isnt made concrete until later though perhaps this is hard to do in the introduction as a small nit i think the extra use of the word surprise was at first a bit unclear especially as it aliases tderror unless i missed something i think an extra paragraph about why we should use surprise as the correct metric for prioritization would be helpful notably that when we know it upper bounds the three metrics we want to continue prioritizing sampling experiences when training our agent because this will yield faster learning since we will have larger improvements to our agent however this dovetails into an additional few questions that could be followed up here is there a correct temperature with which we sample experiences from our prioritized replay that is ideal should we not be sampling experiences and simply sorting the experiences by priority and training our agent on experiences in descending order of priority why shouldnt we do this how much worse is this than using uniform sampling i understand that some of these questions are hard to answer with limited compute but some mention of this would give the paper more depth additionally there are a number of small issues in the writing notably towards the end a careful read of the paper for proper grammar making sure the right propositions are used and that there are no missing words in sentences would help the flow and readability of the paper greatly while the overall result is a nice one i believe the paper has somewhat limited scope it doesnt fundamentally change how we should approach training our agents with a replay buffer in fact i suspect that tderror was used by the original authors because they knew of a link like this or had strong suspicions of it i think what this paper should do is explore or at least pose a gamut of interesting followup questions about the role of replay and how best to use it is it possible that there is instead a lower bound we can derive can we learn a samplingprioritization scheme by gradient descent that somehow does better are there prioritizations completely disjoint from tderror that we should consider using in addition while limited resources might make more empirical investigations challenging its also worth understanding how other commonly used algorithmic mechanisms in deep rl interact with prioritization such as stepping environments in batches or the preprocessing done to observations or things like reward or advantage clipping i think that if the paper showed more evidence of zooming out and 
thinking deeply about the core problem this would be an excellent paper docsepthis paper tries to interpret per from the lens of replaying experience that has the most surprise and shows how it connects to some notions of value such as the expected value of the backup and policy improvement value and evaluation improvement value and argue the authors also derive a maxent version of this and show that this can improve performance on some atari games though this is not that convincing my score for this paper is based on these points motivation i do not see the motivation for introducing these metrics and why that explains per in the first place agreed that per is a reasonable choice and it can upper bound the eib and evb metrics i have issues with this too more on this next it just seems to me that the paper doesnt make any convincing claim for why this helps us understand why per works if the focus of the paper is on understanding per then the paper does not do a good job of it if it is to introduce these prioritization based on these metrics and the paper focuses entirely on them then i then have several concerns next definition of the value metrics the cited definition of these metrics requires using the true qfunction or the true value function of the resulting policy if we end up approximating this using the learned model what is the guarantee that these metrics are indeed useful also theorem 1 should be restated to say that they care about the empirical eib and evb that is computed using the learned qfunction else it doesnt make sense to me moreover if the td error is a bound which i think isnt with neural networks as i discuss in the next point on the empirical evb cant i just drastically overestimate qvalues and get a larger empirical evb value to be super high and prioritize on those examples why is that good why wont that promote overestimation why is the update on the qvalue assumed to be tabular if the experiments are with a deep network on atari in a nontabular setting theorem 1 does not hold so either that should be rederived for the case of dqn or the experiments should be adjusted to do it on tabular settings in any case now it is not clear to me why the method works with dqn since the update in this setting isnt equal to qs a leftarrow qs a alpha tds a in general the solution isnt known with neural networks so the upper bound story doesnt hold there with the ntk jacot et al assumption i can obtain a somewhat similar update but preconditioned with the kernel gram matrix see achiam et al towards characterizing divergence in deep qlearning however theorem 1 doesnt hold anymore now so it is unclear why the method works even if i were to look at the experiments only the results are not that impressive the method is generally close to per and maybe a little better but no comparison is made on a more efficient method such as rainbow and there are only 9 atari games which is too little so that is not super convincing yet i would suggest the authors make some of the changes abovedocsepthis work aimed to understand the prioritized experience replay a widely used technique to improve learning efficiently for rl agents the authors proposed three different value metrics to quantify the experience and showed that they are upper bounded by the td error up to a constant the extension to soft qlearning was also presented finally the authors showed in experiments that derived upper bounds hold in maze and cartpole they also demonstrated that a new variant based on the upper bound achieved 
better performance on a subset of atari games the authors tried to achieve a deep understanding of the prioritized experience replay which i believe to be an important task however after reading through the paper i am afraid that the question why prioritized experience replay works so well in practice is not well addressed the authors provided the three metrics and provided their upper bounds unfortunately they are not sufficient to help the understanding as the upper bound derived is just the td error used in the prioritized experience replay i also do not see enough depth for these theoretical results and the presentation could be improved as well the experiments indeed showed some practical benefits but descriptions are confusing sometimes my detailed comments and questions are as follows 1 when defining evb piv eiv in eq 3 eq 5 sk is used however in the derivation of theorem 31 why is s used they should be the same if both are from ek 2 the introduction of function approximators in the last paragraph in section 2 is confusing how are they used in the following sections also why does the assumption as if the parameterized qfunction converges to its target value hold 3 for theorem 31 it looks to me that a tighter upper bound according to the derivations should be alpha td why did you omit alpha 4 in the derivation for eq 7 the authors claimed that the third line is because the increase in qfunction resulted from greedy policy improvement will not exceeds the surprise times the learning stepsize could you elaborate more on why this is the case 5 for ver in section 53 which upper bound does it use i guess it may come from theorem 41 but need more clarification also what are the exact differences between ver and perdocsepsummary the idea of prioritized experience replay is revisited but from a new perspective with new theoretical results here the authors propose the expected value of backup evb as a metric to assess the quality of a sample and its potential improvement on the policy and on the value function the authors decompose this metric into the benefit attributed to the policy and benefit to the value function the authors have two theorems the first theorem shows that the surprise aka the temporal difference error is an upper bound of the evb in the qlearning setting the second theorem shows that the surprise multiplied by a constant that depends on the policy is an upper bound to evb in the soft rl setting the authors demonstrate that the proposed tighter bound on the evb could yield improvements in the soft rl setting pros of this work the work tries to tackle an important and still not fully answered question which is why prioritized replay works well in the dqn setting but not in the soft rl settings making very nice connections to the existing empirical literature on the topic and a suggestion of how to improve per the paper is very clearly written and the motivation of addressing per from a rigorous perspective is clear overall a rigorous study of the quality of samples and experience seems like a very promising direction if the goal is to develop more intelligent agents that can make better use of the information available to them the authors try to demonstrate why the approach leads to tighter bounds than standard per with some visualization although i point out some issues below cons lack of motivation regarding the choice of evb more thorough experimentation lack of evidence suggesting a tight upper bound eg modest improvement in soft rl maze example does not suggest a tight 
bound in qlearning details lack of motivation regarding the choice of evb while the evb seems like an intuitive starting point for investigation its motivation and apriori connection to prioritized replay is not fully clear one of the problems with evb is that it seems to be a largely myopic measure of quality that is linked to only the current sample eg sars while ignoring the effect of the sample on further backups in the rollout recent work on the topic 1 pser suggest that a myopic approach can be significantly improved by upweighting earlier transitions that lead to good transitions in the future this would seem to suggest that a nonmyopic metric could lead to more significant improvements than working with a myopic setting which could be fundamentally flawed while i do agree that evb is a good starting point for analysis i am not convinced the results are fully conclusive particularly given the need for stronger motivation on evb and more conclusive experiments see below i would encourage the authors to think about how to better motivate the use of evb in the papers introduction more clearly if existing literature suggests so i am somewhat aware of 2 although the authors there do highlight the shortcomings of evb could sequential replay be the result of using an underlying metric that is less myopic than evb which seems to shows better promise over per if not why is evb truly the only right approach for analyzing per more thorough experimentation while i concur that the current paper addresses an important theoretical question and while the result appears trivial to implement not more difficult than per the bound should also be justified experimentally if it is to have significant value in practice i think one key detail that is missing and could greatly benefit the paper is a more careful analysis and visualization of what the modified tighter bound is actually doing in the softq setting for example how does the ranking of experience change when using the tighter bound how often does this bound lead to a revision or reevaluation of experience vs per is there a fundamental reason why per could not be as effective for soft rl in general that cannot be explained by better myopic estimates of the value of each transition lack of evidence suggesting a tight upper bound q while the authors argue at several points in the paper that the td could be a tight upper bound this bound is only attained in some special cases however this does not really mean that the upper bound is tight for qlearning since there could be other upper bounds based on td that incorporate additional value or policy information that could be tighter same also applies for soft rl the figure 1 also seems to suggest that the values are scattered quite randomly and do not attain the upper bound as claimed for qlearning this does not suggest that td is tight in any way can a gap be proved that effectively provides a lower bound on evb lack of empirical evidence supporting the tighter bounds soft rl the experiments on atari also suggest that the improvements of ver are quite mild only improving on per with statistical confidence on two of the experiments interestingly these two experiments also show mild improvement by pser whereas pser shows considerable improvements on other games evaluated here this seems to be at odds with the claims that ver is a significant improvement of per in soft rl in the games where ver does not improve upon per significantly i would encourage the authors to comment on why the results are so 
similar to per i think the current benchmark problems are fairly sufficient and complex and would not require more evaluation unless the authors believe this would lead to a different conclusion or if the games selected are not representative of where ver can be beneficial other points while i agree with the authors that when the learning rate is held constant td is an upper bound to evb in the qlearning case however in practice we often use statedependent learning rates that can be annealed over time with visitation counts which can often yield improvements as long as the usual stochastic convergence conditions are satisfied of course in this case wouldnt the learning rate play a role in the bound i think the work is very interesting and addresses a central issue and i hope that the above comments can be useful to improve the paper overall i think it is necessary to think more carefully about the connection between per and quantifying the value of an experience eg why evb how to reconcile moderate empirical evidence of the new bounds i am looking forward to the authors response on these issues above references 1 brittain marc et al prioritized sequence experience replay arxiv preprint arxiv190512726 2019 2 mattar marcelo g and nathaniel d daw prioritized memory access explains planning and hippocampal replay nature neuroscience 2111 2018 16091617
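for concreteness a minimal sketch of the proportional td-error prioritization that the papers bounds relate to might look as follows this is an illustrative buffer only not the authors implementation and the hyperparameter values alpha beta and eps are placeholders

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Proportional prioritization: p_i = (|td_error_i| + eps) ** alpha."""
    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.priorities = [], []

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:      # drop the oldest transition
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, batch_size, beta=0.4):
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idx = np.random.choice(len(self.data), size=batch_size, p=probs)
        # importance-sampling weights correct for the non-uniform sampling
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights = weights / weights.max()
        return [self.data[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.priorities[i] = (abs(e) + self.eps) ** self.alpha
```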
### Summary:
|
this paper is certainly on the way to being a solid contribution its an interesting research question and we need more understanding papers rather than yet another algorithmic trick paper the reviewers thought the paper was not yet ready the reviewers suggested 1 more motivation for why the proposed metrics are of interest 2 clearer discussion and evidence of how the analysis better explains the performance of per 3 missing empirical details like the methodology for setting hyperparameters why these 9 atari games undefined errorbars and an unspecified number of runs and 4 conclusions not supported by evidence in atari with missing experiment details likely too few runs and overlapping errorbars in most games few scientific conclusions can be drawn the work might be strengthened by developing the first part of the paper focusing on the reviewers suggestions and deemphasizing the novel algorithmic contribution
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper discusses the effect of increasing the widths in deep neural networks on the convergence of optimization to this end the paper focuses on rnns and applications to nlp and speech recognition and designs several groups of experimentsmeasurements to show that wider rnns improve the convergence speed in three different aspects 1 the number of steps taken to converge to the minimum validation loss is smaller 2 the distance from initialization to final weights is shorter 3 the step sizes gradient norms are larger this in some sense complements the theoretical result in arora et al 2018 for linear neural networks lnn which states that deeper lnns accelerates convergence of optimization but the hidden layers widths are irrelevant this also shows some essential difference between lnns and practical nonlinear neural networks comments about writing the findings are in general interesting and inspiring but the explanations need some further improvement in particular the writing lacks some consistency and clarity in the wordings for example it is unclear to me what weight space traversal means training size is mixed with dataset size and we will show that convergence to final weights seems to be a trivial comment unless there is some special meaning of convergence rate etc it also lacks some clarity and organization in the results some more summarizing comments and sections and in particular a separate and clearer conclusion section as well as less repetitions of the qualitative comments should largely improve the readability of the paper comments about results the observations included in the work may kick off some interesting followup work but it is still a bit preliminary in the following sense 1 it lacks some discussions with its connection to some relevant literature about wider networks eg wide residual networks wider or deeper revisiting the resnet model for visual recognition etc 2 it lacks some discussions about the practical implication of the improvement in optimization convergence with respect to the widening of the hidden layers in particular what is the tradeoff between the validation loss increase and the optimization convergence speedup resulted from widening hidden layers a heuristic discussionapproach should largely improve the impact of this work 3 the simplified theory about lnns in the appendix seems a bit too far from the explanation of the difference between the observations in this paper and arora et al 2018 typos and small suggestions 1 it is suggested that the full name of lnn is provided at the beginning and the font size should be larger in figure 1 2 there are some misspellings that the authors should check eg gradeint gradient 3 in formula 4 the authors should mention that the third line holds for all t is a sufficient condition for the previous two equivalent linesdocsepunderstanding the effects of overparametrization in neural network training has been a major challenge albeit a lot of progress has been made in the past few years the present paper is another attempt in this direction with a slightly different point of view the work characterizes the impact of overparametrization in the number of iterations it takes an algorithm to converge along the way it also presents further empirical observations such as the distance between the initial point and the final point and the angle between the gradients and the line that connects the initial and final points even though the observations presented are very interesting unfortunately the paper doesnt have the level 
of rigor required that would make it a solid reference the work presents its results somewhat clearly in the sense that one can simply reconstruct to probe in order to replicate the observations this clarity is mainly due to the simplicity of the questions posed there is nothing inherently wrong with simple questions in fact the kind of questions posed in the present paper are quite valuable however it lacks detailed study and rigor of a strong empirical work furthermore the style of the exposition anecdotal and several obvious typos make the work look quite unfinished here are some flaws and suggestions that would improve the work substantially a deeper literature review would help guide the reader put the paper in a better context especially the related work section is quite poor how exactly do those papers appear related to the present work do they support similar ideas or do they propose different perspectives the exposition should be made more to the point and concise for instance 3rd paragraph of section 43 where it starts with figure 5a whats meant by overfitting regime is it worse gen error is it merely fitting tr data how do we know from figure 2 whats a strong evidence some concepts such as the capacity do not have precise and commonly agreed upon definitions the paper uses those quite a bit and sometimes only later on the reader understands what it actually refers to the misalignment section is also quite unclear the observations can be formalized and the curve fitting should be explained in further detail the appendix touches upon simple cases but there is a strong literature behind those simple cases that could be quite useful for the purposes of the paper the authors have a lot of data available at no point the power law decay and exponent fitting are discussed for a paper whose main point is this precise scaling this looks like a major omission unless there is a specific reason for it other than the hardness of fitting exponents to power laws merely showing the observables in a loglog plot weakens the support of the main claims the theoretical argument provided is just an elementary observation whose assumptions and conditions are not discussed it is not a straightforward task for instance a suggestion for a theoretical result on the distance between the initial and final weights is presented here lemma 1 a3 httpsarxivorgabs180607572 distance shrink as the number of parameters increase consistent with the observations of the present paper note that this is in addition to the several early2018 mean field approximations to nns whose solutions are found in the limit where the number of parameters tend to infinity all the figures from 5 to 8 are presented very quantitatively such as looking at different layers and observing the percentage reductions the message one can gain from such presentations are extremely limited and not systematic i encourage the authors to formulate solid observables that can and should be tested in further detail even though the paper is touching upon very interesting questions at its current stage it is not a good fit to be presented in a conference as it only presents anecdotal evidence there is a lot of room to improve but the good news is that most of the improvement should be straightforward docsepthis paper presents an empirical analysis of the convergence of deep nn training in particular in language models and speech studying the effect of various hyperparameters on the convergence is certainly of great interest however the issue with this paper 
is that its analyses are mostly descriptive rather than conclusive or even suggestive for example in figure 2 it is shown that the convergence slope of adam is steeper than that of sgd when the xaxis is the model size very naturally i would be interested in a hypothesis like adam converges quicker than sgd as we increase the model size but there is no discussion like that throughout the paper there are many experimental results but results are presented one after another without many conclusions or suggestions made for practice i dont have a good takeaway after reading it the writing of this paper also needs to be improved significantly in particular lots of statements are made casually without justification for example if hidden dimension is wide enough to absorb all the information within the input data increasing width obviously would not affect convergence not so obvious to me any reference figure 4 shows a sketch of a models convergence curve its not a fact but only a hypothesis for example what if for super large models the convergence gets slow and the curve gets back up again in general i think the paper is asking an interesting important question but more developments are needed from these initial experimental results
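to make the kind of measurement discussed above concrete one illustrative way to track the distance from initialization to the final weights as width grows is sketched below on a toy feedforward regression task the architecture task and hyperparameters are stand-ins and not the rnn nlp or speech setups studied in the paper

```python
import torch
import torch.nn as nn

def distance_from_init(widths, steps=200):
    """Train small models of several widths and report ||w_final - w_init||_2."""
    results = {}
    x = torch.randn(256, 32)
    y = torch.randn(256, 1)
    for width in widths:
        model = nn.Sequential(nn.Linear(32, width), nn.ReLU(), nn.Linear(width, 1))
        w0 = torch.cat([p.detach().flatten().clone() for p in model.parameters()])
        opt = torch.optim.SGD(model.parameters(), lr=1e-2)
        for _ in range(steps):
            opt.zero_grad()
            nn.functional.mse_loss(model(x), y).backward()
            opt.step()
        w1 = torch.cat([p.detach().flatten() for p in model.parameters()])
        results[width] = (w1 - w0).norm().item()   # distance traversed in weight space
    return results

print(distance_from_init([64, 256, 1024]))
```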
### Summary:
|
this paper studies the behavior of training of overparametrized models all the reviewers agree that the questions studied in this paper are important however the experiments in the paper are fairly preliminary and the paper does not offer any answers to the questions it studies furthermore the writing is very loose and the paper is not ready for publication i advise the authors to take the reviews seriously into account before submitting the paper again
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper focuses on federated learning on noniid features this is a crucial problem when applying federated learning the authors propose a new federated learning scheme called adcol adversarial collaborative learning for noniid features specifically the server is designed to train a discriminator to distinguish the local representations from local parties while the local parties train the local models and expect the representations not to be distinguished by the discriminator authors conduct experiments on multiple parties where the data have heterogeneous features but share the same labels and label distribution and the results clearly show the effectiveness strengths 1 the motivation of adcol adversarial collaborative learning is both clear and reasonable indeed the features in realworld scenarios are usually noniid and how to apply federated learning on the noniid data has always been an important question in practice therefore it is quite necessary and challenging to design a thorough federated framework 2 the experimental evaluation seems to be convincing authors choose several typical fundamental tasks digits officecaltech10 domainnet for experimental evaluation the results demonstrated the effectiveness of the proposed adcol method weaknesses 1 in adcol parties deliver representations of data instead of local models it may lead to more serious privacy breaches it needs more work to protect the local data 2 the advantage of communication size is not persuasive in fact smaller models than used in solo are often used 3 the statements in this paper are not complete especially in the experiment sections there is a lot of confusion here such as the statics of all the datasets the method for averaging results from different data sources arithmetic average or weighted average according to data volume and methods for adjusting consistent label distribution but noniid feature distribution and the final distribution of label and feature as shown in the main review the motivation of the paper is clear and reasonable and also the experimental evaluations are convincing while there are also some issues that need further refinement to be accepted especially the statements in the experiment docsepthe paper proposes an adversarial training method for learning from noniid data in federated learning both a theoretical and an experimental analysis of the method are provided pros the paper addresses an important problem namely the one of federated learning that accounts for the fact the realworld data for collaborative learning is often noniid in the features the paper is wellwritten and motivated comparison to prior work is very welldone the proposed adversarial training method is intuitive and seems to perform well in practice the issues of communication complexity and privacy are discussed even if those can in some situations be limitations of the proposed method cons it is indeed possible that for certain modes for example the local sample sizes being large as compared to the number of users n the proposed method can be communicationinefficient similarly privacy can be seen as an issue it is true that classic fl is also prone to privacy problems however the proposed method sends multiple features of the same input data to the server at every round intuitively it should be easier to infer information about the original data with so much information rather than just with sharing one locally updated model at every round suggestions the paper will benefit from further explanation on why the proposed 
method finding representation that obfuscates the differences of the distributions of the local datasets is expected to work the idea is intuitive and is in lines with classic domain adaptation techniques however expanding the discussion on that will be beneficial similarly equation 1 and the kl distance come a bit out of nowhere in the text it will be nice to justify using the kl as opposed to other distance measures in section 31 it will be useful to repeat again the assumption that only the input distribution changes since this is where you formally introduce the studied problem the conclusion and in some sense the comparison to standard fl throughout the paper seems to suggest that the method is more computationally efficient than other fl methods the validity of this claim clearly depends on the values of multiple parameters of the problem such as n n d etc in this sense the technical discussion is section 35 is much more helpful to the reader and i would recommend against overselling the merits of the proposed method related to the last point perhaps the authors could comment on the possibility of communicating the representations of only a subset of the local data of each client at every round overall this is a very wellwritten paper that studies an important topic and proposes an interesting new idea while there are some limitations they are quite welldiscussed although improvements such as those mentioned in the suggestions section are desirable and i think that the paper will spark some interesting discussions and new ideas in the context of fl docsepthis work presented an adversarial collaborative learning named adcol for federated learning on noniid features it learns the invariant feature representations for all parties strengths 1 it proposed a novel adversarial collaborative learning for federated learning on noniid features 2 it provided the convergence state of the training process 3 experiments showed it outperformed baselines weaknesses 1 the overall idea is marginally novel as it simply adapts the training procedures of domainadversarial neural network ref 1 into the federated learning this idea has also been explored in previous work ref 2 in this case ref 2 also focused on the noniid features in fl it is more convincing to use ref 2 as one of the baselines in the experiments ref 1 ganin yaroslav evgeniya ustinova hana ajakan pascal germain hugo larochelle franois laviolette mario marchand and victor lempitsky domainadversarial training of neural networks the journal of machine learning research 17 no 1 2016 20962030 ref 2 peng xingchao zijun huang yizhe zhu and kate saenko federated adversarial domain adaptation in international conference on learning representations 2020 2 one major concern of this method is to the privacy issue it suffers much more seriously from the privacy leakage by sharing the hidden representations compared to sharing only parameters in fedavg this might break the privacy constraint of federated learning the communication cost also significantly increases due to the shared representations 3 the theoretical results provided the convergence state of the training process but it is not clear whether the proposed fl method can achieve the convergence result 4 the traintest split for each client is not given the novelty of the proposed model is incremental and the privacy issue of sharing features is not solved docsepthis paper studies federated learning when different devices have noniid features to address the heterogeneity problem it proposes a 
federated learning scheme called adcol based on adversarial learning in adcol the devices transfer local representations to the server while sending the discriminator to the devices the server aims to distinguish the devices local representations while the devices aim to train local models that generate nondistinguishable representations to make the representations nondistinguishable adcol adds an additional regularization term to the devices loss functions this practice aims to maximize the probability that the discriminator cannot distinguish the local representations the experimental results show that the proposed method has some advantages over several baselines strengths 1 this paper studies an important problem the noniid features problem in federated learning the authors have some good ideas to tackle the problem 2 the paper presents both theoretical and numerical results the experiments show that the proposed method has some advantages over some baselines 3 the paper is wellwritten in general weaknesses 1 using adversarial learning to add the noniid features problem in federated learning is not new the proposed method is similar to the fedufo algorithm in federated learning for noniid data via unified feature learning and optimization objective alignment zhang et al iccv21 not cited the authors might want to compare the proposed method with fedufo 2 the proposed method requires the server to distinguish the devices local representation in practice only a small number of devices are selected for each round of federated learning in addition the number of the selected devices might also be different it is questionable whether the devices can learn a common representation efficiently under these scenarios even if they can these dynamics might significantly degrade the performance of the adcol algorithm the authors might want to explain why the proposed method can be applied to these scenarios the appendix has shown some results related to device sampling but it only has one large sampling rate 04 in practice the sampling rate is much lower eg below 02 3 while the experiments appear to be comprehensive there is a lack of study on how different degrees of noniid influence the proposed algorithms performance besides the authors might want to increase the number of devices and adjust the device sampling rate while this paper has some nice ideas to address the noniid feature problems in federated learning its contributions are limited due to novelty and practicability issues
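as one possible reading of the scheme under review a simplified round of representation-level adversarial collaboration could look like the sketch below the loss form kl against a uniform guess the architectures and all hyperparameters are assumptions for illustration and may differ from the papers exact formulation

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# toy setup: each party has a local encoder + task head; the server holds a discriminator
n_clients, feat_dim, rep_dim, n_classes = 3, 20, 16, 4
encoders = [nn.Linear(feat_dim, rep_dim) for _ in range(n_clients)]
heads = [nn.Linear(rep_dim, n_classes) for _ in range(n_clients)]
discriminator = nn.Linear(rep_dim, n_clients)   # guesses which party a representation came from
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
c_opts = [torch.optim.Adam(list(encoders[k].parameters()) + list(heads[k].parameters()), lr=1e-3)
          for k in range(n_clients)]

def training_round(local_data, lam=0.5):
    # 1) server side: train the discriminator on representations collected from all parties
    reps = torch.cat([encoders[k](x).detach() for k, (x, _) in enumerate(local_data)])
    owners = torch.cat([torch.full((len(x),), k, dtype=torch.long) for k, (x, _) in enumerate(local_data)])
    d_opt.zero_grad()
    F.cross_entropy(discriminator(reps), owners).backward()
    d_opt.step()
    # 2) client side: task loss plus a term pushing the discriminator toward a uniform guess
    uniform = torch.full((1, n_clients), 1.0 / n_clients)
    for k, (x, y) in enumerate(local_data):
        z = encoders[k](x)
        task = F.cross_entropy(heads[k](z), y)
        confuse = F.kl_div(F.log_softmax(discriminator(z), dim=1),
                           uniform.expand(len(x), -1), reduction="batchmean")
        c_opts[k].zero_grad()
        (task + lam * confuse).backward()   # discriminator gradients are simply discarded here
        c_opts[k].step()

data = [(torch.randn(32, feat_dim), torch.randint(0, n_classes, (32,))) for _ in range(n_clients)]
training_round(data)
```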
### Summary:
|
this manuscript proposes an adversarial method to address noniid heterogeneity in federated learning distinct from existing methods the mitigation is implemented by training and locally communicating learned representations so the metric of success is the indistinguishability of representations across devices there are four reviewers all of whom agree that the method addresses an interesting and timely issue however reviewers are mixed on the paper score and many raised concerns about communication overhead apparent privacy costs and convergence of the adversarial method there is also some limited concern about novelty compared to existing methods the authors provide a good rebuttal addressing these issues based on experimental evidence adding differential privacy comparing communication overhead tradeoffs as they vary with model and sample size and some existing convergence analysis however after reviews and discussion the reviewers are unconvinced that the method is sufficiently robust to the concerns outlined the authors are encouraged to address the highlighted technical concerns in any future submission of this work
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposed a gradientbased moo algorithm with adaptive reference vectors the core idea is to view it as a bilevel optimization problem then the authors proposed an interactive solution for this bilevel optimization strength a new way to reformulate the task to bilevel optimization better performance than stateoftheart methods weakness it is interesting to view this task as a bilevel optimization problem but the solver seems not novel the bilevel solving is similar with existing wellknown expectationmaximization algorithm or the way used in neural architecture search nas so i have concerns about the technical novelty of this paper there are two assumptions proposed in this paper in which case we should the uniform one gmooaru rather than hvoptimal one gmooarhv another concern is the scalability of the proposed method it can be seem from table 1 and table 7 that the performance gain is getting worse and worse when extending to large models the authors did not talk about limitations of their work docseppaper tackles multi objective optimization moo and proposes a gradient based moo using reference vectorsrvs previous rv based moo algorithms use a fixed set of rvs and they might not be able to cover some parts of the pareto front pf authors propose to jointly learn rvs along with the solving the moo problem by formulating it as a bilevel optimization problem authors present an algorithm for solving the bilevel optimization problem and theoretical convergence analysis of the algorithm is also provided the paper is novel and interesting it seems to be more reasonable to learn the rvs rather than using a fixed set of rvs 1 some more details about how the film conditioning is done using rvs should be added for completeness 2 datasets in section 41 and even the multimnist multifashion and multifashionmnist are synthetically created it would have been interesting to see results on more real world tasks such as the multiobjective tasks considered in yoto 14 image compression authors should add some discussion about whether their algorithm can scale to large scale datasets for eg large scale mtl tasks like image classification object detection semantic segmentation docsepthe paper proposes a novel multiobjective optimization algorithm based on reference vectors rv in contrast to prior works the paper proposes learning these rvs using gradientbased optimization instead of using prespecified and fixed rvs this is formulated as a bilevel optimization problem where the lower level obtains the optimal solution given the rvs and the upper level optimizes a quality measure wrt the learned rvs uniformity and the hypervolume indicator are considered as quality measures an efficient solver for the bilevel optimization problem is proposed requiring just one gradient descent step for lower and upperlevel problems a convergence analysis is also provided the algorithm is evaluated on both synthetic and realworld problems and achieves stateoftheart performance strengths the approach to extend existing rvbased approaches with a mechanism to adaptively learn said rvs is sensible and appears to improve upon the baselines in all cases the formulation as a bilevel optimization problem is elegant and the provided solver is efficient quite simple and appears to work well in practice the overall presentation is excellent the technical presentation of the method and background material is sound and clear and the method is also wellmotivated the experiments cover a good number of benchmarks from more 
artificialsynthetic ones to more realworld applications weaknesses the experiments are most convincing on the somewhat artificial benchmarks around mnist and fashionmnist the experiments might be more compelling with more realworld applications eg multilabel image classification with more than two objectives adding a comparison in terms of computational efficiency eg in terms of runtime to table 1 for example could be helpful for the readers a discussion of the limitations of the method appears to be missing a section with limitations is unfortunately missing and would definitely benefit the paper docsepthis paper aims to obtain a set of paretooptimal solutions that can better uniformly cover the pareto front by learning a set of appropriate reference vectors rvs the proposed method is formulated as a bilevel optimization problem where we can obtain the paretooptimal solution in the inner loop and optimize the rvs by maximizing the quality function in the outer loop the convergence analysis of the proposed method is provided and empirical studies show the effectiveness of the proposed method strengths 1 this paper aims to obtain a set of paretooptimal solutions that can better uniformly cover the pareto front by learning a set of appropriate reference vectors rvs which is meaningful for multiobjective optimization 2 this paper is wellwritten and easy to follow weaknesses 1 the novelty of this paper is limited and it is just an ensemble of the existing works the rvconditioned network is adapted from 14 as mentioned in section 31 the solver section 33 of the bilevel problem in eqs 89 is more trivial which is similar to maml 2 the description of pareto domination in lines 3839 is problematic 3 how to compute the gradient wrt r in eq 7 4 the eq 9 is equivalent to minimize sumj1m sumi1n rijfjphi and can only obtain a paretooptimal solution based on a new reference vector sumi1n ri1 cdots sumi1n rim rather than a set of paretooptimal solutions as mentioned in line 120 thus how the proposed method can obtain multiple paretooptimal solutions and how to guarantee those solutions can uniformly cover the pf 5 in section 33 and algorithm 1 it seems the hypergradient ie the gradient of phik1 wrt r is ignored which may cause a large approximation error refer to above docsepthis paper propose a novel gradientbased multiobjective optimization moo algorithm with adaptive reference vectors rvs they formulate reference vector adaption as a bilevel optimization problem and solve it with an efficient solver experiments on an extensive set of learning scenarios demonstrate the superiority of the proposed algorithm over the stateoftheart strengths the motivation is clear and straightforward this paper is wellwritten the authors provide theoretical results on the convergence of the proposed algorithm experiments on several learning scenarios including synthetic problems multitask learning accuracyfairness tradeoff and usage on larger networks demonstrate the effectiveness of the method weaknesses the novelty of this paper is limited they simply formulate how to generate a proper rv and the models parameters as a bilevel optimization problem and solve it with an solver which is limited novelty since the idea of rv adaption is commonly used in some evolutionary algorithms 4026 and the bilevel optimization problem also is a common problem the authors simply combine these two together to generate their method the solver proposed in this work gmooar to solve the bilevel optimization problem seems exists some problems 
i am not sure can solve the bilevel problem in this simple way the algorithm needs to be explained more detail how the authors can guarantee the generated rvs are proper rvs there needs more experiments to demonstrate this point there miss some sota methods to compare such as multiobjective meta learning ye et al neurips 2021 for limitations see weaknesses and questions no social impact discussed given the effort neurips has put into this effort i believe something should be added to discuss this
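to illustrate the bilevel structure being discussed a toy sketch with a one-step unrolled inner update and a crude diversity measure standing in for the papers hypervolume or uniformity quality functions might look as follows everything here the objectives learning rates and the quality proxy is an assumption for illustration only

```python
import torch

def objectives(theta):                                   # theta: (n, 2) -> (n, 2) objective values
    f1 = ((theta - torch.tensor([1.0, 0.0])) ** 2).sum(dim=1)
    f2 = ((theta - torch.tensor([0.0, 1.0])) ** 2).sum(dim=1)
    return torch.stack([f1, f2], dim=1)

n = 5
theta = torch.randn(n, 2, requires_grad=True)            # one solution per reference vector
rv_logits = torch.zeros(n, 2, requires_grad=True)        # reference vectors live on the simplex via softmax
outer_opt = torch.optim.SGD([rv_logits], lr=0.05)

for step in range(300):
    rvs = torch.softmax(rv_logits, dim=1)
    # lower level: one gradient step on theta for the rv-scalarized losses, kept differentiable in the rvs
    inner_loss = (objectives(theta) * rvs).sum()
    grad_theta, = torch.autograd.grad(inner_loss, theta, create_graph=True)
    theta_new = theta - 0.1 * grad_theta
    # upper level: adapt the rvs so the updated solutions spread out in objective space
    f = objectives(theta_new)
    outer_loss = -torch.pdist(f).mean()                   # crude stand-in for a quality function
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
    with torch.no_grad():                                 # commit the inner update
        theta.copy_(theta_new)
    theta.grad = None
```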
### Summary:
|
the paper tackles the multiobjective optimization moo problem using adaptive reference vectors rvs the authors claim two main contributions 1 the use of adaptive reference vectors rvs instead of fixed ones as in existing gradientbased moo algorithms 2 rv adaptation based on the gradient of the quality function it is reasonable to learn the rvs rather than using a fixed set of rvs the authors also provide theoretical results about the proposed method however the concerns raised by reviewers should be carefully considered and addressed in addition the authors consider only 2 or 3 objectives it would be interesting to see results with more objectives see 10 on classical numerical multiobjective optimization problems lastly figure 1 is confusing and difficult to understand
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
summary this paper proposed a novel benchmark for relation extraction from very long documents and a new method knowfi to deal with the problem the dataset is constructed from wikia containing long fictional texts the method knowfi generally contains two modules passage ranking and multicontext neural extraction they first rank the source passages via a similarity function and then feed the topk passages to a bertbased multicontext neural extraction module to get its relation experiments showed that knowfi achieved strong performance on this new benchmark and two conventional shorttext benchmarks strengths 1 the benchmark proposed in this work is novel and worth exploring 2 the proposed method knowfi is simple yet effective 3 strong results on this benchmark and two conventional shorttext benchmarks weaknesses 1 the statistics of the constructed dataset lofido are missing how many samples do you use for training and evaluation how many passages are given for each sample 2 lacking ablations of the multicontext neural extraction module how does the similarity threshold which is set to 05 affect the performance questions 1 have you tried any other baselines for passage ranking for example matching spo by word embeddings or contextualized word embeddings 2 what do you mean by for context selection we rely on tfidfbased bagofwords similarity choosing the top100 tokens per relation as its context 3 have you tried the comparison of the proposed method with glre the passage ranking docsepstrengths 1 a new task and benchmark this paper focuses on a new task to extract relations from long fictional texts and release a new benchmark with kb annotations from wikia or manual annotations 2 model design is reasonable to solve the sparse issue of distant supervision the authors rank passages based on tfidf to tackle the long text input issue multiple passages are encoded and are then concatenated 3 the experiments show the superiority of the proposed model weakness and detailed comments 1 missing model and experiment details 11 lack of details of prototype representation how long is the short list of synonyms and close hyponyms or hypernyms where did you get the hyponyms or hypernyms 12 according to the aggregation layer is configured to concatenate the representations of all passages the number of passages should be fixed otherwise it cannot match the dimension of the following classification layer however in the experiment it is said that we select all passages with cosine above 05 as positive training samples which seems that the number of passages is not fixed but is decided by the similarity threshold 13 what do your mean by context selection in the parameter setting there is no such terminology in the model sections do you mean the passage ranking 14 also i cannot follow chosing the top100 tokens per relation as its context 2 missing dataset details 21 there are no data statistics in both the main paper and the appendix which is important for a newly released dataset 22 will the passages that you selected be released 23 about 20 of the triples from each of these universes are withheld for testing how about the validation set it also does not mention whether parameters are selected on the validation set 3 questions about the experiments 31 why do you choose tfidf to rank passages did you compare with embeddingbased ranking method it seems that the ranking step is the most crucial step in your model but tfidf model cannot handle synonyms 32 in section 8 we pursued a user study to compare entity 
summaries one by knowfi and one by a baseline which conflicts with we compare knowfi to all baselines does the first sentence is written by mistake 4 additional questions to deal with long text using transformerbased encoders the models designed for long text such as longformer should be a natural thought have you tried that typos e1e2 for which e1re2 does not hold e1e2 for which e1re2 does not holddocsepthe paper focuses on the relation extraction task from long fictional texts ie booklevel re which is an important step towards kb construction from books the authors present a neural architecture to address the challenges such as sparsity distantsupervision size limitation etc although the proposed method is not particularly novel the paper presents a nice first work in this direction the authors also provide a new benchmark lofido for evaluating different methods which is also a great contribution i dont have any major concerns about this paper
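to make the passage ranking step concrete a minimal tfidf-based ranker of the kind described above could be sketched as follows the example passages the relation context string and the similarity threshold are made up for illustration the papers actual threshold of 05 applies to its own similarity function

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_passages(passages, relation_context, top_k=3, threshold=0.05):
    """Rank candidate passages by TF-IDF cosine similarity to a relation's context string."""
    vect = TfidfVectorizer(stop_words="english")
    matrix = vect.fit_transform(passages + [relation_context])
    sims = cosine_similarity(matrix[:-1], matrix[-1]).ravel()
    order = sims.argsort()[::-1][:top_k]
    return [(passages[i], float(sims[i])) for i in order if sims[i] >= threshold]

passages = [
    "frodo baggins inherited bag end from his cousin bilbo",
    "the shire lies in the northwest of middle earth",
    "bilbo adopted frodo as his heir after the farewell party",
]
print(rank_passages(passages, "heir adopted inherited family relative", top_k=2))
```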
### Summary:
|
this paper proposes a method for knowledge extraction from long documents and evaluates it on a new dataset constructed for this purpose all reviewers found the method novel and the details added about the dataset and the model were appreciated thus i recommend acceptance
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
1 this paper analysis the weaknesses of current hmc benchmarks the statistical and structural properties not adequately represent the problem space and can confound performance analysis among competing methods 2 this paper propose a toolkit ontologue to declaratively construct and analyze a subset from dbpedia as a benchmark for hmc problems with indirect control over structural and statistical features 3 this paper perform comparative analysis of the new benchmarks against the old benchmarks and evaluate the performance of state of the art models using four new benchmarks derived by ontologue 1 the proposed ontologue can provide customizable benchmarks 2 this paper try to evaluate the proposed method from different aspects 1 in the introduction part the paper mentioned we analyze 20 hmc benchmark datasets but in the results part both in table 1 and figure 2 there are 6 benchmarks are included 2 this paper compared the four new benchmarks derived by ontologue with 6 existing benchmarks in section 5 comparative analysis of benchmarks but the domains of the four new benchmarks are different form the 6 existing benchmarks therefore it is not clear whether the difference of the performance is caused by the benchmarks themselves or caused by the classification problems domains 3 in table 3 only one method is used to evaluate the effectiveness of contextual information it is better to include more methods to evaluate if the contextual information can help to improve the performance for more methods docsepthe authors propose a tool for generating hierarchical multilabel classification benchmark data sets discuss the issues with currently available solutions generate their own example benchmark datasets and provide baselines for them using wellestablished methods as a person who also spent a significant amount of time on tackling problems with established multilabel data sets their label distribution how some of them are mostly singlelabel or multiclass problems and should not be evaluated in the multilabel field i found it very satisfying that the authors provide a view on shortcomings of data sets that are currently used shortcomings that may confuse the modeler to take artifacts for progress it is a strong contribution of the work although in the appendix comparing the figures 3 and 4 in the appendix is the main reason for accepting this paper additional strengths of this paper taking the application context and looking at triviality or impossibility of labels in figure 2 discussion of the hmc vs fewshot learning using a selection of widely used methods for baseline establishment a strong weakness of the contribution is that at the time of review and this review is a little late the code and data are still not available in the github repository it is therefore impossible to assess whether the authors actually deliver on what they write another weakness is that with dbpedia labels quality is uncertain a lot depends on which field one chooses and the choice of the hierarchy depth the problem is especially with recall as the taxonomy on wikipedia evolves it takes time for articles to be reclassified by editors not to mention new articles to be added to specific labels the visual analysis method introduced in the paper helps to identify deterioration as it starts to happen but it still does not remove the problem that there will be cases of topic depth combinations where the dbpedia data is not going to allow generating a good hmc data set i wouldve welcomed a limitation section discussing the 
limits of dbpedia in this regard docsepthe main proposal in this paper is a way for filtering the dbpedia dataset so that it can be used for a hierarchical classification task called ontologue ontologue allows control over the depth of the tree on the label space the number of examples in the specific label and the topic of the dataset based on the choice of the starting node the paper shows the usefulness of ontologue by generating 4 different datasets and running baseline and state of the art hmc models on the datasets the paper also presents the argument that existing hmc datasets present 2 artificial problems with some examples having a multilabel classification and another sizable set of examples having a few shot learning problem they also note that existing datasets dont text data associated with labels which may be leveraged by modern few shot learning methods the datasets which one can generate with ontologue remedy this the ontologue framework seems useful for furthering methods for hmc it is also nice that it comes with a framework to visualize the generated datasets the experiments presented demonstrate that the datasets generated with ontologue offer a meaningful challenge to models fig 2 the motivation of the paper seems to be based too much on a specific model for hmc developed by the authors in prior work surj the paper uses surj to demonstrate that specific desirable properties are fulfilled by the dataset eg the benefit of text associated with labels lines 280287 but also argues that these properties are important for leveraging modern few shot learning methods lines 5055 this seems like a circular argument which makes it seem like the datasets built here were built to demonstrate the capabilities of a model with focus on a realistic task somewhat lost i would encourage broader discussion and framing of the challenges presented by the dataset beyond those specific to one model the paper also repeatedly lines 7981 line 157 argues that the proposed framework generates datasets which are more realistic than existing benchmarks given that the existing datasets are realistic datasets and the proposed allows one to generate synthetic datasets this claim is strange what makes a dataset realistic for example what makes nodes with multiple parents more realistic i would encourage greater discussion of what makes the generated datasets more realistic quantitative comparisons to other hierarchical datasets eg perhaps ones not used for hmc and perhaps more grounding in the literature or discuss the implications of the proposed framework which generates challenging synthetic yet challenging datasets synthetic challenging wouldnt be a slight on this work the exact nature of the task proposed in the paper is unclear what exactly is the input abstract and what do the labels associated with it say about the input docsep1 the authors develop a toolkit ontologue which allows to extract hmc datasets from dbedia which meet certain predefined conditions 2 the authors extract four datasets and evaluate performance of sota methods using them 3 the authors have conducted an extensive analysis of existing hmc benchmarks and revealed their limitations and challenges 4 the paper presents several novel and useful analyses of the hmc datasets with respect to descriptive statistics domain and structural diversity label distirbution and global hierarchy violations 1 hmc is an important problem this paper increases our understanding of hmc methods by inroducing data for method benchmarking 2 the 
ontologue tool allows to extract such datasets from dbpedia which meet experimental needs at the same time the ontologue tool prohbits cherry picking behaviour 3 the four benchmarks create a reasonable problem space for hmc methods 1 minor along with four datasets in english the authors could extract one or two datasets in other languages 2 minor at the moment the github repo of the paper is empty the authors are requested to update the repo 3 minor line 283 for reproducability reasons the authors are requested to state explicitly which bert model was used docsep1 this paper analyzes the weaknesses of the existing 20 hmc benchmark datasets 2 this paper proposes a toolkit called ontologue to construct customized benchmarks 3 the authors derive four new benchmarks and evaluate the performance of stateoftheart models 4 the authors perform a comparative analysis of the new benchmarks against the old benchmarks 1 this toolkit takes the application context distribution of labels and data and hierarchy structure into consideration 2 it can be used to produce a customized benchmark based on any topic in wikipedia which contributes to the accurate evaluation of the effectiveness of the hmc methods 1 the figures in this paper are not clear enough please modify them 2 this proposed system called ontologue is not open so far and the proposed datasets in the github repository are empty
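as a small illustration of the kind of dataset analysis discussed above some of the descriptive statistics and hierarchy checks can be computed directly from label sets as in the sketch below the example hierarchy the labels and the few-shot cutoff of 5 are arbitrary choices not values from the paper

```python
from collections import Counter

def hmc_dataset_stats(examples, parent):
    """examples: list of label sets; parent: dict mapping a child label to its parent label."""
    label_counts = Counter(l for labels in examples for l in labels)
    multi = sum(len(labels) > 1 for labels in examples) / len(examples)
    few_shot = sum(1 for c in label_counts.values() if c < 5)
    # a "hierarchy violation": an example carries a child label without its parent label
    violations = sum(
        any(l in parent and parent[l] not in labels for l in labels) for labels in examples
    )
    return {
        "num_examples": len(examples),
        "num_labels": len(label_counts),
        "multi_label_fraction": round(multi, 3),
        "labels_with_fewer_than_5_examples": few_shot,
        "examples_violating_hierarchy": violations,
    }

hierarchy = {"rock": "music", "jazz": "music", "music": "art"}
data = [{"music", "rock"}, {"jazz"}, {"art", "music", "jazz"}]
print(hmc_dataset_stats(data, hierarchy))
```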
### Summary:
|
this paper proposed a benchmark for hierarchical and ontological multilabel classification most reviewers find that the paper is interesting and reveals some important shortcomings of the existing multilabel classification datasets during the rebuttal period the authors response addressed most concerns raised by reviewers i recommend acceptance
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper proposes an effective approach to the problem of fewshot gan learning by leveraging the frequency information from the generated images specifically the author designs a model fregan which integrates wavelet transformation a highfrequency discriminator hfd frequency skip connections fsc and highfrequency alignment hfa together experiments are conducted on multiple datasets with different number of samples the ablation of the experiment well explains the functionality for each component strengths 1 the method is novel to improve highfrequency components for lowshot gan learning is interesting 2 the comparisons are extensive both qualitative and quantitative comparisons show strong results 3 the ablation study well explains the effectiveness for each component weaknesses 1 using some visual examplesfigures to illustrate the effectiveness of each component could be even better for example how does hfa work as hfd already provides a supervision to enforce g to synthesize highfrequency component similar to the real images why should we add additional constraint on the intermediate feature maps does hfa stabilize the training 2 the clarity of the paper can be improved for example it is not clear what mathcalg and mathcalt are in eqn 7 yes docsepthis paper aims to stabilize gan training in the lowdata regime by incorporating wavelet information into the generator and discriminator specifically the paper proposes three measures to improve image synthesis in the lowdata regime i frequency skip connections fsc ii an additional highfrequency discriminator hfd and iii highfrequency alignment hfa the proposed method improves over the baselines on various datasets and for various metrics strengths generally the idea to exploit frequency information for image synthesis in the low data regime is interesting as it might provide additional guidance to the model the proposed method is evaluated on a multitude of datasets and metrics further the proposed design choices are validated in an ablation study that demonstrates their effectiveness weaknesses the paper is missing any comparison to swagan1 while 1 is not aiming at the lowdata regime specifically it features many similarities to the proposed method as it considers image synthesis in the wavelet domain there needs to be a detailed analysis on how the proposed method differs from swagan as well as to include results for 1 as a baseline without this it is difficult to judge how novel the contributions of this work are the paper uses fastgan as baseline however projectedgan 2 improves over fastgan also for small datasets the comparison to projectedgan is missing from the related work and the experimental analysis and thereby lacks comparison to one important stateoftheart method it would also be interesting if the proposed method can be combined with projectedgans for even further improvements the language is often colloquial see misc 1 swagan a stylebased waveletdriven generative model gal et al siggraph 21 2 projected gans converge faster sauer et al neurips 21 misc l43 106 employ a highfrequency discriminator hfd and frequency skip connections fsc l45 141 151 etc synthesis synthesize l54101 etc unhealthy unbalanced l71 however their styleganbiggan success is mainly from sufficient training data i disagree the main success of styleganbiggan is due to architectural choices carefully tuned training strategy and regularization nonetheless it is correct that their performance degrades with limited data l90 prove demonstrate l112 the haar transform 
does not transform an image into the frequency domain but rather to the wavelet domain l138 add a pointer to ablation where this is shown l198 implement implementation the results in the appendix indicate that recall is very low for all methods and all datasets while this limitation seems not specific to the proposed method it should be discussed in more detail in the paper docsepthe paper proposes to improve the training of gans with limited data by means of more careful attention to spectral properties of real and fake images the proposed model fregan utilizes wavelets to map the generators and discriminators intermediate features to the frequency domain these features are used as input for an additional discriminators branch hfd to raise the awareness of the discriminator to the frequency domain of images as well as for an additional loss hfa which is used to provide supervision in the frequency domain for the generator fregan is evaluated on a range of limitedsize datasets 645000 images and exhibits better fid and kid scores compared to previous fewshot gan models strengths 1 originality the paper combines the task of training gans under limited data and the task of addressing the spectral discrepancy between real and gangenerated images the experiments demonstrate that raising the frequency awareness helps to achieve both improved image quality fid kid and better rendering of high frequencies visually the paper is the first to explore this synergy for this specific task 2 results the results in tables look strong the gain in fid and kid in comparison to the baseline fastgan is clearly visible and is consistent across different datasets image resolutions and the number of training images weaknesses 1 evaluation major concern in fewshot image synthesis image quality metrics fid kid can be often improved at the cost of the synthesis diversity 1 figure 4 for example the model that perfectly reproduces the training set can achieve neartozero fid and kid the diversity aspect is not evaluated in this work at all this makes the whole evaluation of the paper unconvincing does fregan improve fid of fastgan at the cost of diversity is fregan memorizing training images more than fastgan are the interpolations in the latent space of fregan smooth it is not possible to answer with the provided figures and tables in addition to fid or kid i suggest adding a metric that explicitly measures the diversity of generated images as done in some previous works one can choose pairwise lpips as in 2 tables gh or intracluster lpips as in 3 table 2 moreover it would be good to measure the average lpips from generated images to the nearest example from the training set which would help to assess the degree of memorization of the training set 4 table 8 lack of diversity is also partially supported by low recall across different datasets reported in the supplementary materials on the qualitative side it would be good to add examples of latent space interpolations as in 4 figure 5 and the nearest real neighbors to some of generated images 4 figure 1213 overall without this analysis given the size of the used datasets only improving fid and kid is not a convincing result to me 2 claim major concern the paper claims that the proposed method alleviates the unhealthy competition between g and d lines 11 47 54 in the low data regime the caption of figure 5 says that the fregan generator can better deceive the discriminator the justification for the claim is claimed in the loss curves plot in figure 5a in fact what is 
seen in figure 5a seems to say the opposite the discriminator loss of fregan is lower than the one of fastgan even though it has an additional term lhf this indicates that the discriminator can dissect real and fake images more confidently it would be interesting to see the curves of the discriminator outputs during training eg as in 5 figure 1bc where this effect should be more visible consequently based on the plot 5a the fregan generator struggles to fool the discriminator more than fastgan this is confirmed by the larger g loss for fregan it is thus not clear how the better gan equilibrium is quantified by the authors so one of the main claims is not supported 31 clarity major concern generally it is not clear why the technical proposals of the paper are introduced as techniques for training gans in low data regimes the frequency bias for gans exists not only when the training data size is limited 6 in principle any gan model should benefit from generating images with better spectral properties eg the motivation in lines 3339 also fits to training gans on large datasets the explanation on mitigating unhealthy g and d competition is in my opinion unsupported see weaknesses2 why are the introduced modifications tested only in lowdata regimes do the proposed techniques help to improve the quality of images or spectral properties of gans trained on large datasets 32 related work major concern the proposed method is by design aimed to improve the quality of images in the frequency domain there are other works with similar motivation eg 6 as an overview these methods can be potentially placed instead of the techniques introduced in section 3234 it is not clear whether the effect of the proposed techniques could not be achieved with already existing techniques mitigating spectral biases of gans a proper comparison to previous works is expected 33 evaluation major concern the proposed method is aimed to improve the quality of images in the frequency domain but this aspect is not evaluated quantitatively the only provided comparisons of highlow frequencies are visual which makes it subjective especially since ground truth spectrums to compare to are not available quantitative assessment of gan images spectrums was approached before for example by measuring the accuracy of a binary classifier trained on spectrum features of real and fake images 7 a proper quantitative comparison of spectral properties of images for different models is expected 4 clarity moderate concern the motivation of the frequency alignment loss lines 151161 is not clear firstly what is meant by g can only synthesize arbitrary frequency signals the synthesis of g is guided with the discriminator loss which provides supervision on the image realism which includes the intensity of details of different frequencies so they should not be arbitrary secondly why is hda called a regularizer for d while it is included only to the objective of g finally eq 4 is a form of a reconstruction loss but it is computed between ds features of a real image and gs features of a randomly sampled fake image these images are not necessarily aligned spatially so it is not clear why imposing a pixelwise l1 reconstruction loss is meaningful please explain references 1 fewshot adaptation of generative adversarial networks robb et al arxiv 2020 2 generating novel scene compositions from single images and videos sushko et al arxiv 2021 3 fewshot image generation via crossdomain correspondence ojha et al cvpr 2021 4 towards faster and stabilized gan 
training for highfidelity fewshot image synthesis liu et al iclr 2021 5 training generative adversarial networks with limited data karras et al neurips 2020 6 on the frequency bias of generative models schwarz et al neurips 2021 7 watch your upconvolution cnn based generative deep neural networks are failing to reproduce spectral distributions durall et al cvpr 2020 the authors have adequately addressed the limitations and potential negative societal impact of their work
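for reference a minimal one-level haar decomposition of the kind the paper builds on can be written in a few lines the sign and scaling conventions below are one common choice and the shapes printed at the end are only meant to show what a separate high-frequency branch would consume

```python
import torch

def haar_dwt(x):
    """One level of the 2-D Haar transform; x: (B, C, H, W) with even H and W.
    Returns the low-frequency LL sub-band and high-frequency LH, HL, HH sub-bands."""
    a = x[..., 0::2, 0::2]   # top-left pixel of each 2x2 block
    b = x[..., 0::2, 1::2]   # top-right
    c = x[..., 1::2, 0::2]   # bottom-left
    d = x[..., 1::2, 1::2]   # bottom-right
    ll = (a + b + c + d) / 2
    lh = (-a - b + c + d) / 2
    hl = (-a + b - c + d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh

img = torch.randn(4, 3, 64, 64)
ll, lh, hl, hh = haar_dwt(img)
high_freq = torch.cat([lh, hl, hh], dim=1)   # what an auxiliary high-frequency branch might see
print(ll.shape, high_freq.shape)             # (4, 3, 32, 32) and (4, 9, 32, 32)
```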
### Summary:
|
the majority of reviewers voted for accept and we had a 7 among the mix which i think cancels out the 4 this paper proposes using a wavelet method to bias the gan towards generating the proper frequency distribution which helps especially in the lowdata regime overall i think lowdata modeling is important and although this approach doesnt seem extremely novel it seems useful and on balance the reviewers voted to accept i agree
|
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposes a more generalized form of certified robustness and attempts to provide new results on applying randomized smoothing to semantic transformations such as different types of blurs or distortions the main idea is to use an imagetoimage neural network to approximate semantic transformations and then certify robustness based on bounds on that neural network the authors provide empirical results on standard datasets like mnist and cifar showing that their method can achieve improved results on some transformations compared to prior work overall i would recommend a reject for this paper the paper discusses an important topic robustness to semantic transformations but does not tackle the problem from a perspective that i feel is significant in particular it is unclear to me how useful certified robustness to semantic transformations will be especially for nonresolvable transformations what does it mean to be robust to rotational blur for attack range alpha2 20 for cifar10 for example does this correspond to any guarantees on real datasets or at least empirical improvements on benchmark datasets i appreciate that the authors are thorough with comparing to various prior works in their experiments however their method usually performs very similarly or slightly worse than prior works when comparisons are possible thus the main experimental improvement is that it is now possible to achieve results on certain nonresolvable transformations that prior works are incapable of handling still in my view a key question is unanswered how can we be sure that the generative model is capturing all the possible nonresolvable transformations that we may care about because if we can not answer this question then it seems that certified robustness is not particularly meaningful and instead empirical robustness on benchmark datasets for example measuring accuracy on cifarc as well would be a better measure of success additionally the paper could be presented better in particular the important components should be explained more clearly eg the question i asked above about what alpha2 20 means whereas other parts of the paper can be moved to the appendix for example theorems 1 and 2 and corollary 1 are primarily based on prior work and use up a lot of text but there is not much description or explanation of how these theorems can be applied to get certified robustness to semantic transformations the best example of a useful explanation that i found is the short paragraph just under figure 1 on page 4 which was helpful for me additionally i believe the authors are missing very relevant prior work the work of 1 discusses a similar idea where semantic transformations are modeled via generative models and lprobustness in the latent space of the generative model is used as a proxy for robustness to semantic transformation some more specific comments and questions can the notation section be simplified any further how large is the error term epsilon in theorem 3 look for the generative models used in the experiments 1 httpsarxivorgabs200510247 modelbased robust deep learning generalizing to natural outofdistribution data alexander robey hamed hassani george j pappas after the rebuttal period i have read the authors responses and changed my score from a 3 to a 5 i appreciate the authors taking the time to attempt to respond to the concerns of all reviewers and for updating and improving their work during the rebuttal process i am glad to see that they do provide empirical evidence of improvement to 
commoncorruption robustness compared to augmix one of the stateoftheart approaches for standard commoncorruption robustness and tss although i can not tell how the authors derived their baselines i can not find references to the augmix accuracy numbers that the authors provided in their rebuttal in the tss or augmix paper still in my opinion the papers novelty is limited as the authors and i agreed upon in our discussion the main novelty is not improvement for resolvable transformations prior works that the authors cite perform about the same or better but rather is the ability to handle nonresolvable transformations i agree that robustness to nonresolvable transformations is important however certified robustness to nonresolvable transformations is not meaningful to me because they are only being certified with respect to a neural network that is trained to approximate those nonresolvable transformations without mturk studies to confirm how good the neural networks nonresolvable transforms are i do not find certified robustness here meaningful because it does not necessarily correspond to anything concrete that we can understand on the other hand empirical improvements on nonresolvable transformations would be meaningful thus the main reason i increased my score to weak reject is due to slight empirical improvements over baselines on cifar10c the paper leaves some key questions unanswered and can be presented much more clearly it is also missing very relevant prior work thus i recommend a reject docsepthe authors propose a randomized smoothing based certification algorithm for general semantic transformations the key idea of the work is to use a neural surrogate for the semantic transformations and add noise in the latent space of the surrogate for randomized smoothing the neural surrogate appears to convert the nonlinear multiplicative transformation into an additive operation in the latent space thus allowing for randomized smoothing methods to be applied their approach proposes to decompose the resolvable and unresolvable parts of the semantic transformation by lifting the datatransformation parameters into a larger augmented latent space defined by an imagetoimage network the resolvable parts of the transform can then be certified similar to previous works yang2021 salman2019 using the lipschitzness of the smoothed transforms the nonresolvable part of the transform however are assumed to be lipschitz in the latent space the authors provide theoretical and empirical evidence for gsmooth and show improvement on contemporary methods strengths 1 the topic of certifying generalized semantic transformations is extremely relevant the paper is well motivated and the authors clearly point out differences with existing work 2 gsmooth is able to certify several nontrivial semantic transformations which include difficult combinations such as rotational and defocus blur 3 while the certified accuracy values are not that significant for the more complex transforms the algorithm is good first attempt which succeeds as compared to other methods weaknesses 1 i am not convinced that an ell2 bound in the parameter space for compound transformations makes sense a real world attacker instead would have ellinfty or ell2 type bounds for parameters of each transform for example in the case of rotational blur an attacker could independently increase rotations and blur to the max value given the very different ranges of the two tranformations an ell2 bound may emphasize one transform over the other the current 
algorithm does not seem to take this into consideration 2 the idea of augmenting the latent space to decouple resolvable and nonresolvable components is interesting however from the text it is not clear why such a decoupling would occur without additional training objectives for the surrogate could the authors clarify this 3 its not clear to me why h would not reflect in the m term the proof seems to ignore the effect of h by considering it to be a concatenation of an identity and a zero matrix however the decoder itself may not be lipschitz due to the nontrivial nonlinearities involved a more clear explanation here is warranted 4 the paper lacks a empirical confirmation of the presented bound specifically the gsmooth classifier should be tested against an adaptive attack that has access to the surrogate and the parameter space to find if the bounds do actually hold the certificate should also be tested against standard expectation over transformation attacks this would also help quantify the tightness of the proposed bound minor comments 1 the explanation regarding the dimensionality augementation is confusing and can be presented better specifically we introduce the augmented data tildex additional dimensions to 0 also eq 9 should include the expression for augmented tildeh for better clarity 2 table 2 does not mention the radius for the certified accuracies 3 section 3 needs to be more clear the notation for the augmented parameter space needs some clarity on the dimensions used 4 while the authors present a nice simplification for f1 how about explicitly forcing lipschitzness for f1 f2 and h overall the paper presents a novel and interesting addition to the randomized smoothing literature this appears to be the first attempt at certifying complex semantic transformations and therefore has merit the paper still needs some additional work however specifically in terms of quantifying the tightness of their proposed bound against semantic attacks i also believe the authors should take a more careful look at the actual certificates for the compound attacks an ell2 bound for the entire parameter vector may not capture the worst case for each component of the transform the paper in its current form is marginally below the acceptance threshold however if the authors show empirical evidence of their certificates against semantic attacks i will be glad to improve my score docsepthis paper generalizes randomized smoothing to certify robustness against complex semantic transformations to do that they construct a surrogate neural network mapping images to images to approximate complex semantic transformations and certify robustness with respect to this network in this work the authors first construct a smoothed classifier gx based on smoothing the parameter of the parametrised transformation tau based on the compositional properties of tau they define conditions to obtain a robustness radius theorem 1 then they proceed to construct a surrogate image to image translation model approximating the transformation tau then the parameters of tau and inputs x get augmented to tildetau and tildex respectively to divide the transformation into a resolvable and a non resolvable part the surrogate transformation then leads to a more concrete form of m theorem 2 which is further simplified by assuming that f1 is an affine form corollary 1 finally in theorem 3 the authors provide a correction for the robustness radius rr for the case where surrogate transformation tildetau is approximating bartau dependent on 
the maximum ell2 approximation error the main ideas of the paper are interesting and to the best of my knowledge novel particularly the surrogate network however this paper needs to be written much more clearly to convey the central ideas to the reader particularly section 3 and section 4 are hard to understand and required considerable effort after which some aspects remain still unclear on page 5 what are the dimensions for x and theta or tildex tildetheta and how is x chosen how do you decompose the transformation to get a resolvable and a non resolvable part do i guess correctly from eq 9 that the resolvable part is the sum between theta and x and the non resolvable part is the sum between f1theta theta and f2x the related work li et al 2021 and fischer et al 2020 do evaluate on imagenet can the same be done for gsmooth table 1 indicates that verivis is not able to certify against translations however verivis can certify translations was the certification accuracy 0 what about the other works can the other methods potentially be adapted to handle blur or the pixelate transformation further the runtimes need to be stated in order to judge the trade off between performance and accuracy where do the authors show how they calculate their epsilon for the surrogate transformation networks as needed for theorem 3 the authors claim multiple times abstract and later in the paper that existing work can not handle complex semantic transformations can the authors substantiate this impossibility i could not find a proof or some further justification for this further in the related work section 21 apart from madry et al 2017 these works do not attempt to certify some simple geometric transformations but actually do certify them mardy et al 2017 on the other hand focuses on attacks and defenses the authors clarify the difficulties the approach of li et al 2021 presents for generic transformations can the authors also clarify the difficulties the approach of fischer et al 2020 presents in section 33 the authors imply that existing methods based on convex relaxations and randomized smoothing require the development of a specific algorithm for each individual transformation to what extend is this true for balunovic et al 2019 and fischer et al 2020 minor comments eq 1 it is slightly confusing to write theta sim gtheta just before eq 8 the dimensions for tildetau seem off while the ideas in this paper are to the best of my knowledge interesting and novel many questions remain unanswered and key points in the paper remain unclear docsepthis paper proposed gsmooth a generalized randomized smoothing method for semantic transformations the main technical contributions are 1 introduce the use of an imagetoimage translation network to provide a unified framework for the analysis of nonresolvable semantic transformations 2 theoretical proof on the certified radius and the approximation error from the imagetoimage translation network 3 the empirical performance is superior to existing methods on most transformations more importantly the method can certify many new transformations that are hard to analyze based on existing methods in my opinion this work shows groundbreaking results in certified robustness by providing an elegant and general framework for robustness certification with arbitrary and possibly complex semantic transformations with performance guarantees i believe it achieves an important milestone in adversarial robustness the paper is also wellwritten and easy to follow the comparisons to existing 
methods are thorough and convincing the ablation studies especially the noise distribution part showing that the uniform distribution is quite competitive also shed some new insights into randomized smoothing i have two suggestions that i hope the authors can incorporate into the revised version 1 in table 1 since the considered transformation has very few parameters i would like to see a comparison of empirical robustness by running the actual attack on the certified value to understand the gap between certified and empirical robustness 2 since the proposed method can handle composite semantic transformations can the authors provide some case studies to demonstrate such an advantage minor comment 1 in sec 31 nabla should be defined and i believe the psi function needs to be differentiable 1 strong theoretical and empirical results 2 the contributions are significant for expanding randomized smoothing to more complex semantic transformations
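to make the smoothing-over-transformation-parameters construction discussed in the reviews above more concrete, the sketch below shows the basic monte carlo estimate of a parameter-smoothed classifier. the brightness-shift transform, the toy classifier, and sigma are placeholders invented for illustration, and the sketch deliberately omits the surrogate image-to-image network and the certified-radius computation (theorems 1-3), so it should not be read as the actual gsmooth certificate.

```python
import numpy as np

def transform(x, theta):
    # hypothetical semantic transformation: brightness shift by theta
    # (stands in for rotation, blur, etc., which would need an image library)
    return np.clip(x + theta, 0.0, 1.0)

def base_classifier(x):
    # hypothetical base classifier: thresholds mean intensity into 2 classes
    return int(x.mean() > 0.5)

def smoothed_predict(x, sigma=0.1, n_samples=1000, rng=None):
    """majority vote of the base classifier under gaussian noise on the
    transformation parameter -- the smoothing construction the reviews
    describe, estimated here by plain monte carlo sampling."""
    rng = np.random.default_rng() if rng is None else rng
    votes = np.zeros(2, dtype=int)
    for _ in range(n_samples):
        theta = rng.normal(0.0, sigma)
        votes[base_classifier(transform(x, theta))] += 1
    top = int(votes.argmax())
    return top, votes[top] / n_samples  # predicted class and its vote share

x = np.random.default_rng(0).random((32, 32))
print(smoothed_predict(x))
```

in the actual method the certificate additionally depends on lipschitz-style bounds involving the surrogate network, which is exactly the part the reviewers question above.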
### Summary:
this paper proposes a more generalized form of certified robustness and attempts to provide new results on applying randomized smoothing to semantic transformations such as different types of blurs or distortions the main idea is to use an imagetoimage neural network to approximate semantic transformations and then certify robustness based on bounds on that neural network the authors provide empirical results on standard benchmark datasets like mnist and cifar showing that their method can achieve improved results on some transformations compared to prior work the review committee appreciates the authors taking the time to attempt to respond to the concerns of all reviewers and for updating and improving their work during the rebuttal process the committee is glad to see that they do provide empirical evidence of improvement to commoncorruption robustness compared to augmix one of the stateoftheart approaches for standard commoncorruption robustness and tss however the reviewers still have concerns about the novelty of the paper the main novelty is not improvement for resolvable transformations prior works that the authors cite perform about the same or better but rather is the ability to handle nonresolvable transformations the reviewers agree that robustness to nonresolvable transformations is important however the reviewers think certified robustness to nonresolvable transformations is not meaningful because they are only being certified with respect to a neural network that is trained to approximate those nonresolvable transformations without mturk studies to confirm how good the neural networks nonresolvable transforms are the reviewers do not find certified robustness here meaningful
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper builds on recent advances in deep learning for physical simulation to reduce the computational complexity of learned surrogate models by avoiding local evolution instead of updating the values at each discretized point in space the authors propose to evolve the model in a global latent space by encoding both states and boundaries into lowerdimensional latent vectors the model can evolve dynamics in the latent space and recover the inferred states only when needed the paper also introduces techniques for training such as a new consistency loss directly in latent space a fast way of dealing with inverse optimization problems by backpropagating through time in the latent space and an annealing technique for boundary optimization experiments in various pde settings demonstrate that the proposed model lepde latentevolution of pdes compares competitively against stateoftheart methods while requiring a fraction of the other methods computational resources strenghts the paper is appropriately placed in the current growing literature on scientific machine learning to create a novel fast model for tackling complex problems in the realm of pdes the exposition for the motivation is written crisply and is generally easy to follow the experiments are performed in challenging settings such as boundary control with the 2d navier stokes equations the results show that very large neural networks are not always necessary to represent complex physics and that simple latent mlps with few parameters can learn such physics evolutions thus considerably lowering computational requirements which is significant in deep learning for simulation weaknesses although the speedups of lepde are considerable it is unclear how the proposed approach would compare against other models with a similar number of parameters what if the fno had fewer parameters comparable to lepde would it still be better than the proposed approach in other words it is unclear whether the method is pareto efficient or not since there is no pareto plot considering an extensive hyperparameter search for both lepde and close competitors such as the fno about the novelty there is a possibly missing reference 1 although not published in conference proceedings in which the authors evolve a model in the latent space see figure 1 and section 31 of the reference and the main idea of the paper is very similar to the presented one how does this work compare against it the experimental section in the appendix about the 3d extension of the method 3d navier stokes lacks some experimental details and evaluation the learned model is not compared with other deep learning approaches so it is difficult to assess how the proposed method would perform against other models except for the same version of the model without the global latent space evolution while the comparison in the main text is against other deep learning approaches here it is against the ground truth solver which is known to be generally much slower than deep learning approaches moreover there is no report on statistical errors but only a qualitative plot that also shows noticeable artifacts 1 sanchez a kochkov d smith ja brenner m battaglia p pfaff tj learning latent field dynamics of pdes third workshop on machine learning and the physical sciences neurips 2020 minor comments these are primarily typos and do not influence my score line 235236 burgurs equation line 355 sweep spot line 707 appendix the details of the dataset has already given in line 748 to explore how lepde to larger scale 
turbulent dynamics a see above as stated in the main text no major negative societal impact is to be expected from this work docsepthe authors propose a method lepde to efficiently perform forward simulation and inverse optimization of partial differential equation based models by performing the evolution in latent space a loss combining 3 terms including the consistency between the latent space and the original space is defined and optimized via backpropagation for the inverse task experimental evaluations including an ablation study are provided strength the authors put together a very clear introduction and motivation for their work the method is described clearly in an easily reproducible manner weakness the authors state that a low dimensional representation exists for high dimensional data line 4964 please have a look at the johnsonlindenstrauss lemma and cite appropriate literature applicable here the authors may want to try their experiments on a larger scale a grid size of 64 is too small as the authors mention dimensions in the millionsbillions in the introduction at least one experiment should demonstrate the efficacy of the proposed method at such a larger scale where other baseline methods are computationally very expensive and resource consuming another concern about the proposed method is the applicability of the proposed system in realworld scenarios experimental data are often noisy the authors may want to look at how noise affects the latent space evolution and encoderdecoder performance yes the limitations are discussed in the appendix docsepthis paper looks at the problem of expensive time cost when simulating the time evolution of a pde existing methods employ values at different spatial positions at each time step causing a long simulating time lepde accelerates by learning a low dimensional latent representation and evolving the lowdimensional state rather than the highdimensional original variable optimization is performed upon the reconstruction loss of the latent state and the evolution accuracy empirical advantages on time cost are achieved over previous methods on different pdes strengths the paper demonstrated empirical improvements on both 1d nonlinear pdes and 2d navierstokes pde over multiple previous methods the model has a relatively flexible structure weaknesses did not study when pde states can be encoded into a lower dimensional state andor how many dimensions can be gotten rid of did not study the number of parameters used against other methods using an extra encoder and decoder might take an extra number of parameters resulting in unfair advantages over other methods figure 1 is unclear specifically is the box on the left the initial condition and boundary condition of the pde the schematic in the middle is also unclear the authors proposed to accelerate the evolution of pdes by first reducing its dimension the idea is intuitive and straightforward and is proven empirically with different pdes however a theoretical study on what pdes this reduction can be efficiently applied to is not conducted and it is not argued why the proposed reduction method is better than existing ones like vaes the method is flexible and should be able to be accompanied with existing surrogate models and other methods docsepthe paper proposes a neural pde solver based on learning how to represent the system with an evolving latent global state the advantage is reducing the cost of approximating pde solutions without significantly reducing accuracy the method can also be used to solve some inverse problems
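as a rough illustration of the latent-evolution recipe these reviews summarize (encode the state into a low-dimensional latent vector, evolve it there with a small network, decode only when needed, and train with reconstruction, prediction, and latent-consistency terms), here is a minimal torch sketch; the mlp architectures, dimensions, and equal loss weights are arbitrary demo choices, boundary encoding is omitted, and the real lepde model may differ in all of these details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D, Z = 1024, 64  # hypothetical flattened state dimension and latent size

def mlp(d_in, d_out, hidden=256):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                         nn.Linear(hidden, d_out))

encoder = mlp(D, Z)   # q: state u_t -> latent z_t
stepper = mlp(Z, Z)   # g: z_t -> z_{t+1}  (evolution happens here)
decoder = mlp(Z, D)   # h: latent z_t -> state u_t

def losses(u_seq, horizon=4):
    """u_seq: (T, D) trajectory of flattened states.
    returns reconstruction, latent-consistency and decoded-prediction terms."""
    z = encoder(u_seq)                       # (T, Z)
    rec = F.mse_loss(decoder(z), u_seq)      # autoencoding term
    consistency, pred = 0.0, 0.0
    z_roll = z[0]
    for t in range(1, min(horizon + 1, u_seq.shape[0])):
        z_roll = stepper(z_roll)             # multi-step rollout in latent space
        consistency = consistency + F.mse_loss(z_roll, z[t])  # loss directly in latent space
        pred = pred + F.mse_loss(decoder(z_roll), u_seq[t])   # decoded prediction loss
    return rec, consistency, pred

u_seq = torch.randn(8, D)  # dummy trajectory just to show the call
rec, cons, pred = losses(u_seq)
total = rec + cons + pred  # the real relative weights of the terms are hyperparameters
print(float(total))
```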
summary and strengths originality the paper proposes an approach which is simple to understand and very general allowing any architecture to be used for the various components eg encoders decoders furthermore it appears novel to me although im not familiar with the neural pdes literature quality the proposed method is always at least competitive with existing methods and outperforms them in some important examples the experiments are thorough covering all the important ablations i could think of while reading the paper clarity overall the paper is clearly written and easy to understand but there are some minor problems discussed below significance the authors make the case for the importance of neural pde solver well in my opinion this is a good paper and clearly above the acceptance threshold weaknesses i found few important weaknesses in the paper some minor points are listed below and in the questions section while the paper is generally well written there are a few typos and confusingly worded sentences most of these are unimportant but below i list a few which made the paper harder to understand line 75 flow probe flow to probe line 89 location that satisfies given remove that satisfies line 122 lepde relieves of local evolution i dont understand relieves in this context use another word line 205 the prevents the gradient to pass through to the boundary parameter p such as continuous location im struggling to parse this sentence though i think the point is that its not possible to backprop through discrete variables line 236 eta in 002 should this say 02 line 324 average amount of the advected smoke simulated by the solver i dont follow what this means and i cant find the exact details in the supplement the paper covers technical and ethical limitations clearly in the supplementary material i agree with the authors assessment that there are no obvious negative social impacts i also appreciate that the authors included experimental results where existing methods slightly outperform the proposed approach this helps readers get a clear picture of the relative strengths of the methods in different scenarios
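similarly, a hedged sketch of the inverse-optimization use mentioned in the reviews: with the learned networks frozen, a design parameter is optimized by backpropagating an objective through a rollout carried out entirely in the cheap latent space. the way the parameter is injected and the objective itself are placeholders, and the annealing technique for boundary optimization that the paper reportedly uses is not shown.

```python
import torch
import torch.nn as nn

D, Z, T = 1024, 64, 20  # hypothetical state dim, latent dim, rollout length

def mlp(d_in, d_out, hidden=256):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                         nn.Linear(hidden, d_out))

encoder, stepper, decoder = mlp(D, Z), mlp(Z, Z), mlp(Z, D)
for net in (encoder, stepper, decoder):   # pretend these are already trained
    net.requires_grad_(False)

u0 = torch.randn(D)                        # fixed initial state
p = nn.Parameter(torch.zeros(D))           # design/boundary parameter to optimize
opt = torch.optim.Adam([p], lr=1e-2)

for step in range(200):
    z = encoder(u0 + p)                    # crude way to inject the design; a real
    for _ in range(T):                     # model would encode the boundary separately
        z = stepper(z)                     # rollout entirely in latent space
    u_T = decoder(z)
    objective = (u_T ** 2).mean()          # placeholder design objective on the final state
    opt.zero_grad()
    objective.backward()                   # backpropagation through time in the latent space
    opt.step()
```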
### Summary:
the paper presents a new method for accelerating the simulation and inverse optimization of partial differential equations pdes of largescale systems the proposed approach learns the evolution of dynamics in a global latent space ie with fixed dimensionality the reviewers agree the proposed approach is novel and empirically competitive issues regarding experiments have largely been addressed by the authors in their rebuttal the authors are expected to add some extended discussion if possible on theoretical properties of pdes where their approach is expected to succeed some of the reviewers increased their scores after the rebuttal period
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper proposes a tensor formulation as a unified framework for studying different architectures of neural networks this formulation is mainly used under the special case of linear networks with orthogonal tensor representation to provide a unified proof of many existing results on the implicit bias of gradient descent under different architectures overall the tensor formulation is an interesting and new to my knowledge abstraction on the other hand in terms of writing i found the key ideas behind the unified results regarding implicit bias to be a bit obscured within the tensor formulation since this paper is positioned as a new formulation with a goal of providing useful generalized abstraction i believe it would be beneficial to be more explicit about the core ideas and their connections to existing concepts in particular have the following concrete suggestions 1 although the tensor formulation is more general introducing the results including the orthogonality condition for the case of l2 layers first would be very useful for 2 layers the analysis only requires matrix algebra and it is easy to motivate the generalization and identify the key unifying concepts 2 the condition on orthogonality of tensor formulation which unifies existing results on diagonal and convolutional networks have an interesting connection to linear diagonal networks i believe the condition is equivalent in the sense of if and only if to stating that there is exists some orthonormal transformation of the layer parameters vl vl ulvl and some linear transformation of input x xsx such that after the transformations the networks behaves like a linear diagonal network thus at the core networks with orthogonal tensor representation are equivalent diagonal networks in a different parametrization moreover from the proof one can also show that this specific of change in parametrization does not affect the gradient flow path ie that gradient flow on vl is equivalent to gradient flow on the orthonormal linear transformation of the parameters tildevlulvl it would be useful to discuss the connection between the orthogonality condition and diagonal networks explicitly in the text as a motivationclarification for the orthogonality condition or as proof sketch 2b another connection is that for l2 where mx is a matrix the orthogonality condition is also equivalent to commutativity of mxi for the dataset inputs xi this also highlights why for a single datapoint the analysis goes through as mx is a single matrix which is commutative with itself 3 one of the main differences from the existing work is claimed to be the fact that this analysis does not make assumptions on convergence of loss however the main theorems introduce additional assumptions on initialization that effectively guarantee convergence of loss it is not clear if this assumption is more useful than assuming the loss converges the assumption on initialization is more easy to control however such constraints on initialization in turn affect the implicit bias explicitly for eg in regression the effect is explicit in the definition of q and for classification although the asymptotic direction is independent of initialization recent work on diagonal networks show that the initialization has strong effects on the convergence to the asymptotic max margin solution see httpsarxivorgabs200706738 docsepthe paper gives a general framework of what they call linear tensor networks which is essentially linear neural networks that is expressed in the tensor formulation under 
certain assumptions they show that the network parameters converge to the direction of singular vectors of the tensor for linear fullyconnected neural networks they recovered prior result for classification and underdetermined linear regression they show under decomposable assumption the limit point of gradient flow and its characterization they further corroborate their results partly by a few very simple regression and classification task strengths 1 the paper is organized in a way easy to read and follow given the amount of different pieces of results it has it has a clear introduction section to explain the motivation and their contributions the framework is written with examples of simple models within each piece of result when studying the limiting behavior of gradient flow the assumptions required are stated clearly 2 the contribution of the paper especially with respect to a number of related works has been listed in a rather detailed and clear way in the paper for instance for separable classification the relationship with counterexamples for certain initializations in prior work the comparison with prior results that only consider fulllength filters etc these comparisons and relationships can be found everywhere in the paper and i think it meets the satisfactory criterion for a framework paper that aims at unifying and generalizing the prior understandings 3 the incremental contribution from prior work is valid i think removing the convergence assumption for a number of prior works in characterizing the gradient flow and that generalizing the woodworth et als result for beyond the difference structure are both cool while the setting that the paper has considered is still arguably oversimplified i still view it as a nice step toward a better understanding for the implicit bias of the gradient flow for linear neural networks that offer nice insights weaknesses 1 to me some motivation and explanation behind the assumptions are lacking especially in remark 1 if the initial direction is for guaranteeing convergence then it might be more restrictive in use as compared with the convergence assumption right if not some examples will be nice also i believe it will be helpful to say a few words first on the assumptions of matrices for theorems like theorem 7 2 the classification experiments in section 6 couldnt serve as corroboration for the theoretical results in the paper even though it is in align with some recent papers it actually only shows the difference of limiting finitehorizon behavior in dependence of the initial distribution in that sense i dont know how much it corroborates or matters considering the points that this paper is making here also the synthetic dataset for the experiments seem a little bit too simple 3 overall i find this discussion on the problems implicit bias in terms of initial distribution for the 2layer one data point case a bit confusing it will be nice to state the motivation for considering this case and also its implication for some for more general overparametrized setupsdocsepthis paper first provides a general framework to represent fullyconnected networks convolutional networks diagonal networks etc using tensors for classification with the exponential loss linearly separable data and gradient flow it is proved that the parameters of a linear tensor network converge to the singular vectors of the limiting data tensor when the data tensor is orthogonally decomposable which is true for linear diagonal networks and linear fulllength convolutional 
networks the function computed by the network is shown to converge in direction to a stationary point of the l2lmargin maximization problem where l denotes the depth of the network for underdetermined regression and the squared loss this paper proves that gradient flow finds a global minimum which also minimizes a normlike function this paper further considers the special case where there is only one data point and two layers and provides empirical support i think the general framework which represents networks using tensors is interesting and may be useful elsewhere it is also nice to have a unified analysis of the implicit bias which also removes a few assumptions from prior results on the other hand i think it would be better to include a thorough comparison between proof techniques used in this paper and in prior work for example ji telgarsky 2020 prove that the weights and the gradients converge to the same direction and theorem 1 of this paper assumes that the weighted average of the data points by the residual vector r also converges in direction the claim of theorem 1 then almost already follows what remains to be proved is that dvl2dt is the same for all layer l but similar results have appeared in arora et al 2018 simon s du wei hu and jason d lee algorithmic regularization in learning deep homogeneous models layers are automatically balanced etc here are some minor comments 1 for linear tensor networks i think eq 6 is actually a multilinear form i think it is worth pointing out and can help the readers 2 in the equation ulh ul ikl in assumption 1 it seems that the dimensions dont match 3 it seems that the last sentence of section 52 shouldnt be there
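as a small numerical companion to the implicit-bias discussion above (independent of the papers tensor formalism), the following sketch runs plain gradient descent on a two-layer diagonal parametrization w = u*u - v*v of an underdetermined least-squares problem; with a small initialization scale this parametrization is known to bias the solution toward small l1 norm, which is the kind of effect the reviewed results characterize. the data, step size, and initialization below are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 40                                   # underdetermined: fewer equations than unknowns
w_star = np.zeros(d); w_star[:3] = 1.0          # sparse ground-truth interpolant
X = rng.standard_normal((n, d)) / np.sqrt(n)
y = X @ w_star

# two-layer "diagonal network" parametrization  w = u*u - v*v  (elementwise)
alpha = 0.1
u = np.full(d, alpha); v = np.full(d, alpha)    # small, balanced initialization
lr = 0.05
for _ in range(50_000):
    w = u * u - v * v
    g = X.T @ (X @ w - y)                       # gradient of 0.5*||Xw - y||^2 w.r.t. w
    u -= lr * 2 * u * g                         # chain rule through the parametrization
    v -= lr * (-2) * v * g

w_gd = u * u - v * v
w_l2 = np.linalg.lstsq(X, y, rcond=None)[0]     # minimum-l2-norm interpolant
print("residuals:", np.linalg.norm(X @ w_gd - y), np.linalg.norm(X @ w_l2 - y))
print("l1 norms :", np.abs(w_gd).sum(), np.abs(w_l2).sum())
# with a small initialization scale the gradient-descent solution typically has a
# much smaller l1 norm, i.e. it is biased toward the sparse interpolant
```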
### Summary:
this paper suggests an extension of previous implicit bias results on linear networks to a tensor formulation and arguably weakens some of the assumptions of previous works eg loss going to zero is replaced with initialization assumptions the reviewers were all positive about this work saying it is clearly written and an original significant contribution there were a few issues raised eg the novelty of the proof techniques and the authors responded the reviewers did not clarify if this response satisfied these concerns but did not change their positive scores i will take this to indicate they still recommend acceptance
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper presents a method for statically learning code invariants from source code using a variant of transformers strengths the paper demonstrates that on the synthetic dataset the proposed approach can infer many invariants weaknesses the evaluation with a synthetic dataset seems very weak if checks are not good proxies for useful invariants as in most programs there are many if checks that are simply unreachable or redundant in practice not all invariants are useful for the downstream tasks code fixing bug finding etc mentioned by the authors without a more direct evaluation it is very hard to tell how useful the learned invariants actually are for these tasks the paper will be significantly stronger if the authors can evaluate their tool against existing loop invariant inference datasets with ground truth data like those used in si et al neurips 2018 the transformerbased model seems to be directly reused from hallendoorn et al iclr 2020 thus the contribution in terms of model design is limited the authors also did not citecompare against the current stateoftheart loop invariant learning work cln2inv learning loop invariants with continuous logic networks ryan et al iclr 2020docsepthe paper proposes to discover likely invariants for code by observing snippets that check for the given conditions and assuming these conditions encode invariants for the code executing before and after the condition check was checked to hold respectively not hold for negated invariant this is a novel idea that uses code with correct if conditions to guess the invariants for code that has the conditions missing my main criticism for the paper is that it does not give a compelling reason why one would want to apply this technique while this is a smart way to obtain the invariants the paper does not give too much intuition why it could be useful in practice even on the examples in the paper the machine learning algorithm probably learns invariants from identifier names and not from the semantics of the code around the authors can relate the work to a large corpus of learning invariants for functions based on things like usages of functions eg like done in 1 or 2 and the techniques there find actual bugs in code for example if the invariant is nonnullness of x this may be because x comes from a function that sometimes returns null or because it comes from a function that does not accept null if i would want to do for example bugfinding i would want to know contradicting invariants coming from the two functions in terms of execution the paper is well written and the techniques look stateoftheart from a machine learning perspective although there are no baselines given however the experiments are insufficient for showing usefulness of the idea with daikon overlap in the 70 range and precision also in the same range it is not clear that the tool gives any new valid invariants on top of daikon in terms of bugfinding the results are also inconclusive that any bugs can be found if i would put the tool to test 100 methods where normally less than 10 of them are buggy i can expect 20 false positives minor theorem proofers theorem provers figure 5 a talks about overlap but axis says precision 1 ted kremenek paul twohey godmar back andrew y ng dawson r engler from uncertainty to belief inferring the specification within osdi 2006 2 insu yun changwoo min xujie si yeongjin jang taesoo kim mayur naik apisan sanitizing api usages through semantic crosschecking docsepsummary this paper proposes a novel approach for training 
a transformer model to predict program invariant the model is trained using training data that are synthesized from explicit conditional checks in functions and is used to predict invariants of unguarded blocks in similar functions strength 1 the paper addresses the important and challenging problem of program invariant generation from static code in a scalable way 2 realworld missing ifguard bugs are detected using the proposed model weakness 1 the idea of synthesizing training data by automatically converting explicitly guarded code to its implicitly guarded counterpart is interesting however the effectiveness of the trained model to infer program invariants in a general way is not clear from the experimental results the evaluation with realworld bugs focuses on missing ifguard bugs the difficulty of detecting this bug cannot be understood as there is no accuracy results from an existing tool for example daikon in detecting this realworld 2 although a comparative analysis with daikon is presented the presented approach focuses on a narrower class of invariants compared to daikon moreover daikon relies on execution traces a comparison with an existing ml based approach using static code eg 1 would provide more interesting insights about the models accuracy 3 a contrastive hinge loss is introduced to address the syntactically close but semantically opposite cases however from figure4 it seems the performance of the model is not impacted in a significant way by the loss function 1 p garg d neider p madhusudan and d roth learning invariants using decision trees and implication counterexamples question to author please address and clarify the cons above docsep summary the paper presents a technique for inference of certain kinds of program invariants directly from the programs source code the basic idea is to treat conditional statements as hints for facts about the program state that should hold at a given program point strengths this is a challenging problem and the paper shows some successful examples the idea that some useful invariants can be inferred based on local information while not new is interesting and can lead to follow up work of practical value the contrastive hinge loss of syntactically close but semantically opposite statements is interesting weaknesses the paper falls short on the framing of the invariant inference problem and on the technical details of what does it mean to infer a meaningful local invariant starting from trivialities like the fact that the problem is generally undecidable and not as stated in section 2 through the use of incorrect terminology for invariants guards prepost conditions etc this just makes the paper hard to follow fundamentally beyond simple invariants array bounds nullness checks it is not clear why program invariants would generalize well across different programs the exception is of course the use of libraries and invariants in library contracts as learned in pldi19a pldi19b for nullness guards you should take a look at httpsarxivorgpdf190206111pdf i think it would improve the paper if you could focus on a certain kind of invariants and show that these invariants can in fact generalize across programs as a concrete example take your own figure 1 assuming that these are two different programs there is no reason to assume that the contract of calculatetime remains the same had calculatetime been part of some standard library shared between programs the case for generalization would have been much stronger there has been so much work on 
static inference of invariants that it is impossible to list even all the closely related work some things that are worth looking into are the work on scalable static analysis scaling the inference of necessary preconditions logozzo and bug detection that is based on belief deviant belief which is closely related to your intuition about naturalness and humanwritten invariants also helpful to look at loopinvariant and the related work mentioned there comparison to daikon as you correctly point out daikon infers likely prepostconditions the description of how you compare your invariants to those inferred by daikon is not clear unless all relevant cases related to preconditions on method parameters questions for authors it would be helpful to see more characteristics of the real missing if conditions that you have collected i am wondering if these are simple conditions of the kind of missing nullness checks or missing arraybound checks the way in which you have collected these samples is likely to create a bias towards simple missing conditions how many terms are in these conditions how many of them are nullness checks how many are arraybound checks how many include simple string operations andor other simple method calls as implied by table 2 improving the paper 1 i liked the idea of removing conditionals to infer likely necessary preconditions it would help to clarify when what you predict is a guard a precondition an invariant or something else 2 you are clearly not trying to infer any loop invariants and it would help clarify that upfront references pldi19a scalable taint specification inference with big code httpsdlacmorgdoi10114533142213314648 pldi19b unsupervised learning of api aliasing specifications httpsdlacmorgdoi10114533142213314640 scaling scaling static analyses at facebook httpsdlacmorgdoi1011453338112 logozzo automatic inference of necessary preconditions httpslinkspringercomchapter101007978364235873910 deviant bugs as deviant behavior a general approach to inferring errors in systems code httpsdlacmorgdoi101145502034502041 belief static error detection using semantic inconsistency inference httpsdlacmorgdoiabs10114512507341250784 loopinvariants learning loop invariants for program verification httppapersnipsccpaper8001learningloopinvariantsforprogramverification
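to make the training-data construction that the reviews describe more tangible, here is a toy sketch of mining (function, candidate invariant, guarded code) triples from explicit if-checks, where the condition is taken as a likely fact for the guarded branch and its negation for the else branch; this is only an illustration of the general recipe discussed above, not the papers actual pipeline, model, or code representation.

```python
import ast

def mine_guard_pairs(source: str):
    """return (function_name, candidate_invariant, guarded_code) triples
    harvested from explicit if-checks in python source (python 3.9+)."""
    pairs = []
    tree = ast.parse(source)
    for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
        for node in ast.walk(fn):
            if isinstance(node, ast.If):
                cond = ast.unparse(node.test)
                body = "\n".join(ast.unparse(s) for s in node.body)
                pairs.append((fn.name, cond, body))                   # condition holds here
                if node.orelse:
                    orelse = "\n".join(ast.unparse(s) for s in node.orelse)
                    pairs.append((fn.name, f"not ({cond})", orelse))  # negated invariant
    return pairs

example = """
def safe_div(a, b):
    if b != 0:
        return a / b
    return None
"""
for name, inv, code in mine_guard_pairs(example):
    print(name, "|", inv, "|", code.replace("\n", "; "))
```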
### Summary:
the paper gives a way of constructing a dataset of programs aligned with invariants that the programs satisfy at runtime and training a model to predict invariants for a given program while the overall idea behind the paper is reasonable the execution in particular the experimental evaluation is problematic as a result the paper cannot be accepted in its present form please see the reviews for more details
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
summary this paper proposes two new gnn rewiring layers motivated by a bound from lovsz bounding the difference between the commute times between any two nodes and a term given through their degrees by the second eigenvalue of the normalised laplacian and the minimum degree of the graph based on this bound the authors propose to use either their first proposed layer ctlayer or the second proposed layer gaplayer instead of computing the eigenvectors involved in the ctlayer with runtime roughly o(kn^2) for k eigenvectors they formulate a loss function that nudges the embedding vectors into this direction they compare the two approaches with some graph rewiring baselines on typical benchmarks and achieve competitive performance main review this paper provides two interesting approaches to the graph rewiring problem however my main critique is that the connection to the lovsz bound seems rather loose one could derive and motivate the two proposed layers without it hence focusing on it in the title is somewhat misleading i am also not sure if i would call it the lovsz bound as it is not immediately clear which bound and property of lovsz is meant eg it could also be the lovsz theta function either way this work is probably interesting to the log community and tackles an important and recent problem thus i would vote for acceptance however i would recommend motivating the two developed layers more clearly as they seem rather adhoc and also discussing how practitioners should choose between the two more clearly questions the formulation of ctlayer is essentially the optimisation problem yielding the first k eigenvectors of the laplacian hence the computed embeddings should be very similar to wellknown embeddings based on laplacian positional encodings what is the difference to these kinds of approaches further minor remarks line 38 typo to the capture to capture line 43 and following khop khop with k with k etc huang et al 19 prove they dont prove it mathematically maybe better say show line 160 is defined similarly maybe is defined analogously is a more appropriate wording figure 1 colors in greyscale unclear the 01 vertical bar just looks black i would recommend perceptually uniform colormaps that work in greyscale and colored semidefinite positive positive semidefinite spectral graph probably meant spectral gap many mathematical functions operators are not formatted properly please use eg declaremathoperator for functions like mlp etc rebuttal i raised my score from weak accept to accept as explained below docsepthis paper proposes diffwire a fully differentiable inductive and parameterfree graph rewiring algorithm based on the lovsz bound diffwire uses the commute times as a relevance function for edge reweighting and proposes two types of new layers that either learn the commute times ctlayer or optimize the spectral gap for the network and the task gaplayer experimental results show the proposed approach performs well on graph classification tasks strengths the paper is wellwritten with rigorous theoretical grounding the connection of ctlayer and gaplayer to the two sides of the lovasz bound and the graphs spectral gap is interesting experimental results demonstrate the good performance of diffwire on various graph classification datasets ctlayer and gaplayers performance differences on sbm verify the assumption of gaplayer being more suitable under the case that the lovsz bound is restrictive weaknesses only two datasets have node features as stated in the paper diffwire performs well on
graphs with no node features as it can leverage the topology of the graphs however i am not sure if diffwire can also utilize the node features well when they are informative and asking for related justifications or additional experiments to be provided questions why are experiments only conducted on graph classification tasks can the proposed framework also work for node classification tasks like 1 mentioned in the paper it would be great if the authors can discuss this or provide experiments on node classification datasets i am interested in if the graph homophily affects the effectiveness of the proposed method a31 claims to provide nodes edges average degree assortativity number of triangles transitivity and clustering coefficients in table 3 but i cannot find the assortativity assuming this refers to homophily it would be desirable to include related data and discussion on graph homophily i am unclear about how the statement the smaller the graphs bottleneck the more useful the ctlayer is explains the better performance of ctlayer on collab can this be further elaborated overall this paper introduces a differentiable framework for graph rewiring with good theoretical and empirical support albeit having several confusions i tend to accept this paper 1 jake topping francesco di giovanni benjamin paul chamberlain xiaowen dong and michael m bronstein understanding oversquashing and bottlenecks on graphs via curvature in international conference on learning representations 2022 docsep summary the paper proposes a novel graph rewiring scheme that is differentiable maintains graph topology and doesnt require hyperparameter tuning the authors propose two different layers for mpnns that satisfy these properties a ctlayer which approximates commute times and reweights edges based on this and a gaplayer which rewires the adjacency matrix to minimise the spectral gap the proposed layers perform well on the given benchmarks reasons for score although the method proposed is novel very interesting and performs well on the given benchmarks i currently lean towards a reject as more needs to be done on describing how the method actually tackles the limitations of mpnns discussed in the abstract and introduction oversmoothing oversquashing and underreaching additionally the experimental analysis could be more convincing strengths the theoretical justification for the layers in terms of differentiability and retaining the graph topology is very convincing the use of the lovsz bound is novel and the framework given for rewiring is very exciting and i expect the community will find it useful and be able to build from it the paper does a nice job of showing the relation between commute times curvature and the spectral gap the performance of these layers seems very promising the authors do a good job of explaining the reasons for rewiring in general and its importance for the community weaknesses the authors state how other works have proposed edge sampling to avoid oversquashing line 128 and removed and added edges to reduce oversquashing line 134 however the authors are not clear how their specific approach also mitigates these issues given that the abstract introduction and related work focus a lot on oversquashing underreaching and oversmoothing i think it would be extremely beneficial to outline how your specific layers can improve with regards to these issues either with a theoretical or experimental justification the authors state the intention of the experimental section is to shed light on the 
properties of both layers line 312 if this is the goal then i think more needs to be done in explaining the uncovered properties of the layers its not clear to me what properties you are trying to test maybe write in the intro of the section why you specifically choose these datasets and what insights you want to gain from this graph classification the current results seem more like a limited analysis against a couple of previous rewiring approaches and the performance increase based on bottleneck properties is not rigorously analysed some questions i might want to think about is improvement gained by rewiring or having the commute times as a feature commute times improves gnns here 1 how are the layers changing the number of edges in practice across the datasets are there no longer an exponential increase in nodes in the mpnn computational graphs to reduce oversquashing maybe you could put some of the latent space analysis in the main text is oversquashingoversmoothing being reduced you can increase number of layers without performance drop minor suggestionsthoughts figure 1 this figure is quite hard to understand as there are a lot of edges in this graph maybe reducing this in the original graph or including node degrees or making it clear the number of edges would be useful the larger the bottleneck the more useful gaplayer is line 364 maybe you could convince me with a table of bottleneck vs accuracy erdosrenyi performs similarly between the two and the lovsz bound is very restrictive here there is a lot of repeats in the introduction and related work for example lines 94115 seem to be very similar to the intro these could be mergedremoved i think line 329 you mention the runtime for graph classification for another rewiring approach maybe you could state the complexity of your layers and be clearer here how your approach is an improvement line 94 maybe you could add references where mpnns have shown competitive performances on these tasks writingspelling line 126 missing fullstop 1 affinityaware graph networks velingker et al recommendation to accept having seen the authors response and the updates of the paper i am changing my score to a clear accept docsepsummary this paper proposes a novel aspect for graph rewiring task which is a wellestalished problem in the graph learning community the proposed model leverages lovsz bound as a fundamental tool based on which some existing results are borrowed to shed lights on the task of graph rewiring on top of the theory two network layers on the basis of ct distance and spectral gap are proposed for improving graph rewiring finally experiments on several realworld and synthetic datasets are conducted to verify the approach overall i vote for weak accepting the proposed model is somewhat novel and theoretically grounded and it aims to address an important and open research problem in the community my major concern lies in the lack of empirical comparison with more graph rewiring baselines see details below hopefully the authors can address my concern in the rebuttal period strengthes 1 this paper studies a significant research problem in the ml community and provides a novel perspective for problem solving and analysis 2 while the theory is already established in the literature they are related and helpful for sheding insights on the problem and pave the way for the reasonable methods 3 the proposed methods are reasonably validated by abaltion studies in the experiments 4 the figures for illustrating the models are pretty nice and helpful for 
understanding the implementation of the models weaknesses the weakness of this paper lies in the presentation of the model section while the logic is clear the technical contents seem fairly dense and not easy to follow also while the related work section compares with graph structure learning which aims to solve a similar research problem as is done by this work more comparison with gsl methods in the experiment section is expected see details below questions 1 the proposed approach aims to improve the input structures via topological and spectral properties and consistency in a differentiable and inductive way but the learnt structures are not optimized towards the downstream tasks i wonder if the methods can truly benefit downstream prediction or it can merely maintain some good properties for the new structures also the paper argues that the new model can learn inductive graph rewiring but it is not clear what the inductive specifically means more elaboration on this would be helpful if i miss something 2 the experiments only compare the proposed models with several simplified variants as ablation studies i would suggest adding more comparison with existing models on graph structure learning which may help to make the contributions of this work more convincing for example comparison with simple similaritybased rewiring models like knn kernel etc or other learnable models eg attention
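for reference, the quantity at the center of these reviews can be computed directly: commute times follow from the pseudo-inverse of the graph laplacian via ct(u,v) = vol(g) * (l+_uu + l+_vv - 2 l+_uv), and existing edges can then be reweighted by them. note that diffwires ctlayer learns a differentiable approximation of such an embedding inside the network rather than forming the (roughly cubic-cost) pseudo-inverse as this sketch does, so this is only a reference computation of the underlying quantity.

```python
import numpy as np

# tiny undirected graph: two triangles joined by a bridge edge (2,3)
A = np.array([[0,1,1,0,0,0],
              [1,0,1,0,0,0],
              [1,1,0,1,0,0],
              [0,0,1,0,1,1],
              [0,0,0,1,0,1],
              [0,0,0,1,1,0]], dtype=float)

deg = A.sum(axis=1)
L = np.diag(deg) - A                 # combinatorial laplacian
Lp = np.linalg.pinv(L)               # moore-penrose pseudo-inverse
vol = deg.sum()                      # vol(G) = 2|E|

# commute time CT(u,v) = vol * (Lp[u,u] + Lp[v,v] - 2*Lp[u,v])
d = np.diag(Lp)
CT = vol * (d[:, None] + d[None, :] - 2 * Lp)

# keep only existing edges, reweighted by their commute times (the relevance
# function the reviews mention); the bridge edge on the bottleneck gets the
# largest commute time among edges
W = np.where(A > 0, CT, 0.0)
print(np.round(W, 2))
print("bridge edge commute time:", round(CT[2, 3], 2),
      "intra-cluster edge:", round(CT[0, 1], 2))
```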
### Summary:
we discussed this among the pcs and agree with the ac assessment we encourage authors to take the comments into account in particular those pertaining to accessibility and framing in terms of related work
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper studies the video and text retrieval task as the clipvideo contains rich information in a sequence of frames while the text contains limited information about one clipvideo there is a semantic gap between the video and text matching to bridge the semantic gap the author proposed to learn prototypes from the video and match the text with the video via those learnt prototypes the prototypes should capture different semantic level information about one video to learn these prototypes the author proposed a textadaptive multiple visual prototype matching model specifically it maps a video into a set of visual representations it then learns an aggregation function to aggregate those visual representations to form the prototypes then the prototypes are used to match the text for video and text retrieval the proposed approach achieves better performance on several video and text retrieval datasets pros the main argument that there exists a semantic gap between the text and video is important and interesting the proposed model that uses the prototypes to represent the video and matches the prototypes with the text is also interesting and promising although how to define the prototype is an open question this approach provides one of the definitions of the prototype and also shows one implementation of this idea cons 1 prototypes i wonder whether the prototypes really capture the semantic meaning of the video concretely what is the performance of setting the number of prototypes to be the same as the number of tokens b as in l128 i wonder whether the improved performance is due to the similarity function eq 3 or due to the proposed prototypes 2 i didnt fully understand the variance loss specifically l145l146 wed like all masks in a batch of videos is pretty confusing 3 what is the meaning of figure 3 specifically what is the definition of the minimum of intratext similarity and how to interpret figure 3 although l112l121 tries to explain this figure i still didnt understand the aim and the meaning of this analysis 4 as the aim of prototypes is to capture different semantic meanings of the video i wonder whether the proposed prototypes really achieve this goal specifically are there any qualitative or quantitative resultsexperiments to verify this argument 5 my main concern is that the improved performance might not be due to the proposed prototypes there is no negative societal impact docsepthe paper tackles the problem of textvideo retrieval the authors argue that current models concentrate on mapping one video to one sentence and that using only one feature that needs to be matched with multiple texts is hard to achieve and instead propose a textadaptive multiple visual prototype matching where the model captures multiple prototypes that describe a video in this way the query is matched with the most similar prototype instead of having only one videolevel representation finally the authors test their approach on four retrieval benchmarks the paper tackles an important problem textvideo retrieval and has an interesting idea at its basis it reports results on four datasets and has good results my main concern is related to the novelty while i like the idea and find it interesting i feel like it is quite closely related to the idea of probabilistic embeddings that was introduced for textimage retrieval while there are definitely differences i feel like a proper comparison and discussion between the two needs to be added chun sanghyuk et al probabilistic embeddings for crossmodal retrieval proceedings of the ieeecvf conference on computer
vision and pattern recognition 2021 figure 4 maybe make the picture larger there still is space on the sides so the frames can be made larger minor lines 169170 look weird can they be moved to the next page the limitations are somehow discussed through the ablations the potential societal impact is not discussed docsepwhen performing video retrieval from a text query or vice versa representing each video with a single embedding can be problematic because a video can depict many things and potentially match multiple different search queries one solution is to represent the video by multiple embedding prototypes and define videotext similarity to be the similarity between a text query and the nearest of the videos prototypes the authors propose to create k different prototypes for each video by computing vit tokens for the 8 input frames and applying kdifferent learned functions each of which masks out a subset of the tokens and sums the remaining ones these functions are learned as part of dual textvideo encoder setup using symmetrical contrastive loss of course if the learning model produces several nearlyidentical prototypes for a video then theres no gain to prevent this a training loss term is introduced to maximize the variance of the aforementioned mask functions encouraging diversity between prototypes strong recall numbers are demonstrated on a variety of videotext retrieval datasets the components of the model are validated with ablation studies originality although the idea of representing a complex item with multiple prototypes for the purposes of retrieval is not novel this particular instantiation learning k different views that aggregate vit tokens across all the input frames is novel and very interesting particularly because its learned rather than adhoc and can potentially attend to one element across many frames of the video i would like to have seen a little more investigation as to what the model is learning eg a visualization of what each mask is attending to or a demonstration that all of the prototypes for a video are useful matching different ground truth queries quality the method and evaluation seem sound and convincing it might have been nice to see a comparison to a baseline with one prototype per frame as proposed on lines 122123 clarity line 130 what does we find that the maskbased production of kth is enough mean what baseline method is used in tables 5 6 i couldnt find it explained anywhere equation 2 what is the dimensionality of the output of fmask is it 1dimensional or ddimensional the paper would benefit from additional editing for grammar some references are incorrect eg figure 33 on line 118 there are duplicate sentences on lines 105 and 107 significance i think the idea is technically novel and the performance on standard benchmarks is strong enough that this paper would be of interest to the video searchretrieval community and possibly to the videotext representation learning community as well the authors did not describe any shortcomings i would appreciate some mention of where the model fails or could be improved for example the premise that a query need only match a single prototype might lead to lowprecision results think about matching based on an or vs and operator is this observed are there any other areas for future work i dont see any significant potential negative societal impact docsepthis paper tackles the task of videotext retrieval motivated by the fact that multiple different text descriptions can apply to the same video the paper 
proposes a new model where the video is represented by multiple prototypes in the videotext embedding space this allows multiple different texts to have a high similarity score with the video to encourage these different texts to be diverse the paper also proposes a variance loss the proposed method is tested on 4 videotext retrieval datasets msrvtt msvd didemo and lsmdc strengths the observation that multiple different text descriptions can apply to the same video is interesting the analysis on intertext and intratext similarity of captions for different videos is valuable the proposed method is simple and the ablations should a clear improvement over the baseline approach weaknesses the main issue i have with this paper is that it ignores prior works which consider multiple relevant captions for imagesvideos a considers onetomany correspondences between images and text and proposes a probabilistic embedding method so that multiple texts can be matched with a single image the related work of this paper also highlights previous methods in images considering manytomany correspondences which are also relevant in video b highlighted issues with the onetoone assumption in evaluation of video retrieval methods and proposes a new evaluation metric to address this it also performs some additional experiments where videos are trained with other relevant captions from the training set in addition to their annotated captions c considers multiquery video retrieval where multiple queries are used in both training and testing c also has an experiment where they show multiquery training can improve on singlequery testing figure 6 and the results seem to outperform the ones in this paper without these works being acknowledged in related work and the differences betweeen these and the newly proposed method being explained i dont think this paper can be accepted where appropriate prior works which consider multiple relevancies should also be compared to this paper claims state of the art but it appears c may actually outperform it i would like clarification from the authors in the rebuttal on this point i also think a should be compared to it was created for imagetext retrieval but since the motivation and problem it tackles are so similar to this paper it seems like a valuable comparison particularly as the proposed method in this paper doesnt seem to be specific to video other than the backbone network the features come from the choice of datasets is strange msrvtt and msvd make sense as each video is annotated with multiple captions however didemo and lsmdc dont seem to fit with the problem tackled in the paper for didemo the paper states all sentence descriptions for a video are concatenated into a single query and for lsmdc there is no mention of there being multiple captions per video vatex and activitynet captions would be suitable as they both have multiple captions per video little insight is iven about the results for instance the proposed method consistently improves on recall1 and recall5 but doesnt on recall10 tables 1 and 2 this is not explained there is a potential issue with the annotations for the datasets used as the datasets are not exhaustively annotated with possible captions this means it is unknown whether captions from another video may correctly describe part of the video in question therefore some correct retrievals may be penalized by the evaluation metric this not reason to reject this paper as this is potentially also a problem with prior works in videotext retrieval im just 
interested in whether the authors think this is an issue and how much it would affect the results a probabilistic embeddings for crossmodal retrieval sanghyuk chun seong joon oh rafael sampaio de rezende yannis kalantidis diane larlus cvpr 2021 b on semantic similarity in video retrieval michael wray hazel doughty dima damen cvpr 2021 c multiquery video retrieval zeyu wang yu wu karthik narasimhan olga russakovsky arxiv jan 2022 final justification my main concern prior to the rebuttal was that the paper ignored prior works which shared the motivation of multiple different text descriptions being applicable to a video or image the rebuttal has addressed this concern by providing comparison to a an imagefocused method which uses a probabilistic embedding and c a video retrieval method which uses multiple relevant captions in training the revised version of the paper also does include a little more insight into the results although this could have been better i do appreciate the new figure 5 visualizing the mask values and prototypes with the new results comparing the paper to similar prior works i recommend accepting this paper however i think it is important that in the final version of the paper these prior works are also included in the related work and the difference with this work is clearly highlighted limitations or ethical concerns are not mentioned which is a shame considering there was a half page of space left which could have been used for this
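for readers trying to picture the model these reviews are debating, a minimal sketch of text-adaptive multi-prototype matching is given below; it is a torch-style illustration, and the mask parameterisation, the hinge-style variance term, the temperature and all tensor shapes are my own assumptions for illustration, not the authors code

```python
# illustrative sketch (not the authors' code): text-adaptive multi-prototype matching
# assumptions: per-video token features from a ViT backbone, k learned mask heads that
# aggregate tokens into k prototypes, video-text score = best-matching prototype,
# symmetric InfoNCE, plus a variance term that pushes the k masks apart across the batch
import torch
import torch.nn.functional as F

class PrototypeHead(torch.nn.Module):
    def __init__(self, dim: int, k: int):
        super().__init__()
        self.mask_mlp = torch.nn.Linear(dim, k)  # one mask logit per token per prototype

    def forward(self, tokens: torch.Tensor):
        # tokens: (batch, n_tokens, dim) -> prototypes: (batch, k, dim)
        masks = torch.sigmoid(self.mask_mlp(tokens))           # (b, n, k)
        protos = torch.einsum('bnk,bnd->bkd', masks, tokens)   # masked sum over tokens
        return F.normalize(protos, dim=-1), masks

def tamvpm_losses(protos, masks, text_emb, temperature=0.05, lambda_var=1.0):
    # protos: (b, k, d), text_emb: (b, d); a caption only needs to match one prototype
    text_emb = F.normalize(text_emb, dim=-1)
    sims = torch.einsum('bkd,cd->bck', protos, text_emb).max(dim=-1).values  # (b, b)
    labels = torch.arange(sims.size(0), device=sims.device)
    contrastive = (F.cross_entropy(sims / temperature, labels) +
                   F.cross_entropy(sims.t() / temperature, labels)) / 2
    # hinge-style variance term: ask each mask value to vary across videos in the batch
    var_loss = F.relu(1.0 - masks.var(dim=0).mean())
    return contrastive + lambda_var * var_loss

# toy usage with dummy tensors (8 frames of ViT tokens, hypothetical sizes)
tokens = torch.randn(4, 8 * 197, 512)
text = torch.randn(4, 512)
head = PrototypeHead(512, k=4)
protos, masks = head(tokens)
loss = tamvpm_losses(protos, masks, text)
```

the max over prototypes is the point the reviews keep returning to: a caption only has to agree with one of the k video representations, which is exactly the one-video-many-captions motivation, and the variance term is one possible reading of how the masks are kept from collapsing onto the same prototype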
### Summary:
|
for the one video vs multiple descriptions issue in videotext retrieval this paper presents a textadaptive multiple visual prototype matching model which could adaptively find the suitable visual prototype for an arbitrary textual description and comprehensive evaluation results were conducted the rebuttal successfully addressed some of the major concerns and in the end there is a general consensus about accepting the paper moreover most reviewers suggest that the authors update the related work section with a more thorough comparison with the papers mentioned by reviewer xjvw
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
summary this paper explores different metrics to measure the selectivity of single neurons for a class in deep neural networks using alexnet as the model under study the paper shows strengths and weaknesses of several recent methods in the literature the paper conducts a psychophysics experiment to see if human subjects can reliably label images generated through activation maximization techniques major comments this paper undertakes a careful analysis of different ways of measuring singleunit selectivity for a class the conclusions drawn are that no neurons exhibit true localist selectivity and most have some more complex selectivity some of the specific examples make this point very nicely for instance a unit that responds very strongly to several custard apples and would appear to be a custard apple detector except that it responds extremely weakly to other custard apples this is a somewhat negative result that may be useful in advancing the field away from single neuron analyses which may be misleading one worry is that the methods applied are looking for a very strong form of selectivity in particular even the output layer is judged to contain a low percentage of selective units according to the definitions in the paper it may be worth considering slightly weakened versions of the metrics that allow for some errors it would be useful to add discussion of the connections between these metrics and generalization performance the class conditional selectivity metric for instance may not measure localist coding very directly but it does correlate with important performance metrics like generalization performance the discussion in morcos 2018 suggests that high single unit selectivity is detrimental to generalization do these correlations persist using other metrics the psychophysics experiment with human subjects appears to have been done to a high standard and yields the result that only the very highest layers of a network yield interpretable images this is somewhat interesting but unlikely to be that surprising as selectivity for objects in lower layers is not a claim made by many works in these lower layers selectivity for object parts is a claim that has been made and could potentially be addressed by the data collected overall this paper critically analyzes single unit selectivity measures reaching the conclusion that tuning in modern deep networks is usually far more complex than strict localist coding the significance of this conclusion may not be so high given that this conclusion is probably already the intuition of many docsepsummary this paper analyzes the selectivity of individual units in cnns the authors analyze existing techniques such as precision selectivity classconditional mean activity selection and localist sensitivity these methods are analyzed in the context of alexnet and imagenet the authors also use activation maximization am techniques for visualizing singleunit representations in cnns paper strengths the authors have minutely examined each of the metrics and the underlying assumptions they make example number of images used for computing the precision threshold in zhou et al 2014 the wrongly stated range of ccmas 0 1 considering the second highest ccmas class is a good way of handling multiple classes that activate a single unit the results of this paper are surprising compared to existing work the authors have made a surprising discovery and done a good job of both presenting it well and experimentally validating it the paper raises interesting questions and 
this should inspire future work in understanding networks figure 2 is insightful it compares the various different interpretations of selectivity for a single unit in fc6 it shows how the mean activating class and the maximally activating class can be semantically very different it also shows that despite the high precision and ccmas score the unit cannot be labelled as a detector for the single concept custard apple more such results are presented in the appendix eg fig a6 the human study in section 33 is a good way to evaluate the generated am images paper weaknesses one of the major weaknesses of this paper is that it uses only imagenet images to evaluate the units as this is limited to 1000 classes the authors cannot probe other visual concepts such as color texture materials for the units as an example network dissection zhou et al 2017 proposes a dataset called broden which has many diverse sets of visual concepts labeled this paper focuses only on one definition of selectivity selecting objects this should be made explicit and the authors have not done a good job of clarifying this assumption or showing that it exists all of the analysis is limited to alexnet with modern architectures that use residualskip connections it is not clear how well this analysis will generalize it is an open question if the authors work overfits to alexnet the jitterplots are hard to understand especially if there are many overlapping dots samples since the yaxis values are not really meaningful anyway using a histogram to see how many samples have a particular activation value is easier a possible suggestion for figure 2a split into two parts 1 histogram of all samples 2 histogram of the highest mean activating class the organization of the paper could be improved the sections in the paper are not well connecteddocsepthis is a paper with scattered potentially interesting ideas but the execution is limited and the writing poor with critical details lacking a major limitation of the paper is that it is not clear what contribution it makes some of the analyses are indeed interesting but 1 these analyses are mostly descriptive and 2 they are limited to one particular outdated architecture how would batch norm or residual connections or any of the developments that have happened since alexnet affect these results as a side note the referencescomparisons between alexnet and recurrent nets see abstract etc are misleading this is based on the claim that bowers et al 2014 qualitatively different results but this is for entirely different domains words indeed what could have made potentially the work more relevant would have been to show some kind of benchmarking between alexnet and alternative architectures possibly rnns as such the current study does not contribute much except for comparing different semiarbitrary measures of selectivity for one specific outdated network architecture trained on a particular problem ilsvrc minor points the study is limited to correctly classified images as stated on page 3 this seems like a major confound in a study aimed at understanding the visual representations learned it seems to me that the conclusions of the paper could be heavily biased because of this when computing any measure based on inter and intraclass responses in general this is a relatively poorly written paper which would be hard to reproduce for instance the image generation for activating units assuming it is novel could be interesting but it is not even described with sufficient details so as to reproduce the 
results
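for concreteness, the two headline selectivity measures discussed in these reviews can be computed from per-image activations of a single unit roughly as follows; the top-n cutoff, the averaging over non-maximal classes and the epsilon are my assumptions and may not match the papers exact definitions

```python
# illustrative sketch (not from the paper): precision and ccmas-style selectivity for one unit
import numpy as np

def precision_selectivity(activations: np.ndarray, labels: np.ndarray, top_n: int = 60) -> float:
    # fraction of the top_n most strongly activating images that share the most common class
    top_idx = np.argsort(activations)[-top_n:]
    _, counts = np.unique(labels[top_idx], return_counts=True)
    return counts.max() / top_n

def ccmas(activations: np.ndarray, labels: np.ndarray) -> float:
    # class-conditional mean activation selectivity:
    # (mu_max - mu_rest) / (mu_max + mu_rest), mu_max = highest class-mean activation,
    # mu_rest = mean over the remaining class means (one reading of the metric)
    class_means = np.array([activations[labels == c].mean() for c in np.unique(labels)])
    mu_max = class_means.max()
    mu_rest = np.delete(class_means, class_means.argmax()).mean()
    return (mu_max - mu_rest) / (mu_max + mu_rest + 1e-12)

# toy usage: 1000 images, 10 classes, one unit's activations
rng = np.random.default_rng(0)
acts = rng.random(1000)
labs = rng.integers(0, 10, size=1000)
print(precision_selectivity(acts, labs), ccmas(acts, labs))
```

note that with negative activations the ccmas value can leave the 0-1 range, which is the wrongly stated range issue the second review points at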
### Summary:
|
the paper examined the folkknowledge that there are highly selective units in popular cnn architectures and performed a detailed analysis of recent measures of unit selectivity as well as introducing a novel one the finding that units are not extremely selective in cnns was intriguing to some not all reviewers further they show recent measures of selectivity dramatically overestimate selectivity there was not tight agreement amongst the reviewers on the papers rating but it trended towards rejection weaknesses highlighted by reviewers include lack of visual clarity in their demonstrations the use of a severalgenerationsold cnn architecture as well as a lack of enthusiasm for the findings
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper is a theoretical analysis of domain randomization in the context of latent mdps the paper provides bounds on a definition of the simtoreal gap defined as the difference between the optimal value function and a domain randomization oracle value function in this context the oracle is defined as a history dependent policy which uses the first steps in the environment to improve its belief on the latent variable and then optimally behaves according to this belief provided bounds can be applied in three different contexts finite domain randomization ie domain randomization on a finite number of domains satisfying a separation condition ie a requirement on the existence of a stateaction pair which leads to sufficiently different next states finite domain randomization without the separation condition and infinite domain randomization bounds are expressed in terms of $d$ ie a bound on the number of time steps required to reach any other state in the state space and of $H$ ie the number of steps in an episode in the case of finite simulators with separation the domain gap is $O(d \log^3 H)$ in the case of finite simulators without separation the domain gap is $O(d\, H^{1/2} \log^{1/2} H)$ in the case of infinite domain randomization the gap is approximately $O(d\, H^{1/2} \log^{1/2} H)$ bounds are provided with assumptions and most of these assumptions are justified by proving that removing these assumptions opens the possibility to counterexamples which do not meet the provided bounds the paper is well organized and easy to read most of the mathematical concepts do come with an intuitive description of their meaning and this allows the reader to get a grasp on the importance of the theoretical results the focus of the paper is very relevant to the ml and rl community domain randomization is emerging as a fundamental training procedure and theoretical guarantees on its performance are a necessary requirement for application in the real world the submitted paper not only provides these bounds but also expresses them in terms of quantities eg the value function gap which are relevant for downstream realworld applications the main limitation of the paper is in assuming no finetuning with realworld samples the concept of domain randomization oracle implicitly assumes the ability to eventually find the globally optimal policy exploiting history and memory somehow this ability shares some commonalities with finetuning and therefore it seems possible to extend the authors approach to the more relevant scenario of domainrandomization with finetuning another limitation of the paper is related to the format required by the conference the submission is limited to 9 pages which the authors comply to however the proofs of all theorems are in the supplementary material which as a reviewer shouldnt be necessary to get most of the paper this is in this case not really the case because i believe proofs are a necessary component to understand the quality and relevance of the paper additional comments page 5 the agent does not know explicitly which mdp is sampled but she is allowed to interact with this mdp m for one entire episode it page 5 we consider the ideal scenario that the domain randomization algorithm eventually find the globally optimal policy of this lmdp finds an important and relevant paper which addresses the problem of computing bounds for the value function of a policy trained with domain randomization and tested on one instance of the latent mdp distribution sampled during training with domain randomization the paper doesnt have
limiting assumptions besides the absence of analyzing how results extend to the finetuning case as a reviewer checking the correctness of the paper wasnt easy because all relevant proofs are not in the main paper 9 pages but mostly in the supplementary material 14 pages the proof overview section 6 isnt enough to check the paper correctness docsepthis paper presents a theoretical framework for reasoning and analyzing domainrandomization techniques its approach models a simulator instance as a markov decision process for which the parameters of the mdp correspond to tunable simulator parameters in this designed setting the work proves sharp bounds on the gap between an optimal policys value in the domainrandomized and the realworld setting the work also analyzes the conditions under which simtoreal transfer can be successful in the considered theoretical setting overall i believe that the main strength of this work is providing a theoretical analysis of simtoreal deployment via domain randomization simtoreal is of particular importance for enabling safe training in simulation theoretically analyzing and understanding this setting has the potential to provide safety guarantees and inform the design of new algorithms while the analysis provided by this work is interesting in the current stage i have three major concerns first its motivation and contribution seem to be detached from simtoreal practice second the assumptions if i understood them correctly seem to be very restrictive and finally there is no evaluation whatsoever eg to quantify a gap between the derived bounds and any realistic simtoreal setting motivation and contribution it is a very interesting approach to analyse simtoreal transfer by interpreting simulators as a class of mdps with different transition models however in the way this work is currently phrased i am not sure if it provides any practically useful results a theory work does not necessarily have to be at a stage that it is useful but i would have expected a clear technical discussion of the papers limitations in order to make use of such results in any realworld system currently the link to practical implementations is provided to some extent in sec 32 however this section feels very detached from the rest of the work it does introduce how domain randomization works in practice but does not discuss how the results of this work relate to it assumptions the initial discussion speaks of domain randomization in general however the assumptions seem to be very restrictive while i believe that assumption 1 is common it would be interesting to see for which simulator realworld simulator it holds in the case where the statespace is continuous with continuous support the expectation $\mathbb{E}\, t_{s'}^{\mathcal{M},\pi}(s)$ can easily become infinite for many interesting system classes there is a similar issue with assumption 2 as i understand it it makes it very hard for the simulator instances in $\mathcal{U}$ to be configurable in a continuous way because this assumption basically enforces to have a minimum distance between any two simulator instances i wonder where this happens in practice again in theoretical work it may be okay to have such restrictive assumptions however the authors should spell out which simulator environment combinations currently satisfy such assumptions and if there are none which may be okay for an otherwise interesting theoretical work there should be at least a discussion on what are the main limiting factors of the assumption compared to practical settings evaluation i believe that the absence of any experiments is also problematic theoretical papers do not necessarily always need to have experiments some theoretical contributions cannot be evaluated or may not require experimental evaluation however in the present case the work aspires to address a practically relevant setting and seeing the performance evaluated seems quite relevant given that domain randomization already has an extensive empirical body of work even some of the pure theoretical results can be nicely illustrated with numerical experiments such as demonstrating cases in which derived bounds are particularly good or particularly bad in summary i believe that this work addresses an interesting problem its main shortcomings are the lack of discussion linking the particular contribution and assumptions to particular usecases as well as the absence of any experimental demonstrations addressing these shortcomings will result in an interesting contribution docsepthe goal of this paper is to formalize and analyze the technique of domain randomization in robotics the authors frame domain randomization as training over a set of plausible mdps only one of which is the mdp that will be used at test time the authors then analyze the best possible performance of models for this problem under 3 different settings when the set of plausible mdps is finite with and without a separation condition and when the set of plausible mdps is infinite the authors find algorithms that achieve a performance gap that beats the worst case performance gap of $O(H)$ in all three settings $O(\log^3 H)$ and $O(\sqrt{H})$ i found this paper very well written and easy to follow each assumption that the authors make for instance that the mdps are communicating or that the value functions are smooth is justified by showing that there exist mdps where the best case transfer performance is $O(H)$ in which case not much can be said the bounds themselves are also intuitive and explain to some extent why zeroshot transfer is possible when training with domain randomization one suggestion that i had the authors mention in the abstract and introduction that their results explain why policies trained with domain randomization should have memory i think it might be worth mentioning in the body of the paper that the way in which algorithms 134 use memory is somewhat different from parameterizing a network by an rnn in these algorithms memory is simply used to isolate the test time mdp after which point the optimal policy which is memoryless for that mdp is used in contrast people in general would train a single rnn over the distribution of mdps and then test the rnn on the testtime mdp its possible that the rnn internally learns to switch from using memory to becoming memoryless once enough observations have passed to internally determine the mdp but it might be worth including some discussion of how algorithms 134 compare to standard training schemes for applying domain randomization currently i feel that the bounds derived in the paper largely justify standard practices eg parameterizing policies with memory when training with domain randomization or why domain randomization should be used in the first place if one wants to transfer a policy trained in simulation to the real world but dont suggest new practices in the conclusion the authors say they hope their analysis will lead to the design of more efficient algorithms for simtoreal transfer but i think the paper could benefit from more discussion of which of their results they believe could be the most
applicable to improving how domain randomization is used in practical settings this paper formalizes the common practice of domain randomization and develops algorithms and bounds for the performance of domain randomization depending on if the set of randomized mdps is finite with or without separability or infinite the central result is that the performance gap is $\widetilde{O}(\sqrt{H})$ in all cases the formalization of domain randomization is accurate and the analysis of the problem setting is thorough docsepthis paper presents a theoretical framework for analyzing the simtoreal gap in the context of domain randomization the paper defines formally the simtoreal gap and the domain randomization method next it analyzes the upper bound of simtoreal gap in three different scenarios finite simulation classes with and without separation conditions and infinite simulation classes the paper provides constructive arguments to prove the upper bounds in my opinion this paper has made multiple contributions i am most impressed by its theoretical framework that formally defines the simtoreal problem and its proof of the upper bound of the simtoreal gap based on the constructive arguments described in the paper let me start by admitting i am a simtoreal practitioner not a theorist therefore it is likely that i am not fully aware of other related works that develop a similar theoretical framework i have found the paper itself very inspiring and both the theoretical formulation of the simtoreal gap and the highlevel idea behind the proof overview make very good sense to me still i have a few comments and it would be good if the authors could clarify them 1 there is another possible definition of simtoreal gaps the performance difference between a policy $\pi$ in particular $\pi_{\mathrm{DR}}$ in the simulation environment and in the real world $\mathrm{gap}(\pi, \mathcal{U}) = V^{\pi}_{\mathcal{M}^*,1}(s_1) - \mathbb{E}_{\mathcal{M}\sim\nu}\, V^{\pi}_{\mathcal{M},1}(s_1)$ while i agree eqn 1 is perhaps a more interesting definition of simtoreal gaps eqn 1 is fairly difficult to measure in practice could the authors comment on these two definitions 2 it looks like the stepwise reward is assumed to be within 0 and 1 i guess the whole framework is also applicable to unbounded stepwise rewards eg by remapping them to 0 1 via a sigmoid function does this affect your main results 3 comparing sec 51 and sec 52 seems to imply that narrowing the simtoreal gap is more difficult for problems without separation condition but i have a feeling that this is not always the case consider an extreme scenario where all $\mathcal{M}$ in $\mathcal{U}$ are very similar to each other and therefore they are all similar to $\mathcal{M}^*$ such a $\mathcal{U}$ is not a separated mdp set but i would expect the simtoreal gap to be much smaller than the gap given for the separated mdp set theorem 1 in particular if $\mathcal{U}$ simply copies $\mathcal{M}^*$ multiple times the simtoreal gap would be 0 it looks like assuming only a few pairs of nonseparated mdps in $\mathcal{U}$ may make the problem harder but if all mdps are nonseparated the problem seems to be much easier 4 the constructive argument for proving theorem 1 seems interesting and smart but i have a few questions about its technical details alg 1 line 7 the condition in the ifstatement why not simply say if $\prod_i P_{\mathcal{M}_1}(s_i \mid s_0, a_0) \geq \prod_i P_{\mathcal{M}_2}(s_i \mid s_0, a_0)$ this would also avoid the need for treating $P_{\mathcal{M}_2} = 0$ in sec d1 also this ifcondition does not seem to be symmetric if there exists $P_{\mathcal{M}_1} = 0$ and $P_{\mathcal{M}_2} = 0$ it always chooses to eliminate $\mathcal{M}_2$ does it affect your proof looks like it does not eqn 41 are $s, a$ referring to $s_0, a_0$ specifically or are they arbitrary states and actions it looks like they should be $s_0, a_0$ according to the sentence immediately after eqn 41 what is $\widetilde{\mathcal{M}}$ in eqn 42 and eqn 49 i was expecting $\mathcal{M}_1$ there eqn 49 i do not see why $H$ appears on the righthand side it would make more sense to me if the righthand side is $2\alpha S n_0$ i cannot see how eqn 51 is derived from eqn 50 after trying to plug into eqn 50 the provided definitions of $\alpha$ $\delta_0$ and $n_0$ in particular are $c$ in eqn 50 and $c_0$ in the definition of $n_0$ two different numbers also it seems that $\delta$ in the numerator of $n_0$ between eqn 50 and eqn 51 should be replaced with $\delta_0$ so that it matches the definition of $n_0$ in lemma 5 a few minor comments 1 eqn 1 why does gap need to take $\mathcal{U}$ as input if all it needs is $\mathcal{M}^*$ 2 what is $d$ in theorem 3 3 the notation $Pv$ is defined in sec 31 but does not seem to be used elsewhere in the main paper 4 how do we check if assumption 3 is satisfied in practice i appreciate the whole theoretical framework this paper establishes for studying domain randomization techniques for simtoreal problems i think defining the problem formally and properly is a good contribution the highlevel proof overview makes good sense to me i also manually checked the proof for finite simulator class but not very carefully in the appendix the whole proof seems legit despite some minor issues which i tend to think is relatively easy to fix the construction of base policies algorithms in the appendix may also be interesting both for simtoreal theorists and practitioners overall i recommend strong acceptance assuming that all the equations in the main paper and in the appendix are correct or are relatively easy to fix
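as a compact restatement of the objects the reviews above keep referring to, the domain-randomization oracle and the gap can be written roughly as follows; this is paraphrased from the reviews themselves, so the exact constants, logarithmic factors and tilde notation may differ from the papers statements

```latex
% rough restatement, paraphrased from the reviews rather than copied from the paper
\pi_{\mathrm{DR}} \;\in\; \arg\max_{\pi\ \text{history-dependent}} \;
    \mathbb{E}_{\mathcal{M}\sim\nu}\big[ V^{\pi}_{\mathcal{M},1}(s_1) \big],
\qquad
\mathrm{gap}(\mathcal{M}^*,\mathcal{U}) \;=\; V^{\pi^*}_{\mathcal{M}^*,1}(s_1) \;-\; V^{\pi_{\mathrm{DR}}}_{\mathcal{M}^*,1}(s_1)
\\[6pt]
\text{finite, separated } \mathcal{U}: \quad \mathrm{gap} \;=\; O\!\left(d \log^{3} H\right),
\qquad
\text{finite (no separation) or infinite } \mathcal{U}: \quad \mathrm{gap} \;=\; \widetilde{O}\!\left(d\,\sqrt{H}\right)
```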
### Summary:
|
this manuscript introduces a theoretical framework to analyze the sim2real transfer gap of policies learned via domain randomization algorithms this work focusses on understanding the success of existing domain randomization algorithms through providing a theoretical analysis the theoretical sim2real gap analysis requires two critical components uniform sampling and use of memory strengths all reviewers agree that this manuscript provides a strong theoretical analysis for an important problem understanding sim2real gap well written manuscript and well motivated intuitive understanding for theoretical analysis is provided weaknesses analysis is limited to sim2real transfer without finetuning in the real world the manuscript doesnt provide a novel experimental evaluation lack of takeaways rebuttal the authors acknowledge the limitation of not addressing finetuning but also point out that several papers have performed sim2real transfer without finetuning the authors address the lack of novel experimental evaluation by arguing that the theoretical analysis can be directly linked to existing algorithms for which empirical evaluations have already been performed i agree with the authors that in that context it seems of little value to redo those experiments however i also believe that those links could be made even clearer in the manuscript and i would encourage the authors to do so furthermore while the authors do provide intuitive takeaways for domain randomization algorithms it would be helpful if those takeaways were more clearly linked to existing algorithms as well given that there is no experimental evaluation of this summary this manuscript provides a theoretical framework for analyzing the sim2real gap and using that framework provides bounds on the sim2real gap all reviewers agree this is a strong theoretical analysis some takeaways on what makes domain randomization algorithms successful are provided by the provided sim2realgap analysis memory use uniform sampling thus i recommend accept
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposes to use contrastive distillation to train a binary network by maximizing the mutual information between itself student network and the fullprecision network teacher network empirical results show that this new objective further improves the binarization performance on top of several recent binary networks on image classification tasks the authors also empirically show that models trained with the proposed contrastive objective have good transfer performance my main concern is about the novelty a similar contrastive distillation method has already been proposed in 1 this submission falls into the first setting shown in figure 1a of 1 which compresses the model by maximizing the mutual information between the binary student network and the fullprecision teacher network the derivations of contrastive mutual information maximization in equations 47 are almost the same as 411 in 1 the loss function 9 in this paper also resembles that equation 18 of 1 the authors should clarify the connectionsdifferences of this submission with 1 minors what does dot mean in equation 1 figure 4 legend should be cmim instead of mim 1 contrastive representation distillation iclr 2020 this paper is overall structured clearly and well written however the novelty may be limited as it can be viewed as an application of the previous contrastive distillation method to the quantization task docsepthis paper proposed an approach to reduce the performance gap between the binary neural networks and their realvalued counterparts via maximizing the mutual information between the binary activations and the realvalued activations that exist in the binary neural network specifically the authors propose to treat binary activations and realvalued activations as two views of the same image and use contrastive learning to pull these positive pairs while pushing other negative pairs generated from different images the idea of treating binary activations and realvalued activations as two views of the same image in selfsupervised learning is interesting and makes sense the results also well support the effectiveness of the proposed method overall the submission is technically sound i have several questions as follows 1 does inserting the mlp layers and using the proposed cmim module slow down the training greatly can the authors provide the training time comparison with the baseline method 2 have the authors visualized the binary and realvalued activation distributions inside the bnns trained with cmim does it look significantly different from other binary neural networks minor issues 1 the authors mentioned researchers in 13cvpr 2021 propose to shift the thresholds of binary activation functions however learning to shift the thresholds in binary neural networks is first proposed in reactnet eccv 2020 2 the caption in fig 4 the effect of number of negative samples in contrastive mutual information maximization cf should be ef in summary this is an interesting paper with clear motivation and novel techniques i will raise my score if my concern about the training time is well addressed docsepthis paper proposes an auxiliary method for training bnn models it follows the idea of contrastivemutualinformationmaximization which utilizes the fullprecision and the binary activation of bnn to form positive binary and fp activation of the same sample and negative binary activation of different samples pairs for contrastive training the auxiliary contrastive loss can provide data augmentation and effectively enhance the models 
generalization ability strengths the paper has good writing and structure thus easy to follow the paper demonstrates promising experimental results the results show that the proposed method improves performance on different data sets multiple models and multiple tasks it shows outstanding scalability and generality weaknesses limited novelty however the contribution and novelty of the proposed approach are limited the approach looks like applying the method proposed in 1 to the bnn optimization problem the combination is very straightforward and lacks real innovation a convincing explanation is required the effect of data augmentation by utilizing negative pairs seems to be more effective but the mutual information maximization between positive pairs lacks a convincing explanation how do we understand the essential meaning of maximizing mutual information between fp activation and their signs so let p denote the distribution of fp activation and q denote the distribution of their signs socalled binary activation in the paper the mutual information of p and q is the entropy of q since q is always part of p the increase of the entropy of p cannot explain the increase of the mutual information only when the distribution 1 1 of q itself is closer to 5050 the entropy is the largest in this case the binarization is more uniform and it can encode more information but in this case mutual information maximization has the same effect as maximizing entropyq i hope the author can provide reasonable proof or at least give a more convincing explanation of why maximizing mutual information of p and q makes sense at present it feels a bit rough to apply the existing contrastive learning framework to the bnn optimization problem ac please note that page 5 after equation 10 where the detailed proof is shown in the supplementary material i am very confused where is the authors proof i did not find any supplementary materials in addition the formula derivation in this paper has many similarities with 1 the supplementary materials in 1 provide specific derivations of the similar formula real difference with other contrastive learning methods not found i cant see the obvious difference from other contrastive learning methods it seems to be the application of existing methods in the bnn training scene unconvincing experimental results experimental setup on imagenet is quite confusing hyperparameters are pretty different from the known open source bnn methods including the use of weight decay the essence of bnn training is to change the sign of weight weight decay is almost useless so usually wd is not used too large initial learning rate usually 10100 times smaller sgd optimizer usually adam see adambnn paper for explanation i am curious how the author can exceed the results of reactnet by training only 100 epochs because of the latter trains at least 256 epochs in summary according to the above questions i am curious whether the author considers providing code and more experimental details such as logs and models this will be a great help minor issue how do you implement bats since the author still didnt share the codes as they claimed would you like to share your implementation it will be a great effort the imagenet results using resnet18 also resnet34 seem to be relatively weak strong baselines such as reactnet with bireal backbone are not compared 659 top1 accuracy httpsgithubcomliuzechunreactnet other stronger results from realtobinary and meliusnet are also ignored all of them achieved a higher accuracy 
with aligned computation complexity and model size related work overlooked a lot of recent efforts on binary neural network research eg reactnet meliusnet adambnn etc however some of them are compared in the experiment section 1 yonglong tian dilip krishnan phillip isola contrastive representation distillation iclr2020 regarding the existing problems in the paper i recommend rejection for now but it may be adjusted according to the rebuttal docsepthe authors propose to make full use of the fullprecision latent weights in bnn training by utilizing the popular contrastive loss between samples generated by fullprecision activations and binary counterparts they follow the derivations of contrastive representation distillation to bridge the gap between mutual information maximization and the proposed loss function the experiment results show consistent improvements over strong baseline methods on image recognition tasks pros 1 the experiment results reported in table 1 and table 2 seem quite strong 2 the idea of applying contrastive loss to bnn training seems new it is interesting to see that fullprecision latent weights can still be used in a new manner cons 1 the main concern is the direct similarity between cmim and crd 1 please refer to the details of ethics concerns 2 since the core part of this manuscript shares the same idea with crd the authors may overclaim the contribution of a novel cmim framework in light of this i think the novelty of this paper can be limited 3 though introducing latent weights into bnn training seems new the extra training cost remains the same as the standard knowledge distillation kd framework based on fp32 teacher networks however i found no ablation study on it why cmim should outperform crd if the authors argue that cmim is indeed different from crd 4 it is well known that kd should further improve the performance of student networks besides label refinery 2 and realtobinary 3 have shown that kdbinconv leads to 7 top1 accuracy on imagenet with xnorres50 and 43 top1 accuracy on cifar100 with res18 5 since cmim achieves much higher accuracy than rbnn it is no doubt that tsne results will be improved note that the authors tend to maximize the lower bound of the mutual information i expect some indepth analysis on the change of mutual information during training 6 how to prove that $h(\mathbf{a}_k^{i,b}, \mathbf{a}_k^{j,f}) = p(i{=}j \mid \mathbf{a}_k^{i,b}, \mathbf{a}_k^{j,f})$ 7 is there any constraint on the form of $h(\mathbf{a}_k^{i,b}, \mathbf{a}_k^{j,f})$ why use eq8 as the critic function to approximate the target distribution how to determine whether $h(\mathbf{a}_k^{i,b}, \mathbf{a}_k^{j,f})$ converges to the target distribution 8 strangely the most related work crd is not included in section difference with other contrastive learning methods ref 1 contrastive representation distillation iclr2020 2 label refinery improving imagenet classification through label progression arxiv2018 3 training binary neural networks with realtobinary convolutions iclr2020 the authors may clearly discuss the differences between cmim and existing works the current draft makes it hard to fully evaluate the contributions of this paper
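to ground the discussion of eq 8 and the critic, here is a minimal sketch of what a crd-style contrastive term between full-precision and binary activations of the same layer could look like; the projection heads, the hard sign, the temperature and the shapes are illustrative assumptions, not a reproduction of the papers eq 8

```python
# illustrative sketch (not the authors' eq. 8): InfoNCE between full-precision activations
# and their binarised counterparts; positives = same image, negatives = other images in batch
import torch
import torch.nn.functional as F

class Critic(torch.nn.Module):
    def __init__(self, dim: int, proj: int = 128):
        super().__init__()
        self.f_fp = torch.nn.Linear(dim, proj)   # projection for the full-precision view
        self.f_bin = torch.nn.Linear(dim, proj)  # projection for the binary view

    def forward(self, a_fp: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
        # a_fp: (batch, dim) pre-binarisation activations of one layer
        a_bin = torch.sign(a_fp)                 # hard sign; a real BNN would use a straight-through estimator
        z_fp = F.normalize(self.f_fp(a_fp), dim=-1)
        z_bin = F.normalize(self.f_bin(a_bin), dim=-1)
        logits = z_bin @ z_fp.t() / tau          # (batch, batch) critic scores
        labels = torch.arange(a_fp.size(0), device=a_fp.device)
        return F.cross_entropy(logits, labels)   # NCE-style surrogate for the MI lower bound

# toy usage on flattened activations of a hypothetical layer
acts = torch.randn(32, 256)
loss = Critic(256)(acts)
```

whether such a critic is meaningful for a binary/full-precision pair is exactly the point the meta-review below questions, so this sketch is only meant to make the object of that disagreement concrete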
### Summary:
|
description the paper applies ideas from contrastive representation learning to train binary neural networks namely the algorithm promotes binary representations to be similar to the fullprecision representations while at the same time it promotes binary representations to be dissimilar from fullprecision representations corresponding to other input images this is enforced for activations in all layers by the added contrastive loss 9 decision the main weakness of the paper pointed by reviewers were 1 overlap of the large part of derivation with the prior work 25 tian et al contrastive representation distillation iclr 2020 and 2 the meaning of the derivation when applied in the setting of the paper to binary and full precision weights and its soundedness the authors proposed their arguments for 1 the reviewers board considered these arguments and did not agree see below point 2 was not addressed by authors no paper revision justifications proofs corresponding to the missing supplementary it was discussed further and was found critical see below such that it is a clear reason for rejection regardless of 1 overall the idea is interesting and the method appears to be helpful experimentally however the paper needs a major revision that would address the two points details overlap with crd reviewers were in a consensus on this issue disagreeing with authors since the whole derivation chain of the contrastive loss already exists in the crd work 25 it is redundant to repeat this derivation if not raising ethical concerns instead an original work should review or just refer to the existing derivation and only discuss the new context and eg change the critic function hat h meaning of the derivation the reviewers have questioned the soundness of the initial criterion of mi between binary and full precision activations as it reduces to just the entropy of binary activations in particular it seems very different in meaning to the contrastive loss the paper optimizes in the end here is additional feedback from the discussion 1 maximizing the entropy of binary activations with respect to the data distribution makes some sense if a single binary activation was considered its entropy is maximized when it is in the state 1 exactly for 50 of the data which makes it discriminative of the input a similar centering can be achieved by batch normalization put in front of the activation if the preactivation distribution was symmetric then bn would achieve the max entropy for the sign of preactivation such network design is not uncommon maximizing the entropy of the full vector of binary activations appears more difficult however we can also understand it as the mutual information between the input image and the layer of binary activations thus the criterion is to retain as much information about the input as possible this makes sense as a regularization often neural networks are regularized by adding data reconstruction capabilities loss and is aligned well with goals such as reusing the features for other tasks as in sec 35 but contradicts to some other principles proposed in the literature eg the information bottleneck that the maximum information about the target rather than the input should be preserved amongst methods that study the direction of maximizing the entropy in binary networks reviewers mention irnet and regularizing activation distribution for training binarized deep networks the architecture with bn before activation is used in the latter work and some more recent works eg boolnet 2 it is not 
clear whether optimizing the contrastive loss retains the same meaning as maximizing mi the derivation from crd paper used here applies several lower bounding steps maybe the strongest one is that the critic is chosen to be of a specific functional form rather than a universal approximator however there is no obvious gap in fact knowing that binary activations are just a sign mapping of full precision ones should allow one to estimate $p(i{=}j \mid a^b_i, a^f_j)$ in a simple way 3 in the estimator $h$ in 8 the authors make a mistake applying their and crd theory incorrectly $h$ should be the probability of a conditional bernoulli variable estimating $p(i{=}j \mid a^b_i, a^f_j)$ it should not depend on $a^f_j$ for other values of $j$ than the given one however in the denominator in 8 it does therefore this estimator and as a result the specific nce loss proposed appear unjustified if the critic from crd eq 19 is adopted it is not clear whether it makes sense for a pair of binary and full precision descriptors note that for $i=j$ the scalar product between the two is just $\|a^f\|_1$ it seems that the design of a meaningful critic is a serious gap the authors should address observing that the initial objective the mi criterion was in fact independent of fullprecision states as it is the entropy of binary states one can propose that an appropriate critic should use binary states only such as $h(a^b_i, a^b_j) = \sigma\!\left(a^b_i \cdot a^b_j\right)/c$ when fixing $\hat{h}$ the result in 10 that the maximum likelihood estimator for $p(i{=}j \mid a^b_i, a^f_j)$ with a generic neural network can approximate this distribution arbitrarily well becomes irrelevant when the paper speaks of randomness eg binary and full precision activations as random variables considering $i{=}j$ as a random variable it is needed to specify the source of randomness or the distribution ie to add for a network input drawn from the data distribution in the first case and under i and j picked at random uniformly in the batch in the second theoretically the paper would become more convincing if the entropy of binary activations was measured by independent tools from the literature after training with and without nce loss and it was shown that indeed the method achieves an improvement in this objective reconfirming that the principle and the derivation were sound an ablation study on other modifications such as weight decay may be helpful to convince researchers that the main source of improvements in experiments is the new contrastive loss note that not all reviewers were convinced by current experimental results due to lack of descriptions code to fully reproduce and or lack of such ablation studies
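the measurement suggested above is cheap to approximate; a crude per-unit (factorised) estimate of the entropy of sign activations over a batch, assuming independent units, is sketched below

```python
# crude factorised entropy estimate of sign activations over a batch (assumes independent units)
import torch

def binary_activation_entropy(a_fp: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # a_fp: (batch, units) pre-binarisation activations; returns mean per-unit entropy in bits
    p = (torch.sign(a_fp) > 0).float().mean(dim=0)   # firing rate of each unit
    h = -(p * torch.log2(p + eps) + (1 - p) * torch.log2(1 - p + eps))
    return h.mean()                                  # 1.0 bit means perfectly balanced units

print(binary_activation_entropy(torch.randn(512, 256)))
```

comparing this quantity before and after training with the nce loss would be one simple way to test the entropy argument made in this summary, ignoring dependencies between units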
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
summary this paper is studying the problem of learning to interpret manuals textual information about the task with the goal of faster learning and generalization in rl unlike recent prior work narasimhan et al 2018 zhong et al 2020 where the entities in the environment are already partly grounded to text eg by representing them as their textual description the agent here needs to learn the mapping between entities and corresponding text in order to study this problem the authors use a new environment and dataset of natural language descriptions as well as propose a selfattention model that matches entities to relevant sentences the proposed model performed comparable to a partly grounded policy as in narasimhan et al 2018 reasons for score i find the topic interesting and relevant and the use of humangenerated descriptions in contrast to procedurally generated manuals as in zhong et al 2020 is welcomed however there is relatively little in terms of novelty both in the proposed model and the environment i think the paper would benefit from identifying and focusing on one or two of the more specific issues and exploring them indepth more specifically 1 the proposed model is a fairly straightforward application of selfattention to text which has been similarly used albeit with bilstms in zhong et al 2020 2 while the challenge of needing to learn which entities map onto which words is greater each sentence refers to exactly one entity so the agent just needs to match 3 sentences to 3 entities which is a marginally greater challenge if at all 3 with slight modifications the environment proposed in zhong et al 2020 should be suitable for addressing the main topic of this paper why introduce a new environment questions and comments there should be a more detailed comparison to the two of the closest works narasimhan et al 2018 zhong et al 2020 ie how exactly is your model and environment different the mdp notation for transition and reward function is imprecise actions are a random variable in both expressions ie use rs a s why is the curriculum introduced is it necessary to solve the task the use of the term validation here is confusing it seems you are both interested in zeroshot on test games and transfer learning performance which is referred to as validation in fig 5 right why is learning from scratch so much worse shouldnt learning from scratch be as easy as in fig 5 rightmiddle update after reading the authors response the reviewers comments and the revised version i have increased my score see my response to the authors for more comments docsep natural language grounding is an interesting research direction and has attracted many researchers in recent years previous work mainly considered grounding the text to image objects this paper considers collaboratively to learn entity representations and natural language explanations with a reinforcement learning framework specifically a multimodal attention network is proposed to model the interaction between the entity representation and the text descriptions the entire framework is trained over multiple games in a multitask manner experiments are conducted on a newly designed benchmark the proposed rl framework achieves reasonable performance in domain games training test are from the same games and also has a strong zeroshot generalization to unseen games and entities thanks to the parameter sharing and multitask learning besides the newly released dataset may facilitate future research in natural language grounding i am not familiar with the 
natural language grounding literature so this might be an educated guess which is listed as follows the writing is relatively clear although some specific illustrations are sometimes hard to follow for example the term entity is very ambiguous in the nlp field an entity might relate to a phrase that has specialized semantic meaning i read several recent papers in the natural language grounding field but still puzzled about the term for example the semantic meaning of entity in 2 is different from this paper after reading this paper i am not sure which part is the main novelty the proposed reinforcement learning framework is relatively standard no specialized designed state policy and reward the interaction module is designed in a slightly straightforward way utilizing parameter sharing to encourage stronger generalization ability also seems reasonable the novelty might be further clarified the proposed approach seems to be similar to 1 cited the general idea is to learn a parameterized policy model that inputs the pair entity this paper or visual representation 1 and text and outputs the action of the agent in the game correct me if i am wrong depending on different scenarios the encoding network can be slightly different it might be beneficial to have a detailed discussion of the difference between the methods proposed in the two papers in figure 5 the reward of different approaches seems to have small differences however the final winning rate of games varies a lot it might be beneficial to have some discussions on this part if my understanding is correct the contribution of the paper is mainly focused on the strong generalization ability of the proposed approach across different games however it may need to clarify which part of the proposed approach contributes to the most significant influence other minor points i am interested in the omap baseline which seems to be a very good upper bound of the proposed approach i am surprised to see that the emma approach outperforms the omap in the training settings some intuitive explanations might be useful the second question is that though utilizing the upper bound omap the winning rate still has some room to improve it is suggested to have some further analysis i do not quite understand why the overall training process contains two rounds of stages is s1 stage training a warmup training i would like to hear the authors responses to make my decision 1 zhong et al rtfm generalising to new environment dynamics via reading iclr 2020 2 lai et al contextual grounding of natural language entities in images neurips 2019 workshop docsepthe paper considers the task of training an agent to act following a manual expressed in natural language the manual describes the roles and the behaviors of the entities in the environment each entity can be the goal the message or the enemy and it can also be either fleeing chasing or not moving the agent has to bring the message to the goal while avoiding the enemy a model called emma is proposed to change entity representations based on the manual thereby making the agent aware of the entities roles it is shown that emma is more effective than simple baselines the paper is mostly clearly written the proposed model makes a lot of sense for the considered task the paper can be seen as another proofofconcept paper for acting based on a manual but it should be noted that it is not the first one of its kind 1 some things were not entirely clear to me in several places in the paper it is mentioned that the model learns 
mapping between entity ids in observation space and their symbols in text entirely through interaction with the environment i am not sure what interaction means here looking at the formulas my understanding is that emma establishes a correspondence between the observed object symbols and the entities based on the entities descriptions what role does interaction play in determining the objectrole mapping same question about a similar sentence in paragraph multitask setup of section 51 why is gid baseline performing so poorly on validation and test data doesnt it have the complete access to the roles and behaviour of every object why is the paper not using a setup from the prior work eg from 1 while the paper is clear and seems to be very accurately and correctly executed i have concerns which might be considered subjective about the limited realworld impact of this kind of work the paper does not discuss how this work is related to any real world rl applications such as eg robotics or even building rl agents for games that people actually play it does not discuss what the manual would be in these cases would it look anything like the description of the objectss roles it appears common in the field of rl to not ask for such justifications and happily accept gridworld studies motivated by generalization to new environments but i would like to flag that for an outsider this disconnect from reality may seem troublesome docsep summary this is a significantly improved extension of prior work in this area the authors significantly improve upon the previous techniques see below i have a few questions on clarity but overall the paper was wellpresented quality clarity the paper is generally fairly clear there are a lot of details in this paper and some of them did not quite feel fully developed understandable given the space constraints such as justification for certain modeling choices for example in section 4 under entity representation generator the authors say for each entity e in the observation we place its representation xe into a tensor x at the same coordinates as the entity position in the state s to maintain full spatial information this implies to me that tensor x is a 2d grid matching the coordinate system the next section says that the three most recent observations are concatenated together and a 2d convolution is run over them which of the three potential dimensions are used in the 2d convolution and which is flattened the xy coordinates or the time dimension in section 4 model text encoder you say that these alpha and beta weights imbue our model with the ability to focus on relevant tokens can you provide an intuition for what uk and uv are learning in section 61 multitask performance i was not quite able to grasp why the singlecombination tasks are more difficult can you explain the difference between these two kinds of tasks a bit more overall though i was able to easily understand the main points of the paper and the data was clearly visualised in graphs and tables originality significance there are a lot of good contributions in this paper natural language crowdsourced manuals with two layers of turkindirection not only was the text crowdsourced but first the templates for the texts were crowdsourced a comparison of several different techniques with different attention mechanisms experiments with additional evaluations of generalisation and robustness including more difficult test scenarios adding neutral distractor entities finetuning with additional sources of punishmentreward 
negating the text and replacing entities with synonyms a visualization of attention to demonstrate that the attention mechanism is working properly these are original and significant contributions that go beyond what was done prior suggestion while i didnt understand the difference in difficulty between the multicombination singlecombination games what i did hope to see when they were presented was an evaluation of how the agent was able to transfer learning from one of these enabled transfer to the other eg learning in the fantasy setting transferred to the nature setting this would demonstrate that the kind of structure present in the tasks is learned and can be applied to different kinds of entities a different vocabularydocsepthe paper presents a model for entity grounding from its textual description for a textbased language game to test their new model named emma entity mapper with multimodal attention they also present a new toy game framework and crowdsourced data composed of 1320 games from 3881 entity descriptions with crowdsourcing in this game the agent has to associate an entity with its description the game is described in figure 1 and section 51 there are three entities to each game messenger enemy goal and each entity can be stationary chasing or fleeing movement the game has a onesentence description for each entity if im interpreting it correctly the space of action is up down left right and stay the goal of the player is always static bring a message to the goal while not touching the enemy from what i understand the game here itself is substantially simpler than that of previous work zhong et al 20 which involves varying goals and modifiers their architecture deviates from existing approaches zhong et al 20 narasimhan et al 18 which did not require the model to learn a mapping from an entity to its description either by providing a mapping between objects and their textual descriptions or using entity names plainly this is the main focus of the model to learn a mapping between entities and their descriptions the evaluation is also done in a manner that between trainvalidationtesting theres no overlap between entity role combination the model architecture is pretty standard using a bertbase encoder with attention figure 2 clearly explains their model where each character takes a weighted sum of descriptions as value vectors to decide the action the game feels a bit retrospectively designed to test the research idea it would have been a lot more convincing if they use their model on the existing game environment instead of creating a whole new game specifically designed to test their idea the paper is clearly written and easy to follow except for some parts bam parts werent easy to follow the paper presents nice studies throughout section 6 looking into learning in a more challenging evaluation set up such as generalization and supplementary material is also thorough for figure 5 graphs are on the union of single and multicombination games how much does it help to use pretrained bert encoder vs training from scratch im not sure what future work can be done in this space the model seems to be doing well in this constrained setting table 2 could the authors provide how this model can be applied to more challenging and complex game scenarios where theres no onetoone mapping between entity description and an entity and the goal is more complex
### Summary:
|
like the reviewers i find this paper extremely borderline on the one hand it is clearly written about a topic i find fascinating and generally well motivated if not shockingly novel ie removing some of the simplifying assumptions from zhong et al 2020 eg requiring grounding to be learned use of real language rather than synthetically generated on the other hand i agree with the leitmotiv present amongst the reviews that the problem at the centre of the experimental setting is very very simple 3 objects 3 descriptions i am mindful of the fact that access to computational resources is unevenly distributed and am not expecting a paper like this to immediately scale their experiments to highly complex settings with photorealism etc but i cant help but feel that a more challenging task with a deeper analysis of the problems presented by both grounding and the use of nonsynthetic language would both have been highly desirable to make this paper uncontroversially worth accepting as a result the decision is to not accept the paper in its present form work on this topic should definitely be presented at iclr but its a shame this paper did not make a stronger case for itself
|
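as a concrete illustration of the grounding mechanism debated in the reviews above, where each entity in the observation attends over the manual's sentences and is replaced by a weighted sum of their value vectors, a minimal sketch follows; the function names, tensor shapes, and scaled dot-product scoring are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def ground_entities(entity_emb, sent_key, sent_val):
    # entity_emb: (num_entities, d), one learned embedding per entity id in the observation
    # sent_key, sent_val: (num_sentences, d), key/value encodings of the manual's sentences
    scores = entity_emb @ sent_key.T / np.sqrt(entity_emb.shape[-1])
    attn = softmax(scores)   # soft assignment of sentences to entities, learned from reward alone
    return attn @ sent_val   # grounded entity representation, placed at the entity's grid cell
```

nothing in this sketch tells the model which sentence describes which entity id, which is exactly the point of contention in the reviews about how hard the mapping problem really is when there are only three entities and three sentences.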
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
pros this is an interesting approach of assigning contribution weights to each source sample could be very helpful for tasks where we have a noisy and a small clean dataset the method seems to be performing well for the tasks chosen especially for the cifar experiments simple idea and relatively easy to implement cons clarity could be improved especially in the experimental section the motivation for the svhn 04 to mnist 59 is not clear it would make more sense to me to transfer between svhn 05 to svhn 59 or from the entire svhn to the entire mnist but this particular transfer seems somewhat irrelevant to the claims the two domains are particularly dissimilar and trying to select good svhn samples according to 20 or 25 mnist samples seems somewhat illposed it is also particularly surprising to me that 25 mnist samples were enough to train a lenet to the point of 84 accuracy on the entire mnist test set im referring to the targetonly line is that really the case or was a larger training set used for that particular line there is a claim that soseleto has superior performance to all of the techniqiues which do not use unlabelled data however im not sure whether these techniques were used as prescribed and if the comparison was fair for example i believe domain adaptation techniques like dann largely assume a common label space between the domains comparison with previous reweighting techniques would have been very informative quality the quality of the writing was overall high with a few exceptions including the related work and the experimental section in related work the bilevel optimization section could be a bit more descriptive maybe some of the explanationgiven in sect 3 could be moved here the experiments were convincing with the exception of the svhn to mnist section clarity i believe a better synthetic experiment could be chosen to highlight the approach how about a truly noisy dataset that is not as separable as the noisy dataset in figure 1 maybe you could have the same noisy dataset but with a small portion of random points having the wrong label for the same experiment it should be clearly stated that your task is binary classification and what was the classifier used for the cifar experiments it is very good that it performs well but itd be informative to see if soseleto can perform even better with 10k samples it wasnt clear to me whether the avalues of only one batch 32 samples at a time were affected if so how does this scale to really large datasets like say imagenet in the cifar experiments it is mentioned that a target batchsize is chosen to be larger to enable more avalues to be affected this seems like a typo but it was confusing i assume that the source batchsize is chosen to be larger figure 2 could use a better caption and a legend it would also be an easier figure to parse if the xaxis was reversed eg if the xaxis was the fraction of data used it was not clear to me what true transfer learning means as opposed to domain adaptation originality it seems that this idea has been explored before however im not personally familiar with that work i would have definitely liked to see comparisons with it though significance this is a simple idea that seems to work well as i wrote above it would be great to know how it compares to other reweighting techniquesdocsepthis is an interesting paper claiming that on assumptions are made or explicitly made on the similarity of distributions traditionally we learned the weights for transfer learning by matching the distributions i am 
wondering if there are any relationships between those two methods it is necessary to show the differences between the weighted source domain and the target domain and compare them with the traditional matching methods my another concern is about the technical contribution the model is very intuitive and simple some analyses are made for optimization however theoretical justifications are lacking making the technical contribution weak and looks like a simple combination of two existing techniques i would like to know if the weights are identifiable and what kinds of weights are preferred by searching i found related papers on transfer learning with label noise and learning with label noise by importance reweighting eg yu xiyu et al transfer learning with label noise arxiv preprint arxiv170709724 2017 and liu tongliang and dacheng tao classification with noisy labels by importance reweighting ieee transactions on pattern analysis and machine intelligence 383 2016 447461 however they are not discussed in the submission it is curious to see the relationships and differencesdocsepin this paper the authors propose a soseleto source selection for target optimization framework to transfer learning and training with noisy labels the intuition is some source instances are more informative than the others specifically source instances are weighted and the weights are learned in a bilevel optimization scheme experimental studies on both training with noisy label problems and transfer learning problems demonstrate the effectiveness of the proposed soseleto overall this paper is wellwritten and easy to follow the intuition is clear and reasonable although it is not new regarding the technical section i have the following comments 1 the paper assumes that the source and target domains share the same feature representation parameters theta this is a widely used assumption in the existing works however these works usually have a specific part to align two domains to support the assumption eg adversarial loss or mmd in objective of soseleto i do not see such a domain alignment part i am wondering whether the assumption is still valid in this case from the experimental study i find soseleto achieves very good results in transfer learning problems i am wondering whether the performance would be further improved if a domain alignment objective is added in the weighted source loss 2 each source has a weight and thus there are ns alpha as minibatch is used in the training i am wondering whether batches are overlapping or not if overlapping how to decide the final alphai for xsi as you may obtain several alphai in batches 3 another point is abouth lambdap in the contents you omit the last term q alpham lambdap in eq4 as you use the fact that it is very small i am not convincing on this omission as lambdap is also a weight for the entire derivative moreover if lambdap is very small the convergence would be very slow in the experimental studies you use different lambdap for different problems then whats the rule of setting lambdap given a new problem regarding the experimental results the experimental settings for the section 42 are not very clear to me you may need to clearly state the train and test set eg data size for each method
### Summary:
|
the paper proposes an approach for transfer learning by assigning weights to source samples and learning these jointly with the network parameters reviewers had a few concerns about experiments some of which have been addressed by the authors the proposed approach is simple which is a positive but it is not evaluated on any of the regular transfer learning benchmarks eg the ones used in kornblith et al 2018 do better imagenet models transfer better the tasks used in the paper such as cifar noisy cifar and svhn04 mnist59 are artificially constructed and the paper falls short of demonstrating the effectiveness of the approach on real settings the paper is on the borderline with current scores and the lack of regular transfer learning benchmarks in the evaluations makes me lean towards not recommending acceptance
|
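the reweighting idea discussed above, a learnable weight per source example trained so that the weighted source loss helps a small clean target set, can be sketched roughly as follows; the helper names, the clipping to [0, 1], and the way the weights would be updated are assumptions for illustration only, not the paper's exact bilevel procedure.

```python
import torch
import torch.nn.functional as F

def soseleto_style_losses(model, xs, ys, alpha_batch, xt, yt):
    # alpha_batch: one learnable scalar weight per source example in this batch
    per_example = F.cross_entropy(model(xs), ys, reduction="none")
    source_loss = (alpha_batch.clamp(0.0, 1.0) * per_example).mean()  # weighted source objective
    target_loss = F.cross_entropy(model(xt), yt)                      # small clean target batch
    return source_loss, target_loss
```

in a bilevel scheme of this kind the network parameters would descend on source_loss while the alphas are nudged in whatever direction reduces target_loss, which is why the reviews ask whether minibatches overlap and how the final alpha for each source example is decided.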
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper introduces the attention free transformer aft an alternative to multihead attention mha operation in transformer while the motivation of the authors is to replace mha with more costefficient operation it is not clear whether the proposed method is the better alternative pros 1 aft shows better asymptotic space and time complexities than mha 2 the implementation of aft allows for faster training with larger batches cons 1 theoretical analysis is conducted for the extreme case of numheadshiddendim and relu nonlinearity it is not clear how to generalize them to more practical cases with numheadshiddendim and softmax nonlinearity there is a missing link between the theory and motivation arising from it and the bestperforming implementation aftsoftmax 2 aftsoftmax does not fully complies with the title of the paper as the proposed operation contains aggregation via softmax also despite the claim that aft can be readily adopted as a plug in alternative to transformers the architectures from the experimental section also use vanilla mha blocks in addition to aft thus it is an exaggeration to say that the transformers evaluated in the paper are attentionfree 3 language modeling experiments on wikitext103 draw an ambiguous picture baseline transformer implementation has large positive difference between train and valtest ppl at 70k iterations which decreases as the training progresses for aft models on the other hand this difference is negative which might suggest that they have already overfit at 70k iterations and they will never reach the resulting performance achievable by the baseline the plot with trainval ppl number of iterations for those experiments would be more informative than the tabledocsepthe paper suggest an alternative to the multihead attention mha operation which is one of the core elements in transformers models the proposed alternative is targeting the nonlinear softmax operator in the mha and suggest to replace it with the relu operator after doing so they could reformulate the new attention mechanism as a ot operator instead of the original ot2 operator where t is the context size arguably the mha is one of the important components of the transformers architecture and reducing its memory and time complexity is crucial to increasing the training batchsizes and the usage of more context the paper is nicely written and presents a comprehensive experimentation section ranging over several machine learning benchmarks in computer vision and nlp strong points simple solution comparable results with reduced memory and latency the paper is clear and nicely written comprehensive experimentation section weak points moving from aftrelu to aftsoftmax is is not sufficiently motivated only empirically i would expect more experimentation to clear this point not all experiments show improved or comparable results for example table 5docsepthis paper proposes an efficient transformer variant by replacing softmax in the selfattention layer with a relu activation and arranging the computation using elementwise products and globallocal pooling this reduces complexity to linear complexity in the nonautoregressive case and loglinear complexity in the autoregressive case the evaluation shows that it can reach the performance of a vanilla transformer in most of the examined tasks while having fewer memory requirements in general strengths the paper is reasonably wellwritten and clear for the most part the problem of scaling transformers to longer sequences is an important one since 
transformers cannot deal otherwise with long sequences due to their quadratic complexity the proposed idea is interesting and reminiscent of recent methods that rearrange selfattention computation using kernels albeit it differs in the way the computation is carried out this one is simple computationwise and does not aim to approximate the original computation in any way the evaluation performed on multiple tasks shows that the proposed approach can reach the quality of a vanilla transformer and be more memoryefficient weaknesses 1 the motivation of aft and the positioning with respect to prior work were somewhat weak the introduction does not acknowledge recent efforts towards efficient transformers and what is the unique contributions of this work what are the benefits of aft compared to recent established efficient transformers such as sparse transformer child et al 2019 reformer kitaev et al 2019 or linear transformer katharopoulos et al 2020 it is unclear why one should prefer the proposed variants over existing ones both from theoretical and practical perspectives related work states some previous efficient transformers without any individual discussion about their merits or limitations in comparison to aft 2 one major limitation that stands out from the experiments despite their size is that there is no headtohead or controlled comparison with a previously established efficient transformer such as the ones mentioned above the results compared to sparse transformer given in table 3 are not directly comparable since the model size and design are quite different in brief it is not very clear what are the practical benefits compared to previous efficient alternatives 3 the memory benefits are not reflected or they are not as important when looking at the quality achieved in the tasks where a speedquality tradeoff was reported in language modeling aft has higher perplexity even when it uses a much larger number of parameters which makes the memory benefits less interesting in mt aft reaches the performance of the baseline but then the efficiency benefits are not present so i am curious is it the same in the two former tasks when comparing to the vanilla transformer under what circumstances we should expect aft to reach vanilla transformer performance and still offer clear efficiency benefits when using the same setup 4 in terms of training speed aft is generally slower than the vanilla transformer when the form reaches the same quality as the latter also it is especially slower when the depth is small in table 2 30 with 12 layers could the authors elaborate a bit on why that happens moreover it would be useful to show in table 2 what is the quality nll or bitsdim achieved by each model because its hard to tell how good the speedquality tradeoff is 5 recent studies have shown that it is possible to speed up inference time using efficient transformers see above what is the benefit of aft during inference time docsepthe paper introduces a method to replace qkvattention by a simpler efficient building block this is done by elementwise multiplication of a query representation with a compressed kvmemory perchannel attention pooling is used to compress the kvmemory the model is derived from a softmaxfree version of selfattention the results show good performance on a couple of standard image and language modeling tasks while occasionally exhibiting favorable training speed the results are largely on par with transformer baselines i have a couple of major concerns with this paper 1 the 
derivation the model is derived by using relus on the qk dotproduct andthen simplified in eq 4 which basically makes the resulting model different from the starting point eq 23 the final model is further changed by a applying a softmax over the keys channel by channel eq 5 so i am not sure how the resulting model actually still relates to the original formulation in eq 2 that leaves me with the impression that the derivation just exists to establish a connection with standard qkvattention which is a bit handwavy the results figure 2 even suggest that without a perchannel softmax over the input elements the method doesnt work well the arbitrariness of the derivation is exemplified further by the fact that one could have similarly started from using no nonlinearity at all after the dotproduct which directly leads to eq4 the nonlinearities on k and q could be arbitrarily applied before the dotproduct 2 the efficiency comes at a cost of a strong memory bottleneck as we basically pool the entire memory into one fixed state on hidden size that wont scale well to larger inputs the conducted experiments are mostly on smaller scale settings small images standard text lengths which reinforces my impression that this approach wont help solving the efficiency issues of selfattention using larger receptive fields though costly typically leads to better performance in standard selfattention here however due to the strong bottleneck results are getting worse after a point table 1 this makes also sense because the memory pooling imposes a very strong information bottleneck that is much of the memory has to be forgotten 3 there should be at least some controlled baselines from related work that tries to eliminate the selfattention bottleneck with similar compression techniques eg linformer sinkhorn transformer compressive transformer performers 4 the potential efficiency gains are never put to practice that is the authors dont show any application of the model to very large input sequences other comments and questions the work reminds me of dynamic convolutions 1 which compute depthwise convolution kernels dynamically based on the current context here we compute a dynamic depthwise 1x1 convolution on the qs based on the context around each q i think this connection might be more closely connected to the proposed model than attention why were the models not trained till convergence in on wikitext103 1 httpsarxivorgabs190110430
### Summary:
|
the new nonlinearity proposed in this paper leads to interesting observations and improvements on image and text datasets however reviewers point out that there shouldve been more comparisons to other efficient transformers and on more datasets the speed improvements are also not clear id encourage the authors to revise and submit in the future
|
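the operation at the heart of the reviews above, a per-channel softmax pooling of the key-value memory followed by an elementwise gate on the query, admits a very small sketch; the shapes, the sigmoid gate, and the absence of position biases are simplifying assumptions here, and this is not the authors' released code.

```python
import numpy as np

def aft_simple(Q, K, V):
    # Q, K, V: arrays of shape (T, d); no T x T attention matrix is ever formed
    w = np.exp(K - K.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)         # per-channel softmax over the T positions
    pooled = (w * V).sum(axis=0, keepdims=True)  # (1, d) compressed key-value memory
    gate = 1.0 / (1.0 + np.exp(-Q))              # elementwise gate computed from the query
    return gate * pooled                         # broadcasts back to (T, d)
```

the single pooled vector also makes visible the memory bottleneck one reviewer worries about: the whole context is squeezed into one d-dimensional state before it ever meets the query.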
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper proposes an approach for global localization of an object from tactile sensing and noisy pose of the tactile sensor in the global frame similar to odometry in slam the paper sets up this problem in a similar setting to localization in a known map from the slam community the paper converts rgb images from digit tactile sensor to depth using a nn followed by learning a latent feature space tactile code from the estimated depth this tactile code is used to compare between tactile readings a particle filter is then used to infer the pose of the tactile sensor wrt the object based on the tactile code at different poses offline tactile code is generated from the known object mesh to use in the particle filter the proposed approach requires an object mesh to work the paper is well written with extensive validation of the approach in sim and the real world i appreciate the creation of a real world dataset which will enable other methods to benchmark or use without requiring expensive tracking systems the paper also does a good job in explaining limitations and also experimentally showing these limitations the paper is in very good shape i have some feedback to help improve it which i list below 1 recording and plotting the force applied with the digit sensor during data collection would help understand the relationship between large contact area vs small contact area one experiment would be to run the same trajectory but with two different force profiles i suspect that the orientation error will become worse as the force applied becomes lower 2 why is there a very high orientation error on the real world dataset is this because of not sufficient contact area during data collection adding a plot of the contact patch area would help in analyzing this question 3 why was the pose prior for the object chosen to be in a narrower range for the real world dataset line 260 describing why this was chosen in the paper would be helpful 4 when i read this paper i started thinking on how this could be applied to a real world manipulation demo one idea would be to show localization on a transparent object where vision based pose estimation would fail showing an example with a transparent object would help readers understand the need for tactile localization in contrast to vision 5 does passing the rgb image from digit to the tactile code network help disambiguate geometrically similar but visually different object regions eg a pringles can or a mustard bottle front has a different label compared to back i am curious if this was attempted docsepthe work develops an online tactile perception system to build global context for relative pose tracking that tracks evolving pose distribution of vision based touch sensor for this they propose three distinct modules depth network code network and particle filter that converts the tactile image into its 3d local geometry which is further condensed into a code embedding via a convolutional network these codes are used to output sensor pose distribution that evolves over time to build global localisation the work performs experiments over 10 distinct ycb objects strengths the work builds 1 a novel approach to global localisation for tactile sensing via longhorizon online particle filtering 2 does not rely on pose initialisations and 3 uses local surface geometry instead of images to learn tactile embeddings weaknesses 1 does not account for dynamic pose tracking sensor relative to object 2 most of the experiments focus on large everyday objects the work does not 
include enough experiments on relatively smallersized objects so it is insufficient to conclude whether the system works on varied sized objects docsepthis paper presents a tactile perception system that mimics lidar for place recognition where an offtheshelf tactile sensor is used as the lidar placed on the surface of ycb objects as the environment then collects data through a sliding motion to determine sensor localization the authors intend to release the dataset but no link has been provided yet the attached code is not runnable as explained by the authors the overall idea is exciting and relatively straightforward once you make the comparison with lidar the major contribution would be the idea of connecting tactile perception with lidar although its specific usage remains unclear at least not welljustified in the current paper releasing such a dataset would be a contribution although not yet released in the paper the learning part is no surprise but i expect to see a direct transfer of the slam method applied in this dataset to see if the result or method is transferrable or to what extent strength the analogy from slam to touch is interesting the way data is collected is straightforward and the data collected from the simulated and actual environments are solid and concrete weakness no data has been released yet unclear how the data could be used objects are much smaller than large scene environments on the practical side it is challenging to generate a scene map using one sensor or even multiple sensors with one or more robots exploring the scene with uncertainty but objects are usually much smaller and therefore could be digitally captured using many visionbased methods in one shot so what would be the usage scenarios to justify the proposed data unclear how robots could be involved again this is not well justified throughout this paper since the object data is already known what if we take a square box and slide it over the objects point cloud to collect the same data then add noise to it to make it real wouldnt it be much easier this paper only discussed the geometric features through the point cloud for tactile sensors but from a robotics perspective we are more interested in the physical interactions on the touch interface with respect to the object including data regarding force and torque unfortunately nothing is reported in this paper making the data less useful for robotic applications as that seems to be the crown jewel with tactile data docsepthis paper uses highresolution tactile sensing to estimate the relative sensorobject pose of known 3d objects with object models the pose estimation is via estimating local object geometry under contact which updates a distribution of object poses sliding over the object results in a successively more localized distribution of object poses this has a connection with haptic exploratory procedures that humans are known to use to make decisions about object properties methods to estimate object pose are trained in simulation and demonstrated with handheld motion of a tactile sensor across real objects strengths complete pipeline from lcoal representation of tactile data inference of object pose based on object model code models data are all openly released solution to a demanding problem that has long been a challenge to the field good combination of simulationbased and real databased methods to address the problem good combination of deep learning and conventional sampling methods clear how this work opens up new capabilites that will 
be of benefit to the field eg estimating object pose could be a prerequisite to object manipulation weaknesses only minor comments as i think the paper is basically fine as it is i was a bit unsure about how and when you would apply the motion model for human held motion presumably this is due to noise in visionbased odometry eg in fig 6 you can see the attached markers for estimating pose robot arms can have very precise reporting of end effector pose so would a motion model be necessary then any thoughts about the tactile sensor you use a digit but is this well suited to sliding motion as past work has mainly considered dabbing motion with the sensor
### Summary:
|
the paper presents a tactile perception system with real digit sensors to estimate relative sensorobject pose of known 3d objects with object models reviewers agree that the proposed approach is interesting limitations are well articulated and that the dataset will be a valuable contribution to the community reviewers also raise several questions that need additional clarification authors are encouraged to respond to reviewers comments and questions not releasing the dataset at the time of submission which can be due to a myriad of reasons eg dataset size data release policies etc is not grounds for rejection however authors are encouraged to clarify the relevance to robotics update reviewers appreciate the authors detailed responses and modifications the work tackles an important problem in tactile perception eg for manipulation provides an accessible dataset to accelerate tactile learning research and extensive real world experiments and analysis all of which are contributions that are highly relevant to the robotics community including the revisions the manuscript is overall well polished and should make for an excellent oral presentation
|
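the filtering loop described in the reviews above, particles over candidate sensor poses reweighted by how well the live tactile code matches codes precomputed from the object mesh, might look roughly like this; the pose dimensionality, noise model, similarity measure, and resampling scheme are all illustrative assumptions rather than the paper's system.

```python
import numpy as np

def update_particles(poses, weights, live_code, code_at_pose, rng, motion_noise=0.01):
    # poses: (N, 6) candidate sensor poses on the object surface; weights: (N,)
    poses = poses + rng.normal(0.0, motion_noise, poses.shape)            # noisy odometry step
    sims = np.array([float(live_code @ code_at_pose(p)) for p in poses])  # tactile-code similarity
    weights = weights * np.exp(sims - sims.max())
    weights = weights / weights.sum()
    idx = rng.choice(len(poses), size=len(poses), p=weights)              # resample
    return poses[idx], np.full(len(poses), 1.0 / len(poses))
```

a robot arm with precise end-effector reporting would shrink motion_noise toward zero, which touches on the reviewer question of whether a motion model is needed at all in that setting.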
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper provides empirical and theoretical evidence that valuebased methods that optimize td errors with sgd have an implicit regularizer that increases the dotproduct of the representation at successive states the theoretical analysis shows that this can be offset by a term that penalizes large dot products which motivates the proposed regularizer dr3 the paper shows empirically that these large dot products are a source of divergence and that training with the dr3 regularizer can stabilize learning strengths interesting theoretical analysis while somewhat compromised to make the algorithm tractable still provides insight into the problem addressed it might be good to further show evidence that the noise model is welljustified in the rl regime which would justify the leap from theory to empirics empirical analysis is comprehensive clearly demonstrating the problem and algorithms stability when the problem is addressed through the proposed regularizer weaknesses while the empirical analysis is comprehensive the performance of the algorithms with the dr3 regularizer in the experiments are not completely compelling although the mean performance seems better the statistical significance is not there in figure 3 and table 2 and figure 6 seaquest the results in atari across all games are more impressive but the aggregation and normalization across games can hide idiosyncrasies detailed comments section 3 experimental setup for tdlearning it says an action is sampled shouldnt this be an expectation over the policy and hence use every action without sampling if all the actions are included in the target through an expectation the problem of divergent actionvalues should be less pronounced if not absent the problem seems to be that some actionvalues are left dangling ie the actionvalues of actions that are not sampled by including them all through an expectation any problematic danging actionvalues should be corrected immediately the problem remains for qlearning but further analysis needs to ablate maximization bias and other confounders sec 31 paragraph 2 what is meant by learned behavior policy shouldnt the behavior policy be given or fixed i think there is a leap in reasoning by connecting states aliasing and high dotproducts dot products can increase between two vectors without any aliasing if the magnitude of both vectors increase then the dot product will increase without any change in the angle between the vectors increasing dissimilarity between successive states is intuitively reasonable but this can also be a hindrance if the successive states are indeed similar is there any way to account for this shouldnt a target network prevent the representations from being similar the networks are different and hence the representations will be a few gradient steps away section 2 equation 1 while we do take the gradient of this expression it is not correct to say that this is a proper optimization objective it is not minimized in practice and often diverges in addition methods that actually minimize this objective such as residualgradient do not result in a good control policy figure 2 the left and center plots communicate the issue clearly except that the training loss plot on the right is not reflective of the problem as you point out bootstrapping algorithms do not minimize any objective function as the gradient does not belong to any objective function overall the paper raises and simultaneously addresses an interesting problem for rl algorithms that employ function approximation the paper 
is somewhat held back by the empirical analysis specifically in the lack of statistical significance in several of their experiments i am also not sure of the correctness of the intuition that the dotproducts result in aliasing in reinforcement learning the paper is still above the acceptance threshold but i hope that my comments below contribute to improving the paper edit after discussion with the author and reading the backandforth between the authors and reviewer 5pgu i have increased my score from 5 8 docsepthis paper discusses how the implicit regularization effect of sgd could be harmful in the offline deep rl setting due to degenerate feature representations aliasing the representations for stateaction pairs that appear on either side of the bellman backup to address this issue the paper proposes a simple explicit regularizer dr3 that counteracts the undesirable effects of this implicit regularizer when combined with existing offline rl methods dr3 has strong performance and stability atari 2600 games d4rl domains and robotic manipulation from images the paper is overall wellwritten and clear the method proposed is simple quite clearly motivated and the claims are quite well supported by the experiments here are a few elementsquestions that might need some clarifications the dot products can increase either because of an increased coadaptation or because the magnitude of the vector of features considered increases how can be sure that it is due to a coadaptation and not because of an increase in the magnitude of the features in fig 3 5 and 25 of the data used by singh et al 2020 are used to make these more challenging would it also be useful to provide the results for the case of 100 of the data it seems that even when data is relatively abundant it would be interesting to report the performance with and without the regularizer the same comment applied for figure where a plot with 100 of the data could also be of interest the authors mention that they will release scores for individual runs as well as opensource their code is it possible to have access to that during this rebuttal period minor remarks typo and the the dotproduct paper that investigates a simple regularization technique for offpolicy value function learning in deep rl the idea is quite well motivated by showing a few insightful values through learning not only the score the empirical results are quite convincing even though they could be more complete docsepthe paper propose to study the underparametrizationcoadaptation effect observed while optimizing overparametrized networks with sgd arising in td learning based rl algorithms which looks correlated to performance drops they prove an implicit regularization effect following previous works focussing on supervised learning that could explained the coadaptation phenomenon they derive an explicit regularizer and show its effects on plethora of experiments overall the paper is well written pedagogical and the problem at stake is really interesting but i find the presentation of experimental results misleading which makes me doubt about the performances of the proposed solution major revisionsquestionsremarks roughly 3 papers out of 5 refer to non peered reviewed papers arxiv which algorithm is used to learn the qvalue in figure 5 i am assuming it is cql since the results match the ones for cql in kumar et al 2021 it also alleviates rank collapse with no apparent term that explicitly increases rank section 5 and this is potentially suprising because no term in the 
practical dr3 regularizer explicitly aims to increase rank appendix a3 the only sranks behavior exposed in this paper are the ones observed while learning the qnetwork with the cql algorithm what about dqn rem and brac is the alleviation of rank collapse is specific to cqldr3 looking at the simplified loss the authors based their intuition on composed of 3 maybe 4 terms 1 td error r phiprimetop w phitop w2 2 cql ood samples regularization phiprimetop w at least 3 implicit regularization phi2 phiprimetop phi 4 potential l2 regularization on weight w2 at least it looks like adding the explicit regularization is an explicit way to increase rank even if this regularization is more computationally efficient than kumar et al 2020 am i missing something plotting the norms of phi phiprime and their cosine distance with and without dr3 could be quite informative comparing figure 6 resp a4 to figure 5 resp a3 is misleading the reader at first sight it looks like the performances are somehow related to the srank not shrinking but at 50x625k steps the srank of cql is higher than cqldr3 for breakout for example why the average return of asterix is missing figure 6 looking further at the results reported in kumar et al 2021 the authors are missing what seems like the interesting region wrt their claims it is only after more than 50x625k gradient steps that the performance drops appear why did the authors clipped the plots if the experiments were done for at least 200x625k gradient updates this could help conclude about the training stability the authors claim reporting in the paper eg p 2 giving rise to methods that train for longer reach a better solution and remain stable at this solution nevertheless i notice what looks like a significative improvement wrt the average return same goes for figure 4 the xaxis scale for 1 uniform replay is different from the other ones why it looks like cqldr3 suffers a performance drop and that with more gradient step the performances with or without the explicit regularization would be similar is that the case for the 1 uniform replay experiment at 200x625k gradient updates we evaluate dr3 in conjunction with cql on the harder d4rl domains antmaze kitchen domains section 6 we use the harder antmaze and franka kitchen domains for evaluating cql since these domains are challenging for cql appendix e2 wrt which metric those domains are harder than the other ones from kumar et al 2021 these tasks are especially challenging since they require composing parts of trajectories precise longhorizon manipulation and handling humanprovided teleoperation data in this case does the potentially poor performances are correlated to the coadaptive phenomenon reporting sranks after 2m gradient steps would help conclude while trying to put emphasis on stability what looks like the main focus of the paper i was expecting the authors to focus on environments for which previous methods suffer from performance drop and the underparametrization phenomenon such as antv2 hopperv2 and walker2dv2 as shown in kumar et al 2021 or hoppermediumv0 as shown in kumar et al 2020b i do not understand why the effect of dr3 on d4rl is discussed using brac and not cql nor rem this would have had more impact since the paper mainly focus on the effect of dr3 wrt cql and rem overall a rigorous statistical significance test to highlight significatively better results in the tables is missing minor revisionsquestionsremarks does light colored regions on the plots always refer to 95 ci does the 
underparametrization phenomenon and the drop in performances are specific to overparametrized nns what optimizer the authors are using to optimize their models this could be interesting since the theory developed in the paper assume an sgd optimization scheme it would be better if all the algorithms are referenced in the main paper eg cog has only its attached reference in the appendix writing this protocol is standard in atari seems a bit farfetched when the references point only to google related papers what does bold format mean in the paper tables some typos theta missing for some gradients p 12 a eoretical comparison p 13 in ou practical p 16 equation4 p 22 atleast p 24 wprked authors answering remarks and questions during the rebuttal period might lead to a higher grade but as is important claims of the paper are not wellsupported and the way the results are presented is suspicious the paper looks theoretically sound some questions remain regarding the experimental efficacy of the proposed regularizer claimed by the authors docsepthis paper first empirically notices the feature coadaptation phenomenon which will be exacerbated in tdlearning with outofsample next actions the authors theoretically show that the implicit regularization of noisy sgd encourages maximizing the inner product between the encoded features of current and next stateaction pairs which is responsible for the observed phenomenon thus the authors propose a novel explicit regularizer for offline tdlearning to encourage smaller inner products extensive experiments on both video game environments and robotics demonstrate the effectiveness of the proposed regularizer with various offline tdlearning algorithms as the testbeds strengths 1 the identified phenomenon is very critical when the consecutive stateaction pairs are encoded to have similar representations the learned value functions as well as corresponding policies would perform poorly 2 this paper provides theoretical evidence to explain the feature coadaptation phenomenon which meanwhile explains why it is severe for outofsample next actions 3 the empirical evaluation strongly confirms the effectiveness of the proposed regularizer weakness 1 there are some typos eg stableif in theorem 31 2 the organization can be improved further it is not easy to follow this story and there are many preliminaries about offline rl that are indispensable for readers to understand the discussions this is a good paper which studies an important problem from both empirical and theoretical aspects and brings in sufficient novel stuff i appreciate the contributions made by the authors and recommend this paper
### Summary:
|
the paper proposes an interesting hypothesis about deep nets generalization behavior inside rl methods it suggests that the nets implicit regularization favors a particular form of degeneracy in which there is excessive aliasing of stateaction pairs that tend to cooccur it proposes a new regularizer to mitigate this problem it evaluates the hypothesis and the regularizer empirically and it provides suggestive derivations to motivate both the reviewers praised the comprehensive empirical analysis the insights into learning and the combination of empirical and theoretical evidence the authors participated responsively and helpfully in the discussion period and addressed any concerns raised by the reviewers this is a strong paper it derives and motivates a novel hypothesis about an important problem and analyzes this hypothesis both mathematically and experimentally
|
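the explicit regularizer at the centre of the discussion above, a penalty on the dot product between the features of consecutive state-action pairs added to the usual td objective, can be written in a few lines; the function names, where the features are taken from, and the coefficient are assumptions for illustration, not the authors' code.

```python
import torch

def td_loss_with_dr3(q_head, features, batch, target_q, gamma=0.99, c0=0.03):
    s, a, r, s2, a2 = batch
    phi, phi2 = features(s, a), features(s2, a2)       # penultimate-layer features
    td_target = r + gamma * target_q(s2, a2).detach()
    td_error = (q_head(phi) - td_target).pow(2).mean()
    dr3 = (phi * phi2).sum(dim=-1).mean()              # feature co-adaptation penalty
    return td_error + c0 * dr3
```

note that the penalty can shrink either by decorrelating the two feature vectors or simply by shrinking their norms, which is the ambiguity one reviewer asks the authors to disentangle.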
Below is a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper presents a method that utilizes data augmentation for imagebased reinforcement learning the data augmentation is used to regularize the policy and function approximation in the proposed method in addition a method for automatically identifying effective ways of data augmentation is proposed the experimental results show that the proposed method outperforms the baseline methods the study shows that the regularization of policy and function approximation using the transformed images is more effective than training a policy by using the transformed image as a state regarding identification of effective ways of data augmentation the proposed method seems improve the performance of imagebased rl methods although the proposed approach is simple i have some suggestions for improving the paper the term for regularizing the policy is given in eq 3 i think that the first pias in the kl divergence should also have the subscript theta because it seems that the two policies are based on the same model likewise the first term in eq 4 should have the same subscript as the second term it would be good to show the jensenshannon divergence and the cycle consistency of rad in table 2 comments after discussions and the paper update i appreciate the authors efforts to improve the clarity and provide additional results i believe that the proposed method is now clearly presented and the claims are properly supported by experiments i raise the score to accept docsepsummary after discussion period after reading the other reviewers comments and corresponding with the authors i have become convinced that the authors proposed regularization method is novel and effective and would recommend this avenue of research be further explored yet it has also become clear to me that the authors claims on why their method works are not yet supported by evidence further i dont believe the authors proposed further ablation studies would fix the theory since such experiments dont address whether their method works by fixing problems with laskins work as the authors claim or because it provides a more direct way of enforcing invariance to transformation as i claim so were left with a difficult situation the method and the experiments are good while the theory is lacking in such a situation both acceptance or rejection seem reasonable yet as per iclr reviewer guidelines one should answer three questions for oneself what is the specific question andor problem tackled by the paper is the approach well motivated including being wellplaced in the literature does the paper support the claims this includes determining if results whether theoretical or empirical are correct and if they are scientifically rigorous since the theory is lacking and the approach is not well motivated and since the theoretical claims havent been rigorously supported i feel as per iclr guidelines the paper is not yet ready for acceptance initial review summary in this paper the authors introduce three approaches for automatically finding an augmentations for any rl task and a novel regularization scheme to make such augmentations work effectively positive aspects the papers language is clear and the authors provide a good overview of the problem of data augmentation for reinforcement learning furthermore they nicely explain why data augmentation for rl isnt as straightforward as augmenting data for supervised learning learning i believe that data augmentation could be a nice tool in the reinforcement learners toolbox and im glad to see a paper advancing 
the idea major concerns this paper has not provided sufficient evidence that the authors proposed way of doing data augmentation is effective in this paper there are two main novel methods for doing data augmentation insights in rl i will discuss my concerns with both methods separately policy and value function regularization the authors criticize the naive application of transformers in the ppos buffer as done in laskin et al 2020 saying that this changes the ppo objective while i agree that such a naive transformation as in eq 2 is problematic i fail to understand why application of transformation to states in the buffer would result in the eq 2 as the transformation would happen to the states being fed into both pitheta and pithetatextold resulting in an equation different from eq 2 one just needs to save the old policies and old advantage function so that pithetatextold can be applied to transformed states and not just use the actions from the buffer this would seem like a straightforward fix yet the authors have proposed a different regularization fix which judging the experiments does seem to work as shown in figure 2 i suspect it works for another reason since the regularization forces vs vfs and picdot s picdot fs i wonder if this isnt a method to allow prior knowledge to flow into the policy and value estimation if the transformations fi have been chosen such that one can be reasonably sure that true value and policy functions should be invariant to said transformations then by enforcing vs vfs and picdot s picdot fs one is constraining v and pi to fit the prior knowledge contained in fi so now were comparing apples and oranges since your method drac gets to incorporate this prior knowledge while ppo and rad dont and dracs good performance isnt surprising automatic data augmentation here given some candidates for data augmentation the authors propose three methods to discover which candidate work well unfortunately i dont fully understand the approach the authors are examining a metareinforcement learning setting where one wishes to find a policy which performs well not just on one mdp but on a whole distribution of mdps this leads to an innerandouter forloop like setting in the inner for loop the agent does multiple episodes with a single environment in the outer loop the agent gets new environments i had expected this innerouter forloop structure to pe present in the meta learning of augmentations and for the authors to clearly describe how knowledge about the effectiveness of augmentations is transferred from past episodes on one environment to future episodes in the same environment and how its transferred from past environments to future environments the only support given for these methods is the experimental results yet we see that the performance of drac and ucbdrac lie within a standard deviation of one another so the evidence of the effectiveness of ucbdrac over the simpler drac is weak at best minor comments tmssa is the transition function rmsa is the reward function here t and r are generally distributions and not functions eq 6 is confusing since f can refer to both ft function at timestep t and fi the ith transformation function do the update with something like ntf leftarrow nt1f 1 is a bit clearer since then its the number of times f has been pulled before timestep t docsep summary this paper tackles the problem of generalization in deep rl via data augmentation it provides a framework for automatic data augmentation based on ucb rl2 or maml when ucb is combined 
with regularization of the policy and value function so that their outputs are invariant to transformations such as rotation crop etc it shows improvements over similar algorithms on the procgen benchmark pros 1 the paper is well written and the proposed algorithms are straightforward to understand 2 ucbdrac is shown to be statistically superior to the tested baselines surprisingly when all procgen games are taken into account some existing methods such as randfm are actually worse than ppo 3 ablation studies are able to show that both ingredients of ucbdrac are important ucb is shown to find the best augmentation asymptotically and the drac is shown to be better than ppo and rad cons 1 the experiments do not compare the proposed algorithms to drq the algorithm proposed in kostrikov et al 2020 since it also tackles generalization in deep rl through data augmentation it seems that it should be included as a baseline can the authors explain why it was not included 2 the proposed algorithms are only combined with ppo it would be good to have some results for other actorcritic algorithms such as sac to verify if the sota behavior holds overall i would vote for accepting this paper due to the strength of its experimental results the proposed approaches are novel in the data augmentation for generalization in deep rl subfield although autoaugment cited in the paper also uses rl for choosing data augmentations in the supervised learning case further comments and questions 1 it may be helpful to rewrite some of the algorithms in section 33 in pseudocode as text is harder to read 2 is there intuition for why only ucbdrac leads to statistically significant improvements over ppo and not rl2drac and metadrac can it be shown that the former has lower sample complexity than the latter two 3 why is the cycle consistency percentage in table 2 always low is it some idiosyncrasy of the metric or due to the inherent variance in the trajectories update after reading other reviews and author responses i am happy to keep my score and support accepting this paper i agree with reviewer 4 that conducting a more detailed ablation study of the impact of regularizing both the policy and value function ie a comparison with an algorithm like that of kostrikov et al 2020 would improve the paper and hope that it will be included in the final paper docsepsummary this paper proposes an automatic data augmentation approach for rl tasks specifically it takes ucb for data selection and introduces two regularization terms for actorcritic algorithms policy and value function then this paper evaluated the approach based on the procgen benchmark and demonstrated that it outperforms existing methods it is also shown that the learned policies and representations are robust to irrelevant factors reasons for score on the one hand i feel that the proposed approach is incremental ucb for data selection with the actorcritic algorithm as the rl algorithm plus additional regularization terms it would also be ideal if more experiments can be included to prove that the proposed approach is effective in most rl tasks however on the other hand based on the current experimentation results the proposed method seems promising and can be useful for generalization in reinforcement learning pros 1 the proposal of the automatic data augmentationselection approach is useful given that the existing primary methods either rely on expert knowledge or separately evaluate a large number of transformations to find the best one both of which are expensive 2 
the proposed approach achieves sota performance in the procgen benchmark 3 it shows that the proposed method is robust to irrelevant factors cons 1 i think the primary difference between the proposed approach and existing approaches is the additional regularization losses this would make the new approach seem incremental 2 there is a lack of performance comparison between the automatic data augmentation method and nonautomatic approaches 3 the data augmentation approach is only evaluated in the procgen benchmark it would be great if this paper includes more experiments to demonstrate the performance especially since the proposed method is intended for any rl task
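To make the ucb-based augmentation selection discussed in these reviews concrete, here is a minimal sketch of a ucb-1 selector over a fixed set of image transformations. The candidate augmentation names, the use of recent episodic return as the bandit payoff, and the exploration coefficient are illustrative assumptions, not the authors' exact implementation.

```python
import math
import random

# Minimal UCB-1 selector over candidate augmentations (illustrative only).
# The candidate list, the payoff definition (recent episodic return), and the
# exploration coefficient c are assumptions, not the paper's exact choices.
class UCBAugmentationSelector:
    def __init__(self, augmentations, c=1.0):
        self.augmentations = augmentations
        self.c = c
        self.counts = {a: 0 for a in augmentations}
        self.values = {a: 0.0 for a in augmentations}  # running mean payoff
        self.total = 0

    def select(self):
        # Try every arm once before applying the UCB exploration bonus.
        for a in self.augmentations:
            if self.counts[a] == 0:
                return a
        return max(
            self.augmentations,
            key=lambda a: self.values[a]
            + self.c * math.sqrt(math.log(self.total) / self.counts[a]),
        )

    def update(self, augmentation, payoff):
        self.counts[augmentation] += 1
        self.total += 1
        n = self.counts[augmentation]
        self.values[augmentation] += (payoff - self.values[augmentation]) / n


selector = UCBAugmentationSelector(["crop", "rotate", "color-jitter", "cutout"])
for _ in range(10):
    aug = selector.select()
    episodic_return = random.random()  # stand-in for the true RL payoff signal
    selector.update(aug, episodic_return)
```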
### Summary:
|
as of now automatic data augmentation methods have mostly been proposed for supervised learning tasks especially classification this paper introduces automatic data augmentation to deep imagebased reinforcement learning agents aiming to make the agents generalize better to new environments a new algorithm called dataregularized actorcritic drac is proposed with three variants that correspond to different methods for automatically finding a useful augmentation ucbdrac rl2drac and metadrac promising results are reported on openais procgen generalization benchmark which consists of 16 procedurally generated environments games with visual observations further experiments have been added in the revised version strengths this work is among the first attempts that propose an automatic data augmentation scheme for reinforcement learning the paper articulates well the problem of data augmentation for reinforcement learning the experiment results are generally promising weaknesses although the experiment results reported seem promising there are missing pieces in order to help the readers gain a deeper understanding to justify more thoroughly why the proposed regularizationbased scheme works theoretical justification is lacking this is a borderline paper while it presents some interesting ideas supported empirically by experiment results the paper in its current form is premature for acceptance since a more thorough scientific treatment is lacking before drawing conclusions moreover considering that there are many competitive submissions to iclr i do not recommend that the paper be accepted nevertheless the authors are encouraged to address the concerns raised to fill the gaps when revising their paper for resubmission in the future
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
pros using rl to choose the simulator parameters is a good idea it does not sound too novel but at the same time i am not personally aware of this having been explored in the past note that my confidence is 4 so maybe other reviewers might be able to chime in on this point in theory you dont need domain adaptation or other sim2real techniques if you manage to get the optimal parameters of the simulator with this method certain attributes of the method were evaluated sufficiently eg the number of training epochs for each policy iteration the dataset size generated in each iteration and whether initialization was random or not in each iteration cons experiments were underwhelming and the choice of problemsparameters to tune was not the right one for the problem parts of the paper could be clearer quality i believe that although the idea is great but the quality of the experiments could have been higher firstly better problems could have been selected to showcase the method i was excited to see experiments with carla but was underwhelmed when i realized that the only parameter of the simulator that the method controlled was the number and the type of cars in the scene and the task of interest was a car counting task for which not much detail was provided this would have been much more interesting and useful to the community if more parameters including rendering parameters like lighting shading textures etc were part of the search space similarly the semantic segmentation task could have used more than one category but even for the one category there were no previous methods considered and the only comparison was between random parameters and the learned ones where we only see marginal improvement and what i perceive to be particularly low iou for the car although itd help to know whats the sota there for comparison for both vision applications i could help but wonder why the authors did not try to simply train on the validation set to give us another datapoint to evaluate the performance of the method this is data that is used for training the outer loop so it does beg the question of what is the advantage of having hte inner loop clarity the writing of the paper was clear for the most part however the experimental section could have been clearer i was wondering how modelhyperparameter selection was performed was there another validation set other than the one used to train the outer loop the proposed policy is dubbed its what does it mean its not clear what is a deliberately adversarial initialization could you elaborate the letter r is used to mean reward and rendering this is confusing similarly some symbols are not explicitly explained eg s generally section 23 is particularly unclear and confusing until one gets to the experimental section section 3 discusses the technique and states that we can thus generate or oversample unusual situations that would otherwise not be part of the training data i believe it is important to state that as the method is presented this is only true if the validation data is varied enough and includes such situations i believe this would be more applicable if eg rendering parameters were varied and matched the optimal ones also the method is presented as orthogonal to domain adaptation and other simtoreal techniques however i do not necessarily believe that this paper should be discussed outside the context of such techniques like domain randomization cycada pixelda etc even though these esp the latter ones focus on vision i do think it sets the right 
context originality as far as im aware noone has tried something similar yet however im not confident on this significance although the idea is good i dont think that the approach to select the simulation parameters presented in the experiments in such a way is significant i think that eg doing so for rendering parameters would be a lot more powerful and useful and probably a lot more challenging also i think that a single set of parameters which seems to be what the goal is in this work is not what one wants to achieve rather one wants to find a good range of parameters that can help in the downstream task docsepthis work makes use of policy gradients for fitting the parameters of a simulator in order to generate training data that results in maximum performance on real test data eg for classification the difficulty of the task rises from the nondifferentiability of the simulator quality the method is sound wellmotivated and presented with a set of reasonable experiments however and this is a critical weakness of the paper no attempt is made to compare the proposed method with respect to any related work beyond a short discussion in section 3 the experiments do include some baselines but they are all very weak clarity the paper is wellwritten and easy to follow the method is illustrated with various experiments that either study some properties of the algorithm or show some good performance on real data originality the related work is missing important previous papers that have proposed very similaridentical algorithms for fitting simulator parameters in order to best reproduce observed data for example httpsarxivorgabs180401118 httpsarxivorgabs170707113 which both make use of policy gradients for fitting an adversary between fake and real data which is then used a reward signal for updating the simulator parameters significance the significance of the paper is moderate given some similar previous works however the significance of the method itself regardless of previous papers is important docsepthe paper explores an interesting idea automatically tuning the parameters of a simulation engine to maximize the performance of a model that is trained using this simulation engine in the most interesting scenario the model is trained using such optimized simulation and then tested on real data this scenario is explored in section 45 the basic idea of optimizing simulation parameters for transfer performance on real data is very good i believe that this idea will be further explored and advanced in future work the present submission is either the first or one of the first papers to explicitly explore this idea and deserves some credit and goodwill for this reason this is the primary reason my rating is marginally above acceptance threshold and not lower the paper suffers from some issues in the technical formulation and experimental evaluation the issues are reasonably serious first it is not clear at all that rl is the right approach to this optimization problem there is no multistep decision making there are no temporal dynamics there is no longterm credit assignment the optimization problem is oneshot you pick a set of parameters and get a score once thats it its a standard blackbox optimization setting with no temporal aspect my interpretation is that rl is used here because its fashionable not because its appropriate the evaluation is very incomplete and unsatisfactory lets focus on table 1 which i view as the main result since it involves real data first is the optimization performed using 
the kitti validation set without any involvement of the test set during the optimization i hope so but would like the authors to explicitly confirm second the only baseline random params is unsatisfactory i take this baseline to be the average performance of randomized simulation but this is much too weak since the authors have access to the validation set during the optimization they can simply test which of the random parameter sets performs best on the validation set and use that this would correspond to the best set of parameters sampled during training its a valid baseline there is no reason not to use it it needs to be added to table 1 also 10 sets of random params seems quite low how many sets of parameters does the rl solver sample during training that would be the appropriate number of sets of params to test for the baseline and remember to take the best of these for the baseline the last few points really boil down to setting up an honest random search baseline i consider this to be mandatory and would like to ask that the authors do this for the rebuttal there are also other derivativefree optimization techniques and a more thorough evaluation would include some of these as well my current hypothesis is that an honest random search baseline will do as well as or better than the method presented in the submission then the submission boils down to lets automatically tune simulation parameters we can do this using random search its still a stimulating idea is it sufficient for an iclr paper not sure something for the reviewers and the acs to discuss as a group
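Since several reviewers debate whether rl is the right tool here, it may help to spell out what the one-shot optimization reduces to: a score-function (reinforce-style) update of a sampling distribution over simulator parameters, with the validation score of a model trained on the simulated data as a black-box reward. The gaussian parameterization, the learning rate, and the evaluate() stub below are illustrative assumptions, not the paper's implementation; the honest random-search baseline a reviewer asks for would simply draw from a fixed distribution and keep the best-scoring parameter set.

```python
import numpy as np

# Score-function update for non-differentiable simulator parameters:
# treat the validation score of a model trained on simulated data as a
# black-box reward (all specifics below are assumptions for illustration).
def evaluate(sim_params):
    """Stub: generate data with sim_params, train a model, return val score."""
    return -np.sum((sim_params - 0.3) ** 2)  # placeholder reward surface

mu, log_sigma = np.zeros(4), np.zeros(4)      # sampling distribution over params
lr, batch = 0.05, 8

for step in range(100):
    sigma = np.exp(log_sigma)
    samples = mu + sigma * np.random.randn(batch, mu.size)
    rewards = np.array([evaluate(s) for s in samples])
    baseline = rewards.mean()                  # simple variance reduction
    # gradients of log N(s; mu, sigma^2) w.r.t. mu and log_sigma
    grad_mu = ((samples - mu) / sigma**2) * (rewards - baseline)[:, None]
    grad_ls = (((samples - mu) ** 2 / sigma**2) - 1.0) * (rewards - baseline)[:, None]
    mu += lr * grad_mu.mean(axis=0)
    log_sigma += lr * grad_ls.mean(axis=0)
```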
### Summary:
|
this paper discusses the promising idea of using rl for optimizing simulators parameters the theme of this paper was very well received by the reviewers initial concerns about insufficient experimentation were justified however the amendments done during the rebuttal period ameliorated this issue the authors argue that due to considered domain and status of existing literature extensive comparisons are difficult the ac sympathizes with this argument however it is still advised that the experiments are conducted in a more conclusive way for example by disentangling the effects of the different choices made by the proposed model for example how would different sampling strategies for optimization perform are there more natural blackbox optimization methods to use the reviewers believe that the methodology followed has a lot of space for improvement however the paper presents some fresh and intriguing ideas which make it overall a relevant work for presentation at iclr
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper defines a fairness notion for a recommender system that is based on the independence of preference predictions to the user group and the item groups the paper defines a mutual informationbased mathematical expression to measure unfairness called equal experience metric and then optimize it along with a matrix factorizationbased collaborative filtering strengths the result due to chain rule is pretty interesting and neat in that joint independence of preference prediction useritem groups ensures multiple independence notions together the authors do a good job of giving examples of the definition in terms of all of the independence notions empirically the method proposed in the paper gives good fairness performance on all fairness metrics compared except one while the specific papers only specialize in reducing one form of unfairness at a time weaknesses the normative goal of the fairness notion is not wellmotivated in the paper from what i understand making the preference prediction independent of both the item group and the user group means that different base rates of preference for item groups among different user groups should not mean different rates of recommendation the authors give an example of science vs literature preference for male and female users where according to this measure science and literature would be recommended at the rate at which they appear in the item set because of the independence between y and zitem in my opinion this notion needs to be further motivated and included with a proper discussion about the scope of what item and user groups could mean for example if there are more literature courses to be recommended than science courses they might not have the same capacity or preference among the general user group as science courses while it may make sense to recommend science and literature equally to male and female users wouldnt recommending science and literature at proportional rates only make sense when they are equally preferred in the general population in the spirit of the above question i would like the authors to elaborate on their criticism of fairness notions based on the difference in recommendation accuracies paragraph 2 of introduction however later in the future work sec 5 the authors seem to advocate for equalized odds fairness notion which is an accuracybased notion the setup uses ythreshold as a preference prediction isnt the most used form of recommendation while something like the relative rankingordering of the items is the paper does not talk about how the defined fairness metrics would apply if predictions were used in terms of ranking the different items for each useruser type in the experiments section would it also make sense to use an importance weighting approach eg 1 to tackle the selection bias created by the data generation process 1 schnabel tobias et al recommendations as treatments debiasing learning and evaluation icml 2016 minor in paragraph 2 of the introduction the authors say female students exhibit low ratings on math and science subjects due to sampling bias it is not clear if the term ratings means the predictions or scores observed in a dataset however the other factor of societalcultural differences mentioned by the authors does explain the observation well perhaps the authors meant the lack of ratings instead of lower ratings in related work some literature from debiasing word embeddings could be relevant to include because those works also include an independencebased notion between embeddings of 
occupations and genderspecific words in the current work since matrix completion is used as the collaborative filtering method it also corresponds to user and item embeddings and the notions of orthogonality and independence might be relevant to mention or describe the paper tackles an important problem of fairness in recommender systems and it defines a fairness notion that contains both the user groups and items groups the metric defined is pretty straightforward to understand the kernelbased probability density estimation to compute the difference between conditional and marginal probabilities is not a very standard method but it seems to work for the purposes of the experiment overall the fairness notions are not very strongly motivated as highlighted above and the paper is missing the guidance around what groupings of items and users are meaningful for such a recommendation task docsepthe presented study argued that a fair recommendation should be independent of both user and item therefore the study introduced a new fairness notion ie equal experience and further incorporated this fairness notion as a regularisation term in the matrix completion framework to construct a fair recommender system the proposed method was evaluated with three datasets one synthetic and two real datasets the presented study did not analyse the diversity of the recommended items in the experimental results which the new proposed fairness notion aims to improve also it would further strengthen the study if the authors could analyse who could benefit from the proposed method eg females who enjoyed action or crime movies but did not have much historical data or others also as this study focused on the problem of rating prediction in recommender systems it would good to also test the effectiveness of the proposed method in the ranking setting the new fairness notion introduced by the presented study can provide certain new knowledge and insights on how to construct a fair recommender system and the effectiveness of the proposed recommendation method seemed to be supported by the experimental results however the study can be further strengthened if more indepth analysis on the recommendation results can be provided docsepthe paper is concerned with fairness in recommendations specifically there are groups of users and groups of items previous work has modelled fairness as the constraint of all user groups having the same accuracy or as the prediction probability being independent of the item group or the user group in this work the notion is generalized so that the prediction is independent of both the item and user group optimization algorithms are shown to solve this problem along with experimental results i think that the paper solves an interesting problem and generalizes previous work the main concern here seems to be with the weight of the contribution specifically the new notion of fairness is not very complicated and seems to follow from previous work further the optimization algorithms also seems to follow from previous work and are not novel if there were significant technical issues it does not seem to me that they were wellemphasized in the paper have the authors thought about including any theoretical guarantees for example convergence of the optimization or a guarantee that the fairness notion would hold at least approximately some minor representation issues 1eq2 on page 3 shouldnt hatm belong to a constraint set such low rank 2eq4 on page 3 i think mijhatmij is missing a square while the paper 
introduces a new meaningful notion of fairness the notion closely follows the previous work the optimization methods used also follow the previous work overall this makes the contribution of the paper very incremental docsepthis paper proposes a new notion equal experience to measure the fairness among groups in recommender systems and provides a method to optimize this new notion based on matrix completion experiments demonstrate the effectiveness of the proposed optimization framework strengths the paper is well written and easy to follow the related work is well described especially the representative ones fair comparisons on these representative baselines are provided in the experiments it is interesting to utilize mutual information for fairness problems weaknesses i suggest including der as a baseline i am wondering if there are any difficulties when facing more than one group of users and items as shown in the experiments the proposed method keeps receiving the worst rmse performance also from table 6 we can observe that when the proposed method performs much better than the baselines for fairness comparisons it also performs much worse than others for recommendation comparisons there is a tradeoff between fairness and accuracy which may need a more detailed analysis since i am not very familiar with recent work on fairness the proposed notion equal experience seems novel to me the authors provide the optimization method for equal experience based on both traditional ml and deep learning techniques the results prove that the proposed method can achieve better equal experience optimization performance compared to other baselines
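For readers unfamiliar with the notion being debated, the equal experience criterion and the chain-rule argument mentioned in the first review can be written compactly as below; the symbols and the regularized matrix-completion objective are a generic schematic inferred from the reviews, not the paper's exact formulation.

```latex
% Hedged formalization of the "equal experience" idea; notation is generic.
\begin{align*}
  \text{equal experience:}\quad & \hat{Y} \perp \left(Z_{\text{user}}, Z_{\text{item}}\right)
    \;\Longleftrightarrow\; I\!\left(\hat{Y}; Z_{\text{user}}, Z_{\text{item}}\right) = 0,\\
  \text{chain rule:}\quad & I\!\left(\hat{Y}; Z_{\text{user}}, Z_{\text{item}}\right)
    = I\!\left(\hat{Y}; Z_{\text{item}}\right) + I\!\left(\hat{Y}; Z_{\text{user}} \mid Z_{\text{item}}\right),\\
  \text{schematic objective:}\quad & \min_{\hat{M}}\;
    \sum_{(i,j)\in\Omega} \left(M_{ij} - \hat{M}_{ij}\right)^{2}
    \;+\; \lambda\, I\!\left(\hat{Y}; Z_{\text{user}}, Z_{\text{item}}\right).
\end{align*}
```

The chain-rule identity is what makes joint independence imply the user-side and item-side independence notions simultaneously, which is the point the first reviewer highlights as interesting.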
### Summary:
|
this paper introduces a new unfairness metric for recommender systems based on mutual information and then develops an algorithm to account for this metric in matrix factorizationbased collaborative filtering the reviewers all agree that the proposed metric and algorithm are sound at a technical level however they have concerns regarding the motivation of the introduced metric as well as the experimental evaluation the rebuttal by the authors did not persuade the reviewers to reconsider their original assessment and they still argued that their concerns remained in the final recommendation the simplicity of the metric was not seen as a weakness of the work
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors present to formulate pseudohealthy synthesis in a discretized way aimed at prosthesis shaping second the development of personalized aortic root prostheses could be used the usage of a set of prosthesis types has a high potential for translation to clinical application and hence is of high interest for prosthesis manufacturers as well as clinicians relatively small number of aorta roots including ultrasound volumes of 24 there is a lack of ablation study evaluation of parameter changes and no external validation in addition there is a lack of evaluation method on technical and clinical purposes docsepthe main ideas of the paper are well presented shape reconstruction is not an end in itself as healthy and pathological shapes differ exact perpatient shape reconstruction is not necessarily desirable as taylormade prosthetics can be costly massproducing a couple of types of prosthetics and finding the right type of prosthetic needed for each patient could allow for cheap treatment the experiment used a good amount of metrics to inform on the results presented and the architecture is well presented as are the hyperparameters used for the experiment unfortunately the paper only presents one experiment on a very limited dataset such an application paper should and must be much more rigorous in its experimentation throughout the paper the authors make several claims on the potential applications of the method on prosthetic development notably in section 11 we present the first approach to formulate pseudohealthy synthesis in a discretized way aimed at prosthesis shaping the developed framework can be applied to a wide range of organ shape synthesis problems from a pathological prior and hence provides a new method for pseudohealthy synthesis in general however the presented work only looks at the very limited case of aortic roots with a very limited dataset such claims of potential general applications should be backed up by experiments on more than one dataset moreso the presented work is not compared against any other methods aimed at a similar task this is especially concerning since the presented work iterates on and is closely related to previous work by the same authors 12 as such it is confusing why the authors did not atleast compare the presented work to previous iterations of their own work adapted as needed and demonstrate how the improvements presented here are better than what was done before while 12 reduce the novelty of the proposed work lack of novelty alone should not pose a problem however lack of novelty should be compensated by clear improvements which are lacking here as no comparison is provided even though the paper presents itself as distinct because it uses classification instead of regression cf end of section 12 this is not enough to justify the lack of comparison also the abstract mentions besides a proofofconcept study on an exvivo porcine data set we provide a vast evaluation of hyperparameters including the number of identified shape types this sentence puts very little emphasis on the experiment and much more on the evaluation of hyperparameters this is confusing as hyperparameter search is expected and not really a contribution moreso the actual hyperparameter search was lost in the text sections 22 and 23 same as the methods components description if emphasis was truly to be put on the thoroughness of the hyperparameter search it should have been given its own subsection but mostly it should have been accompanied with much more details while table 1 
offers the considered hyperparameters it does not reflect how each impacted not only the rmse but other reconstruction metrics some doubts must also be raised concerning the experimentation protocol notably thus we performed a 10fold montecarlo crossvalidation 80 training 20 test for each combination of hyperparameters examined given in table 1a we trained on the autoencoder on the training data propagated the test data through the full network and computed the average root mean square error rmse between the output and the original test images in each fold this implies that the hyperparameters were found by using the test images as a validation and that no images were held out to provide the final metrics reported if this is the case it is unacceptable and the experimentation protocol must be revised to keep the test set separate from the validation and training sets if this is not the case then the sentence and similar ones throughout the text should be reworded to avoid misleading the reader finally despite being well presented the motivations behind turning this problem into a classifcation problem instead of a regression problem are not backed up by results for example the optimal k was found to be 4 which is quite low this implies that only 4 shapes are enough to represent the whole gamut of possible aortic roots could this be backed up by histological studies however even the authors themselves might suggest that classification poses a problem as section 32 includes additionally the small difference between the classification model and the typification with optimally assigned prosthesis types shows that the classification works robustly and that most of the error relates to the discretization of the prosthesis shapes this sentence attributes the error to the discretization of the prosthesis shape which is the core idea of the proposed work it seems the proposed work could be improved in several ways by not being a classification problem moreso no work is done to assess the coherence of the clusters the autoencoder does not include a term in its loss to promote tight and distinct clusters nor is any assessment of the cluster shapes with a tsne projection for example provided all in all as mentioned previously what the proposed work lacks in novelty it does not make up in experimentation or analysis 1 j hagenah m mehdi and f ernst generating healthy aortic root geometries from ultrasound images of the individual pathological morphology using deep convolutional autoencoders 2019 computing in cardiology cinc singapore 2019 pp page 1page 4 doi 1023919cinc4984320199005819 2 hagenah j khl k scharfschwerdt m ernst f 2019 cluster analysis in latent space identifying personalized aortic valve prosthesis shapes using deep representations proceedings of the 2nd international conference on medical imaging with deep learning in pmlr 102236249 docsepthe paper provides a nice and novel approach to a well studied problem the evaluation of the method is adequate and without any major logical gaps paper is well organised and easy to follow there were no major language issues the papers motivation ie why perform the task via a discreetclassification method and not a regression one could be reiterated towards the end and been put into context with the final evaluation the paper is lacking in comparison with other methods the authors identify a series of methods that are regression based but do not put their proposed solution into context in terms of performance there is little discussion about the 
limitations of the method any discrete method could potentially suffer from generalizability issues when facing a case that does not conform nicely to the previously identified types an evaluation along these lines would increase the strength of the paper considerably docsepmethodology is well explained and good references extensive experiments show the great potential of such a method well written paper in terms of explaining the main motivation behind this work and the proposed solution some mathematical symbols are not well defined eg convae latentdim such compound names are not commonly used in deep learning papers and the use of a single mathematical letter is strongly preferred it is not good to use a full word healthy inside a mathematical symbol such as the one denoting the reconstructed healthy image the main figure is poorly designed it does not show the main contribution of the work
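As a rough illustration of the typification pipeline the reviewers describe (autoencoder latent codes clustered into a small number of prosthesis types, with new patients assigned to the nearest type), a minimal sketch follows. The encoder stub, latent dimensionality, and use of k-means are assumptions of this sketch; only the dataset size of 24 volumes and the reported optimum of k = 4 come from the reviews.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative typification sketch: cluster latent codes of healthy shapes
# into k prosthesis types and assign a new patient to the nearest type.
def encode(volumes):
    """Stub for the convolutional autoencoder's encoder (not the authors' model)."""
    return volumes.reshape(len(volumes), -1)[:, :32]

latent = encode(np.random.rand(24, 64, 64, 64))   # 24 shapes, as in the dataset
k = 4                                              # number of prosthesis types
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(latent)

new_patient = encode(np.random.rand(1, 64, 64, 64))
assigned_type = kmeans.predict(new_patient)[0]    # prosthesis type to manufacture
```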
### Summary:
|
the paper receives unanimously positive comments from four experts who are both knowledgeable and independent it presents an unsupervised shape typification and type classification method for personalized prosthesis shaping the reviewers hail the potential of the proposed approach for clinical translation earlier there was a major concern regarding the limited evaluation but the authors provided an updated version to improve the quality per the reviewing comments which appeases the reviewers concerns hence i recommend its acceptance some typos taylor should be tailor and lies the groundwork should be lays the groundwork
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
a 2d segmentation pipeline was proposed in the article with a reference module and a propagation module a method of how to choose pseudo labels was tested overall using a 2d segmentation model plus video processing techniques is not very suitable for medical image processing many models have been tested on medical image segmentation and 3d segmentation models had much better performance compared with general 2d segmentation models the low baseline value in table 5 with the deeplabv3 model also proved this point we advise starting with some models commonly used in medical image analysis docseppros the results show that the proposed semisupervision really matters the authors put in an effort to implement such a complex method cons results are not sota no comparison with 3dconv models docsepin this paper the authors adopted mask propagation refinement and uncertainty estimation for ct volume labeling to perform semisupervised abdominal organ segmentation the manuscript is basically complete suggestions or deficiencies 1 in table 2 and table 3 the authors should describe the patch size which seems to be 512x512 2 in table 2 and table 3 the authors should give a more detailed description of random initialization 3 in figure 5 the authors should point out which image each row comes from 4 tables 2 and 5 are too wide 5 the authors should divide section 4 into different subsections 6 the authors should add segmentation efficiency results and segmentation efficiency analysis in section 4 docsepsummary this work applies ct windowing for data augmentation employs deeplabv3 and transunet for the 2d segmentation task and adopts cps for semisupervised training a twostage pipeline is proposed including the reference module and propagation module which is clearly explained in section 2 this work is a wellwritten paper and its experiments are sufficient and substantial suggested improvements as discussed in the reference module section the slices relative position on the axial dimension is embedded as a feature vector and input to both networks transunet and deeplabv3 to learn how the slice position embedding features are fused with feature maps and how the embedding features are trained is not explained clearly and there is no ablation experiment to prove its effectiveness
### Summary:
|
please adjust the window width and level of ct images to 40 and 400 respectively in all figures reviewers raise many concerns and suggestions please address all comments in the revised manuscript
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper exploring neighborhood memorization problem and proposes a neighboraware data augmentation method for node classification task representation learning for node classification usually has classimbalance issue some labels might cover a few number nodes this work empirically indicates out that the learned node representations will be influenced by the neighbor label distribution in particular those nodes with minor class can have the neighbor memorization problem that is the target nodes embedding might mainly represent the neighbor nodes but not itself empirical analysis in this work supports this claim clearly then a data augmentation method based on this observation is proposed experiments with classimbalance setting are conducted to demonstrate the superiority of the proposed augmentation method concerns 1 the notations in this work are not easy to follow authors use many kinds of symbols like minor target seen or unseen to represent the types of nodes however its very difficult to tell their difference in the manuscript could you please explain whats the target node and its difference to the nodes with class should the minor nodes be the target nodes please check the manuscript again and consider to present the types of nodes in alternative way 2 its difficult to draw connection between the proposed method and the solution to address the neighbor memorization problem we can only see an augmented data by mixing up the node feature and neighbor nodes could please explain how to created virtual label for the synthesized node and why the augmentation method can solve the neighbor memorization problem 3 could you please consider baselines which can deal with heterophily network 1 the neighbor memorization seems to be similar with the definition of heterophily where the target usually has different labels from the neighbors references 1 jiong zhu et al graph neural networks with heterophily aaai 2021 this work studies an interesting problem but the presentation is not clear expressing the motivation docsepthis paper addresses the class imbalance problem for node classification in a graph and points out some issues faced by existing gnns and methods to address the same it nicely depicts the problem of overfitting to minor classes and neighborhood memorization problem for class imbalance through experiments it proposes n approach graphens which can work with any message passing gnn to resolve the problem of class imbalance for node classification experimental results show the merit of the proposed algorithm working with multiple gnns gcn graphsage and gat on both synthetically imbalanced dataset and realworld imbalanced dataset for node classification the paper has multiple strong points and few weak points strong points 1 the paper motivates the problem very well the experiments in section 3 show the gap that existing conventional approaches for class imbalance problem fail to address for graphs thus it shows very well why graphspecific solutions for class imbalance problem are needed 2 the related work and baselines used for experiments are quite exhaustive to the best of my knowledge 3 experimental results are quite promising they clearly show the merit of the algorithm and its components for imbalanced node classification 4 the source code is attached i have not run it though weak points need clarifications 1 though i appreciated the motivation and problem formulation of the paper the proposed algorithm graphens is very heuristic in nature different steps of the algorithm ie 
neighbor smapling and node mixing do not have a strong objective i agree that these steps are simple and also intuitive but what and how advantage it is gaining for node classification objective is not clear it would have been great if some objective cost function can be formulated and these steps could have been deduced by solving that at least authors should give more justification on their need for node classification 2 table 1 shows improvement on the manually imbalanced citation datasets i am curious to see the performance of graphens on the original citation datasets it would be great if authors can compare the performance using gcn gat and graphsage with without graphens 3 what is the training validation and test size for supervised and semisupervised learning setup i also think that supervised learning task is also a semisupervised one as it is a gnn based nodelevel task on a graph can authors please clarify that 4 figure 1b shows that the gap of performance between all classes and the minor class around epoch 200 is almost the same for all the algorithms the training accuracy is also very similar for all classes and the minor class from figure 1a does it mean that overfitting to minor classes is not resolved in graphens on a similar note the gap between blue bar and red bar is almost the same for all the algorithms in figure 1c however graphens is doing really well to minimize the gap between blue bar and red bar for the neighborreplacing experiment in figure 1d can authors comment on this and throw some light on the actual reason for the superior performance of graphens i am still voting for acceptance since the problem motivation and formulation is done very nicely and experimental section is also strong the proposed algorithm is simple and intuitive but not that rigorous docsepthe paper proposes an imbalanced classification strategy for gnns unlike traditional imbalanced classification nodes on a graph are dependent on its neighbors and simply oversamplingreweighting the minor class instances would not work in particular the authors recognize and analyze the severity of the neighbor memorization problem which is identified as the key cause of overfitting in gnns the proposed method alleviates the neighbor memorization problem by synthesizing ego networks extensive experiments are conducted to evaluate the proposed method strong points 1 the proposed model is intuitive and technically sound neighbour sampling is important in graphs given the neighborhood memorization problem 2 i like the preliminary analysis on the importance of neighbour sampling putting the neighbor memorization problem in a quantitative manner ie section 3 3 experiments are extensive with strong empirical results i only have one main concern the target node is picked from a distribution based on the log of the number of nodes in the class but not much rationale is offered on why this particular distribution is chosen is there any guiding principle would the model benefit from more elaborately designed distributions the paper is well written and technical sound with strong results docsepthis paper investigates the problem of classimbalanced node classification on graph to address this the authors first conduct a case study on a dataset to analyze the underlying reason for inferior performance on minor class nodes then a model named graphens which consists of two key components is proposed to deal with the imbalance issue experiments on several datasets show that the proposed model can outperform the baselines 
however there are several major concerns in the paper 1 a case study is conducted to prove the hypothesis that we hypothesize that it is in fact more serious to overfitting to neighbors of minor nodes than to overfitting to the node feature itself and this study fundamentally lays the foundation for the following part of this paper however there are several issues in this case study 11 a benchmark dataset pubmed is employed to conduct the case study however on different datasets node representations calculated by gnns may be influenced differently by the nodeself features and the neighboring information due to the distinct characteristics of datasets therefore only employing one dataset here cannot demonstrate the existence of this phenomenon on different datasets with different distributions 12 two traditional baselines are utilized namely reweighting and oversampling approaches however many advanced approaches are proposed recently to deal with this problem only comparing with the most conventional approaches is less convincing to prove this hypothesis 13 in the section of overfitting to minor classes based on figure 1 a claim is provided that this result implies that the existing approaches are prone to memorize minor class nodes however this conclusion is not quite reasonable firstly the case study only resorts to two conventional approaches and they cannot represent all the existing approaches for this problem secondly from figure 1 we can only observe that the existing approaches are prone to overfit on minor classes and the performance on test of them on minor classes is not good compared to the proposed graphens thus the conclusion is overclaimed to some extent 14 in summary of the case study we can achieve a conclusion that for a minor node its neighboring information might provide a larger influence on the performance than its own features however it is hard to say the neighboring information is the key factor to impair the performance of minor nodes therefore it seems that the case study cannot comprehensively provide the fundamental basis for the proposed model 2 in the introduction a claim is presented that nevertheless gnns trained with graphsmote still suffer from neighbor memorization when the number of minor nodes is limited why it still suffers from neighbor memorization more details should be given 3 the proposed model graphens contains two key components namely neighbors sampling and saliency masking on features from the main contribution of this paper we can see that the neighboring information is the key reason that impairs the performance of minor nodes however from the ablation study sect 54 we can see that these two components generally have a comparable influence on the performance the authors should give more explanations 4 the illustration of sect 41 is a bit confusing especially the utilized notations for example is vminor or vtarget an example node or a group of nodes 5 in the experiments in order to generate the imbalanced settings on the citation networks for the minor classes only two labeled nodes are available for training the extreme scarcity of minor classes is not a reasonable setting and might import high bias or uncertainty into the experiments which may damage the persuasiveness of the experiments accordingly and even the case study 1 the case study is not convincing enough to lay the foundation for the model 2 more details should be given for some confusing points
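Since one reviewer asks how a virtual label can be created for a synthesized node, the generic mixup-style construction that such ego-network synthesis typically builds on is sketched below; the beta-distributed mixing coefficient, the choice of the second node, and the soft-label construction are illustrative assumptions, not necessarily what graphens does.

```python
import numpy as np

# Generic mixup-style synthesis for a minority node (illustrative only).
def synthesize_node(x_minor, y_minor, x_other, y_other, num_classes, alpha=1.0):
    lam = np.random.beta(alpha, alpha)
    x_new = lam * x_minor + (1.0 - lam) * x_other          # mixed feature vector
    y_new = (lam * np.eye(num_classes)[y_minor]
             + (1.0 - lam) * np.eye(num_classes)[y_other])  # soft "virtual" label
    return x_new, y_new

x_new, y_new = synthesize_node(
    x_minor=np.random.rand(16), y_minor=2,
    x_other=np.random.rand(16), y_other=0, num_classes=3,
)
```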
### Summary:
|
although reviews were initially a little polarized they trend toward accepting the paper after rebuttal and discussion the most negative review raised issues of datasets baselines and experiments and various details that they find confusing these concerns were not shared by the other reviewers for the most part following a detailed rebuttal the most negative reviewer ended up siding with the more positive reviewers
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper proposes a heuristic nonsafe screening rule to deal with l1regularized estimation problems such as linear regression and logistic regression the proposed method can be viewed as a generalization of the strong rule used in the glmnet which only used firstorder information the paper reports experiments on both synthetic data and realworld data to illustrate the performance of the proposed method strengths the paper addresses an interesting problem in l1regularized estimation for linear regression and logistic regression screening rules are an effective strategy to speed up such estimation although the proposed rule is heuristic in nature the simplicity in its formulation and the effectiveness shown in the experiments offers some advantages of the proposed method this is also a notsocommon approach that makes use of secondorder information for screening experiments are exhaustive six wellknown alternative methods are compared on a wide variety of synthetic and realworld data weakness the authors may consider carrying out experiments on other settings covered by the proposed method such as poison regression and elastic net it may also be interesting somewhat orthgonal to understand how the proposed method performs or scale compared to sgd based approach on very large datasets limitations are discussed in the paper docsepa hessian screening rule for lasso and its generalized linear model extension for logistic regression was presented to take advantage of the highorder information for more efficient screening specifically in cases with highly correlated predictorscovariates the proposed hessian screening rule together with several speedup tricks has been shown to be effective in both simulated and real datasets the proposed hessian screening rule extends the strong rule and working set by taking advantage of the highorder information for more efficient screening specifically in cases with highly correlated predictorscovariates the hessian screening rule together with several speedup tricks has been shown to be effective in both simulated and real datasets it is not clear based on the current presentation why the hessian rule can be less conservation from section 3 in addition to theorem 31 some theoretical analysis for that may help further improve the quality of the submission the actual final screening in fact was based on the modifications as described from line 133 to 151 it may be interesting also to have ablation comparison to see clearly what led to improved efficiency since the screening rules are not safe in particular for logistic regression in addition to investigating the efficiency the authors may also need to provide the performance comparison with respect to both predictor selection and model prediction accuracy in the real data experiments the authors may want to provide some explanations why the hessian rule performs significantly worse on arcene and rcv2 datasets for which p is much larger than n especially for arcene it is clearly not the case when p is similar as n as discussed in section 5 finally there are language problems in the submission for example in line 188189 page 5 this is not a surprising find in line 257258 page 7 we also stop the path whenever the number of coefficients ever to be active predictors exceeds p the number of coefficients can be equal to p but will never exceed p the authors may need to improve the presentation of the submission na docsepthis paper studies the predictor screening rules over the lasso optimization problem it proposes a 
hessian screening rule which utilizes the secondorder information this rule is effective not only in screening but also in accurate warm starts updating the secondorder information has high computational complexity and to deal with this the work relies on the sweep operator in the experiments the proposed rule is compared to many baselines and outperforms them significantly on both simulated and realworld data originality previous works on screening rules overlooked the use of secondorder information thus i think the direction of this work is novel the proposed rule takes the warm start and the hessian matrix computation problems into account and resolves them soundly besides the insightful discussions on the proposed screening rule are provided they are helpful to understand the rule and to differentiate this approach from previous methods therefore i think the contributions of the work are novel quality i am satisfied with most of the contents of this paper the paper presents a clear overview of the question and previous rules both in words and math the proposed rule is based on the hessian matrix and is actually in the form of a secondorder taylor approximation speeding up the hessian matrix computation is based on the sweep operator and the warm starts also benefit from the hessian matrix these arguments are demonstrated in the experiments as far as i checked the theoretical analyses have no problem however i have a concern with the study method normally people select the predictors for better fitting accuracy but this paper relies heavily on the time cost and the minimum number of active predictors to measure the performance i am curious about why fitting accuracy is not included clarity the paper is clearly written for me i can easily follow the contents on the approach and the experiment significance the idea of the approach is sound and practical performance is better than the baselines however because only the lasso problem is involved i am afraid that the audience will be very limited therefore i think the significance of this paper is limited yes
therefore most importantly the idea using theorem 31 to predict the variable c working set criterion of the next lambda has been already known therefore i do not think the concept of hessian screening is novel rosset2004 also discusses a similar idea of approach hessian based update based on an essentially quite similar theoretical property to theorem 31 further this paper also provides the error analysis of the predictor combination with strong rule and additional adjustment would be novel but its technical significance is a bit marginal because these are quite simple heuristics techniques in the updating the hessian paragraph have been also known the same techniques repeatedly discussed in the path following literature the warm starts has also been widely known eg park2006 minor comments since theorem 31 has been widely known it should clarify more explicitly rather than noting it only in the footnote in section 5 limitations were discussed
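For readers who want the quantities under discussion written out, here is a generic rendering, in standard lasso notation, of the kkt-based path prediction that second-order (hessian) screening relies on, together with the first-order sequential strong rule it is compared against; this is background notation, not the paper's exact derivation.

```latex
% Generic second-order screening quantities vs. the first-order strong rule.
\begin{align*}
  &\text{kkt on the active set } A \text{ with signs } s_A:\quad
    X_A^{\top}\!\left(y - X_A \hat{\beta}_A(\lambda)\right) = \lambda\, s_A
    \;\Longrightarrow\;
    \frac{d\hat{\beta}_A}{d\lambda} = -\left(X_A^{\top} X_A\right)^{-1} s_A,\\[2pt]
  &\text{predicted correlations at the next } \lambda_{k+1}:\quad
    \hat{c}(\lambda_{k+1}) \approx c(\lambda_k)
    + (\lambda_{k+1} - \lambda_k)\, X^{\top} X_A \left(X_A^{\top} X_A\right)^{-1} s_A,
    \qquad c(\lambda) = X^{\top}\!\left(y - X\hat{\beta}(\lambda)\right),\\[2pt]
  &\text{sequential strong rule (first-order): discard predictor } j \text{ if }
    \left|c_j(\lambda_k)\right| < 2\lambda_{k+1} - \lambda_k .
\end{align*}
```

The debate in these reviews is essentially whether using the path derivative above to predict the next working set is a new screening idea or a known predictor-corrector ingredient from the path-following literature.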
### Summary:
|
this paper proposed a hessian screening rule for lasso and its generalized linear model extension for logistic regression the proposed screening rules have been demonstrated to be effective on both simulated and real datasets the idea is novel and the evaluation is convincing the authors mention that extensions to mcp and scad may also be possible even though the objective may not be convex a brief discussion will be helpful
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this work proposes a differentiable datamodel cosearch framework autocog for gnns which formulates the problem as a bilevel optimization process and enables the endtoend discovery of graph structure and gnn architecture the experiment results validate the effectiveness of autocog to search for deep networks for graphs under either homophily or heterophily this paper provides an interesting perspective for automatically designing highperforming gnn architectures searching for better graph structure and model simultaneously however given that there already existed another paper 1 with the similar motivation and method im afraid the impact of this submitted work on the field of automl will be limited 1 graph differentiable architecture search with structure learning neurips 2021 the proposed approach is technically sound however some of the authors claims are not supported by either theories or experiments the presentation of the work is clear strengths 1 the presentation of the paper is clear 2 the experimental results of the proposed framework on datasets under both homophily and heterophily is better than or competitive with the baselines which demonstrates its effectiveness 3 the analysis with regard to the network depth and the augmented graph provides interesting insights weaknesses 1 the major concern for this paper is its limited novelty the idea of datamodel cosearch for gnns has already been proposed by a previous work 1 with the similar motivation problem formulation and approach as autocog however there is no discussion with this piece of important related work 2 the experiments are not extensive enough to support the authors claim first the graph nas baselines eg 23 and the gnn designed especially for graph heterophilyeg 45 are missing besides performance gain the necessity of searching for graph augmentation eg increasing the robustness to noisy graph structure lacks enough validation and the effectiveness of the tricks in 322 is not validated by the corresponding ablation study 3 there lacks the theoretical or empirical analysis with regard to the model complexity and the search efficiency refs 1 graph differentiable architecture search with structure learning neurips 2021 2 graphnas graph neural architecture search with reinforcement learning ijcai 2019 3 search to aggregate neighborhood for graph neural network icde 2021 4 geomgcn geometric graph convolutional networks iclr 2020 5 generalizing graph neural networks beyond homophily neurips 2020 considering the limited novelty and the unsolid experiments i recommend a rejection of the paper docsepthe authors propose a neural architecture search framework with data model cosearch for gnn especially under heterophily conditions the neural model search and input graph topology is optimized simultaneuously towards this the authors design a bilevel optimization framework the idea is interesting and important especially for graphs under heterophily the authors compare their work on different datasets with different degree of homophily the experiments establish superior performance of the proposed architecture the idea is interesting and can help in improving gnns performance by finding effective model parameters this is interesting especially in case of graphs under heterophily which are challenging to model the technical contribution to solve this problem is interesting the experiments are designed to compare with different settings of heterophily further ablation studies are included to justify importance of components the paper 
is very well written definitions of homophily etc are clearly defined the motivation is crisp overall the work is good the problem tackled is interesting the method is innovative for the task in hand experiments are performed on a wide number of datasets under different setting there is a small doubt on reproducibility due to nonavailability of code and clarity on parameters of baselines also i am not very confident on the baselines used are there works that tackle heterophily specifically that need to be compared with overall the paper is good just need some clarity on the setup of baselinestheir parameters etc docsepthis paper proposes a joint framework to search both model architecture and graph structure the searching problem setup is well defined to solve the problem of directly applying gdas to gnn search the progressive search space is introduced to ease the search difficulty and identity mapping is introduced to relieve the oversmoothing and overfitting issues the experimental analysis is comprehensive and the results are significant this paper can bring some insight with two aspects 1 the joint searchingoptimization of both architecture and data augmentation 2 the new automl approach for graph learning the main components including the model search space graph augmentation mechanisms optimization bilevel optimization and some stabilize tricks are sound and well described the experiment setup is clear the designed mothed is evaluated with several ablation studies and the results look significant compared with several sota gnn methods however the search efficiency is not evaluated and there lacks the comparison with other automl methods for gnn such as graphnas gao et al and snag zhao et al the presentation in this paper is smooth and easy to follow but there are several minor issues line 25 trained line 41 because of line 79 analysis on line 99 have inconsistent of overxxx and overxxx positive 1 novelty this paper proposes a joint framework to search both model architecture and graph structure which is new to the literature 2 technically sound the main components including the model search space graph augmentation mechanisms optimization bilevel optimization and some stabilize tricks are sound and well described 3 experiment results are analysised in different aspects negative 1 lack of efficiency evaluation as known efficiency is an important problem in automl since this paper searches for both model and data the effiiency issue should be seriously considered 2 lack of comparison with automl baselines in section 2 the authors mentioned several nas for gnn approaches but it seems that the main comparison in experiments does not include these methods 3 code is not provided and running environment is not clear overall this paper brings something new to the automl and gnn literature the paper is sound and wellwritten but additional experiment is needed so i recommend a revision to add some new experimental comparison in terms of effieincy and baselines
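The bilevel formulation that all three reviews refer to can be stated generically as below; the symbols (alpha for architecture parameters, A-tilde for the learned graph structure or augmentation, w for network weights) are placeholders of mine, not the paper's notation, and in darts/gdas-style search the inner argmin is approximated by one or a few gradient steps on w rather than solved exactly.

```latex
% Generic bilevel data-model co-search objective; notation is illustrative.
\begin{align*}
  \min_{\alpha,\,\tilde{A}} \;\;
    & \mathcal{L}_{\mathrm{val}}\!\left(w^{*}(\alpha,\tilde{A}),\, \alpha,\, \tilde{A}\right) \\
  \text{s.t.}\;\;
    & w^{*}(\alpha,\tilde{A}) \in \arg\min_{w}\;
      \mathcal{L}_{\mathrm{train}}\!\left(w,\, \alpha,\, \tilde{A}\right).
\end{align*}
```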
### Summary:
|
the authors propose a bilevel endtoend optimization process that encompasses both discovery of graph structure and gnn architecture initial concerns regarding closely related work missing baselines and lack of efficiency evaluation have been addressed no major concerns are left and all reviewers recommend acceptance due to the works novelty and empirical rigour i agree with their assessment and am convinced that the provided work would contribute to the diversity of problems presented at automlconf
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors propose a method to train graph neural networks more efficiently and hence make them amenable to largesized graphs the main idea is to use hash maps on the graph nodes more specifically the authors propose to map the convolution matrices n x n and the feature matrix n x d to c hash buckets following this the proposed algorithm only needs to access the cdimensional sketched convolutional matrices and feature vectors this leading to sublinear complexity the authors first present a framework based on count sketch and tensor sketches and derive feature update equations that only depend on the sketched data however using these updates in a straightforward manner leads to instability during training to address this the authors propose to use locality sensitive hashing lsh and develop a practical algorithm for sketching based gnn training the proposed algorithm then studies for its efficiency and performance across a few different datasets and baseline comparisons are provided on average the proposed algorithm seems to perform better compared to the one other sublinear time baseline algorithm in terms of efficiency and performance in terms of efficiency the algorithm outperforms other non sublinear algorithms and also has similar performance strengths 1 the presented algorithm is interesting and the usage of lsh in the context of gnns is novel although the presented ideas are based mainly on existing work on using lsh for deep learning 8 and 9 the present work differentiates itself by applying those ideas to develop a framework to train graph neural networks 2 sublinear complexity is particularly attractive and the presented results show that the proposed algorithm is good at taking advantage of the redundancy across graph nodes by using hash maps the advantages of the algorithm in terms of efficiency and training are clearly shown in the experiments provided 3 the paper is technically sound and has a good flow of ideas weaknesses 1 one of my main concerns is the presentation of the paper in my opinion the main contribution of the paper is a discussion of how lsh can be used to train gnns as opposed to using tensor sketching which has been allocated a lot of real estate in the paper on many occasions the details of the main algorithm are deferred to the appendix for example the paper would greatly benefit from having algorithm 1 in the main body in my opinion this algorithm and the other details presented in section c are more important than the presented error bounds in section 32 which could have been in the appendix instead 2 my second concern is regarding some of the experimental results a in fig 1a why were the weights chosen to be identity matrices more elaborate experiments with the learnt weights will make these results more compelling b table 1 should include results for gat as well in general gats are more likely to face complexity issues while training because of the expensive attention operation i am curious to see the effect of the proposed algorithm on gats 3 another observation is that on many occasions the authors use verbal explanation of techniques rather than equations and a mathematical presentation for example lines 252265 might be better explained using a series of equations the same can be said about most other subsections presented in section 33 in general the paper need some restructuring to address the following points a provide more details of the actual algorithm in the main paper especially for online learning of sketches and avoiding on in loss evaluation 
b the presented error bounds are not very informative and could be moved to the appendix 4 the paper could include more references for related work given that the field of sampling and sketching for graph neural networks is quite wellstudied see for example 1 calandriello daniele et al improved largescale graph learning through ridge spectral sparsification international conference on machine learning pmlr 2018 2 zheng cheng et al robust graph representation learning via neural sparsification international conference on machine learning pmlr 2020 3 srinivasa rakshith s et al fast graph attention networks using effective resistance based graph sparsification arxiv preprint arxiv200608796 2020 the authors have discussed the limitations of their work docsepscaling gnns to large graphs is an important area of research this paper proposes sketchgnn which uses polynomial tensor sketch pts and locality sensitive hashing lsh to sketch the graph convolution operations in gnns with sublinear training complexity to be specific pts is used to estimate the nonlinear gnn update rule and lsh is used as a datadependent method for learnable sketches experiments on several graph datasets ogbnarxiv reddit and ogbnproduct prove the low memory and training time complexity of the proposed sketchgnn method which could achieve comparable performance with respect to other strong benchmarks including graphsaint and vqgnn ablation tests over sketching ratio c are also provided strength this paper is wellwritten and provides a nice way to incorporate tensor sketch and lsh into gnns which might inspire future work in this area some adaptations are incorporated to boost the performance of the proposed algorithm including learnable coefficients for the nonlinear activation function and datadependent hashing with learnable lsh weakness as shown in table2 and table3 graphsaint uses comparable train memory but longer epoch time and could achieve better performance than sketchgnn on average so is the proposed sketchgnn method good enough as an alternative for graphsaint as mentioned in line 398399 the needed sketch ratio cn for sketchgnn to achieve competitive performance reduces as graph size grows however only 1 sketch ratio is reported for each graph dataset it would be nice to see results from multiple sketch ratios for each dataset to confirm this statement yes docseptraining gnns on large graphs is a longstanding challenge this work proposes a sketchbased algorithm whose training time and memory grow sublinearly with respect to graph size during training specifically the author provides a novel protocol for sketching nonlinear activations and graph convolution matrices in gnns the author further utilizes lsh for improving the quality of sketches strengths 1 to the best of my knowledge the sketching method proposed in this paper is novel i really like the idea of propagating the sketches instead of merely storing sketched activations in the forward pass 2 the proposed technique can be applied to different kinds of gnns 3 the theoretical analysis is correct and insightful although i would like to note that it has a gap to the empirical results weaknesses 1 i suggest the author tone down the claim sketchgnn is the first sketchbased training algorithm for gnns and discuss the difference to the similar work exact 1 in fact 1 already applied the random projectionbased sketching to the node embeddings if you take a close look at fig 1 in 1 the main differences in sketching between this work and 1 are 1 sketchgnn applied the
sketch during the forward pass and propagate the sketched node embeddings layerbylayer while the sketching in 1 only affects the backward pass since it uses the original node embeddings for propagation during the forward pass 2 sketchgnn and exact sketch the node embedding along different axes 1 estimates c x w as c x w ≈ c x w r1 r1^t where r1 ∈ R^(d×r) is the sketching matrix with c ≤ d for sketchgnn it estimates c x w as c x w ≈ c r2 r2^t x w where r2 ∈ R^(c×n) however 1 cannot accelerate the training process because they cannot propagate the sketched embeddings like sketchgnn if i was wrong i would be more than happy to be corrected 2 based on the weak1 i checked the performance gap between 1 and this work i found that 1 has much less accuracy drop eg sketchgnn has 1 accuracy drop on the ogbnarxiv while 1 is less than 05 my guess is that this is because this work introduces errors which get accumulated as they pass layerbylayer during the forward pass theorem 1 in this work is actually indirect to the optimization process in the sense that it does not characterize the gradient dynamics in fact if you apply any sketching technique during the forward pass then the gradient is provably biased 2 see eq 11 in 2 and its surrounding explanation so i suggest discussing the gradient bias in the theoretical analysis which is not very hard to do but will improve the soundness of this work 3 the overall algorithm is too complicated in the sense that it has at least four additional hyperparameters 1 the truncation order of the power series for approximating the activation function 2 sketch ratio cn 3 the subset size b for lsh 4 the update interval tlsh for lsh too many additional hyperparameters highly limit the practical usage of the method because the cost of extra hyperparameter tuning grows exponentially 4 important what do you mean by train memory in table 1 from l368 it seems to be the peak memory usage during training i checked the codebase which is based on pyg i believe the reported data is wrong in the sense that the fullbatch gnn baseline on ogbnarxiv should roughly require 1g memory roughly 700mb for activations to train 3 5 what do you mean by epoch time is it the time for training one epoch for graphsaint and vqgnn each epoch contains several iterations while the full graph counterpart only has one iteration per epoch could you make this part more clear also what is the epoch time of sketchgnn versus graphsaintvqgnn on ogbnproducts 6 the authors do not discuss the compatibility of batchnorm and dropout i believe they can be applied to the sketched node embeddings however i do not think they are useful in this case especially for dropout because sketched embeddings are denser compared to the original one 7 one last question is about the activations i quite like the idea of overcoming the nonlinearity by approximating the activation function as a power series besides sigmoid could the author do an ablation study for approximating the relu function using learned coefficients since we almost never use sigmoid as the activation function in gnns typos l234 ldots at last i would like to say i like the idea of this paper and i think the idea is elegant if my questions are discussed properly i will raise my score update after reading the rebuttal most of my questions are addressed accordingly i would like to raise my score to 7 1 exact scalable graph neural networks training via extreme activation compression 2 faster neural network training with approximate tensor operations 3 gnnautoscale
scalable and expressive graph neural networks via historical embeddings see above docsepauthors tackle the problem of scalability of gnns to very large graphs they propose sketchgnn method that approximates computation of a typical messagepassing gnn in time that is sublinear of the number of nodes this is achieved by compressing the graph ie its adjacency matrix or generally a convolution matrix and the node feature matrix by using count sketch and devising an approximation to a gnn layer computation that is built on polynomial tensor sketch with locality sensitive hashing their approach is applied to multiple gnn kernels gcn gat and graphsage on 3 datasets ogbnarxiv ogbnproduct and reddit sketchgnn shows strong classification performance competitive with vqgnn however graphsaint is comparable or better in terms of runtime efficiency sketchgnn significantly improved over compared models on the evaluated mediumsized datasets strengths motivation this paper tackles an important problem scaling gnns to graphs of millions or billions of nodes has clear industrial applications clarity the paper is very well written and easy to follow correctness i have read through and believe the method derivation to be correct however i do not have a deep expertise in sketch and lsh methodology originality the proposed method achieves similar goal to vqgnn but by different means it builds on existing polynomial tensor sketch and locality sensitive hashing devising online learning of sketches such that the whole stack of gnn layers is trainable without realizing explicit representation of all n nodes weaknesses limitation authors do not perform experiments on mag240mlsc contains 240m nodes i believe their approach would still not scale to graphs with hundreds of millions of nodes as the sketch ratio would need to be very low which may not be practical i would like to see a discussion from the authors on this point achieved prediction performance is comparable or lower to graphsaint a samplingbased approach compared to vqgnn the prediction performance appears to be equivalent minor comments a diagram illustration would further improve readability of the paper typos line 131 sorely solely line 234 ldots command in summary the proposed method is a valuable contribution albeit its practical limitations should be further discussed in the experiments section and conclusion method limitations are not explicitly discussed nor any potential negative societal impact
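the node-dimension sketching these reviews discuss, mapping n nodes to c hash buckets with sketch ratio c/n, can be illustrated with a generic count sketch of a feature matrix; the snippet below is standard count sketch only, not the paper's sketchgnn pipeline, and all sizes are made up

```python
# Generic count sketch of a node-feature matrix along the node dimension, to make the
# "n nodes -> c hash buckets" idea concrete. Illustrative only; not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
n, d, c = 1000, 16, 100                     # n nodes, d features, c buckets (sketch ratio c/n = 0.1)
X = rng.normal(size=(n, d))                 # node feature matrix

bucket = rng.integers(0, c, size=n)         # hash h: node -> bucket
sign = rng.choice([-1.0, 1.0], size=n)      # sign hash s: node -> {-1, +1}

# sketched features SX = S @ X with S in R^{c x n}: SX[b] = sum over nodes i with h(i) = b of s(i) * X[i]
SX = np.zeros((c, d))
np.add.at(SX, bucket, sign[:, None] * X)

# count-sketch decode of one node's features: (S^T S X)[i] = s(i) * SX[h(i)], an unbiased but noisy estimate
x0_hat = sign[0] * SX[bucket[0]]
print(np.linalg.norm(x0_hat - X[0]) / np.linalg.norm(X[0]))   # relative error due to bucket collisions
```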
### Summary:
|
the authors propose the use of sketch gnn based on compressing relevant matrices and sketching typical gnn operations via hashing to enable better scaling of graph neural networks to very large graphs the reviewers are all in favor of accepting the paper with three accepts and one weak accept and therefore i recommend its acceptance i encourage the authors to take into account the reviewer comments as they already indicated they will do during the rebuttal period when preparing the camera ready version
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper motivates and provides a model to generate video frames and reconstructions from nonsequential data by encoding timecamera position into the model training the idea is to allow the model to interpolate and more importantly extrapolate from frames and learn the latent state for multiple frames together the same techniques are also applicable to 3dreconstruction jump is very closely related to gqn with the main difference being that the randomness in jump is learned better using a global prior the evaluation is reasonable on multiple synthetic experiments including a 3dscene reconstruction specially created to showcase the consistency capabilities in a stochastic generation paper is mostly clear but more priority should be given to the discussion around convergence and the latent state to me the 3dreconstruction usecase and experiments are more convincing than the video generation interpolation between frames seems like an easier problem when specifically trained for on the other hand video algorithms trained on sequential prediction should be able to go forward or backward in time moreover jumpy prediction throws away information the middle frames that might lead to a better latent state the experiments also show certain frames where there seems to be a superposition of two frames in this aspect sssm is better despite having worse video quality for video experiments prediction of more complex video with faraway frame predictions would solidify the experiments the narratives seem somewhat limited to show what kind of advantage nonsequential context gives you reliable convergence less variance of training progress of the method seems to be the strongest argument in favor of the jump it is also unclear whether having a global latent variable is why it happens more discussion about this should probably be included considering that jumpy prediction seems to be the cause of this better evaluation of the latent state might have presented a better understanding of what the model is really doing with different samples for example what is the model causes some frames to look like superpositionsdocsepthis paper proposes a general method for indexed data modeling by encoding index information together with observation into a neural network and then decode the observation condition on the target index i have several concerns regarding the way the paper using indices and the experimental result the strategy this paper use for indexed data is to encode all data in a blackbox which can be inefficient since the order of temporal data or the geometric structure of spatial data is not handled in the model these orders can be essential to make reasonable predictions since they may encode causal relations among those observations and certainly cannot be ignored another critical problem for this paper is that the relative time scale are not explicitly modeled in the context my worry is that when putting all those informative data into a blackbox may not be the most efficient way to use them on the other hand experiments in this paper look quite artificial since sequential and spatial modeling have multiple reallife applications it would be great if this method can be tested on more real dataset this paper does show some promise on sequence prediction task in a long range especially when the moving trace is nonlinear a reasonable uncertainty level can be seen in the toy experiments and the sample quality has some improvement over competitors for example jump does not suffer from those multimode issues 
these experiments can be further strengthened with additional numerical results for now this paper does not convince me about its method for modeling general indexed data both in their modeling assumption and their empirical results in my opinion there is still a long way to go for challenging tasks such as video prediction this paper proposes an extreme way to use indices but it is still far from mature docsepthis paper presents a method for predicting future frames of a video or unseen views of a 3d scene in a jumpy way you can query arbitrary viewpoints or timesteps and consistent way when you sample different views the scene will be consistent they use a vae that encodes the input video in a permutation invariant way which is achieved by summing the perframe latent vectors then they sample a latent vector using a draw prior this latent vector can then be used to render the videoscene from different timesviewpoints via an lstm decoder they test the model on several toy datasets they compare to video prediction methods on a dataset of moving shapes and 3d viewpoint prediction on a 3d mnist dice dataset pros the idea of developing new methods for viewpoint and video synthesis that allow for jumpy and consistent predictions is an important problem the paper is fairly well written the design of the model is reasonable it is a natural extension of vae viewpointfuture prediction methods cons all of the experiments were done on toy datasets these are also not wellestablished toy datasets and seem tailored to debugging the model so it is not particularly surprising that the method worked since the main contribution is not very novel from a technical perspective it is more about designing a model from existing wellestablished components this is a significant limitation the paper suggests with experiments that gqn generates predictions that are less consistent across samples but it is not clear exactly which design decisions lead to this difference why is this model more jumpy and consistent than gqn the paper claims that jump trains more reliably than several video prediction methods in figure 5 yet in the 3d viewpoint synthesis task they suggest that jump had trouble with convergence ie we ran 7 runs for each model and picked the best 67 runs for each model 1 run for jump failed to converge this is confusing for two reasons first why was this evaluation protocol chosen ie running 7 times and picking the 6 best if it was a posthoc decision to remove one training run then this should be clarified and the experiment should be redesigned and rerun second is the implication that jump is more stable than video prediction methods but not necessarily more stable than gqn for viewpoint prediction the paper should consider citing older representation learning work that deals with synthesizing images from multiple viewpoints for example m tatarchenko a dosovitskiy t brox multiview 3d models from single images with a convolutional network eccv 2016 there is insufficient explanation of the badj baseline what architectural changes are different the decision to use draw instead of a normal vae prior is unusual and not explained in much detail why does this improve the visual fidelity of the samples overall the paper does not present enough evidence that this model is better at jumpyconsistent predictions than other approaches it is evaluated only on toy datasets if the technical approach were more novel and if it was clearer where the performance gains are coming from then this could be ok but it seems to be 
a fairly straightforward extension of existing models
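the encode-sum-decode pattern the reviews above describe (encode each indexed frame, aggregate permutation-invariantly by summation, decode at an arbitrary query time or viewpoint) can be sketched as follows; this is a toy stand-in with invented sizes, not the actual jump architecture, which uses a draw prior and an lstm renderer

```python
# Minimal sketch of "encode each (index, frame) pair, sum, decode at a query index".
# Illustrative stand-in only; all dimensions and module choices are made up.
import torch
import torch.nn as nn

frame_dim, idx_dim, latent_dim = 32, 1, 64

encode = nn.Sequential(nn.Linear(frame_dim + idx_dim, latent_dim), nn.ReLU(),
                       nn.Linear(latent_dim, latent_dim))
decode = nn.Sequential(nn.Linear(latent_dim + idx_dim, latent_dim), nn.ReLU(),
                       nn.Linear(latent_dim, frame_dim))

def predict(context_frames, context_idx, query_idx):
    # permutation-invariant aggregation over the context: summing makes frame order irrelevant
    h = encode(torch.cat([context_frames, context_idx], dim=-1)).sum(dim=0)
    # "jumpy" decoding: any target index can be queried directly, no rollout through intermediate frames
    q = torch.cat([h.unsqueeze(0).expand(query_idx.shape[0], -1), query_idx], dim=-1)
    return decode(q)

ctx_frames, ctx_t = torch.randn(5, frame_dim), torch.rand(5, idx_dim)   # 5 context frames with their times
queries = torch.tensor([[0.9], [1.5]])                                  # interpolation and extrapolation
print(predict(ctx_frames, ctx_t, queries).shape)                        # torch.Size([2, 32])
```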
### Summary:
|
this paper proposes a probabilistic model for data indexed by an observed parameter such as time in video frames or camera locations in 3d scenes which enables a global encoding of all available frames and is able to sample consistently at arbitrary indexes experiments are reported on several synthetic datasets reviewers acknowledged the significance of the proposed model noted that the paper is wellwritten and the design choices are sound however they also expressed concerns about the experimental setup which only includes synthetic examples although the authors acknowledged during the response phase that this is indeed a current limitation they argued it is not specific to their particular architecture but to the task itself another concern raised by r1 is the lack of clarity in some experimental setups for instance where only a subset of the best runs are used to compute error bars and this subset appears to be of different size depending on the experiment cf fig 5 and the fact that the datasets used in this paper to compare against gqns are specifically designed overall this is a really borderline submission with several strengths and weaknesses after taking the reviewer discussion into account and making hisher own assessment the ac recommends rejection at this time but strongly encourages the authors to resubmit their work after improving their experimental setup which will make the paper much stronger
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper addresses an important symmetry in metalearning namely the context data consists of a set of datapoints in arbitrary order the model should thus be permutation equivariant to their order at the same time the data itself may have its own symmetries eg rotation which the network should likewise be equivariant to the authors follow a theorydriven approach proving in thm 2 that a function with these two types of symmetries may be factored and represented by a composition of functions reflecting each symmetry individually they then design a neural process np model equivcnp which reflects this result other works have used permutation equivariance and translation equivariance in nps but this is the first to incorporate other symmetry groups as a weakness the method description in section 4 is often imprecise and unclear as noted in the specific points below the method is somewhat novel it may be described as a combination of two existing methods permutationequivariant nps garnelo 18 and lieconv finzi 20 the experiments achieve good results but could be much more convincing they concern fitting only functions on lowdimensional space report only one metric and have limited comparison to baselines in particular it does not seem necessary to frame image reconstruction as a metalearning task however the strength of this paper is the theoryguided model design the theorem proved is wholly appropriate to the task and informs the model design very nicely in particular it makes use of the realization that dc to cxy can be gequivariant that is the symmetry transformation can be inferred from context and transferred onto the prediction function thus i suggest acceptance 6 specific point and questions sec 21 weiler cesa is costly i do not agree however cost is defined this method is on par or better than cnns sec 21 learning the objective from the data i do not follow can you explain further how this applies to equivcnp sec 21 how do the symmetries in this paper compare to those considered in maron et al 2020 on learning sets of symmetric elements which also considers permutation and lie group equivariance together defn 3 it seems here we are giving all the orbits the same weight should there be a measure on orbits considering the so2 action on the plane should not the circular orbits be weighted by circumference sec 42 lieconv can handle lie groups this is true but many other architectures such as steerable cnn can as well sec 42 we define distance in the lie group since we have lifted from the space x is it important to consider differences between measuring distance on x vs g sec 42 qi qj how do you define distance between orbits sec 42 since compact groups have a biinvariant metric why not use that instead of duv page 5 first eqn according to which distribution are uiqi sampled page 6 para 1 it is not clear exactly what precisely is meant by discrete or continuous data page 6 although we want to convolve ez i do not follow this sentence can you clarify it page 7 mnist already contains diverse group equivariance if this is the case then the equivariant model should still have good performance but perhaps with less improvement over the baseline correct page 8 why not test so2equivariant model on test data which has only been rotated does not the success of r times so2 over the other groups simply show that the others have misspecified or perhaps underspecified inductive biases page 11 second equation should be psig1 x g1 x since it is a leftaction this will not change the rest of the proof page 
11 proof of ii implies i feels incomplete can you define rho mathcalh here minor points sec 3 first equation pi zn ldots the subscripts should be pi1i not pii one can see this in a simple example or from the fact pi is a leftaction sec 3 by multz is sup over zm but contains zn sec 41 lifting into k elements how do we know the stabilizer is finite sec 42 note that not all lie groups are matrix groups as seems to be implied here sec 42 last eqn on page 4 the integral can be written over g since the support of gtheta limits to the neighborhood of u bottom page 4 footnote triangle not trigonometric page 5 first eqn sum should be over vj qj since the support of gtheta is limited to a neighborhood of 1 it may not be necessary to assume exp is surjective since exp is always surjective in a small neighborhood of 1 sec 43 first eqn the first line is a product of density functions but the second is a product of distributions this is unclear sec 43 equivdeepset is not introduced sec 43 a single output per one input is this backwards page 6 line 3 phi with psi sec 5 two questions should three questions page 11 proof switches between x and s make consistent page 11 what do the brackets mean this notation should be defined update based on author feedback i am grateful to the authors for their detailed replies to my questions i understand some of the minor points better and am happy they have revised some unclear parts overall i think this paper has some real strengths theoryguided design important problem novel methods i do feel similar to r3 that the experiments and applications could have been far more convincing weighing these strengths and weakness i still tilt slightly towards accept 6docsep 1 summary this paper presents equivcnp which extends conditional neural processes cnp to incorporate symmetries of the data eg rotation and scaling the approach utilizes a combination of lieconv finzi et al 2020 and deepset zaheer et al 2017 to achieve the equivariance in the data space and permutation invariance across the samples in a dataset they provide empirical results on a 1d regression task with synthetic and 2d image completion tasks using digital clock digits dataset which they constructed 2 decision i am recommending a weak reject for this paper first the experimental section does not sufficiently demonstrate the efficacy of the proposed approach second i find the organization of the paper hard to follow at times 3 supporting arguments 1 the papers motivation is interesting and intuitive just as convcnp gordon et al 2019 extended cnp with translation equivariance this work proposes to extend cnp with group equivariance it is generally difficult to build equivariance properties into deepnets just in the data space alone cnp requires the additional permutation invariance hence i think the work is of interest to the community 2 the main weakness of this paper is the experimental section for example convcnp gordon et al 2019 from last years iclr evaluated on the following datasets a 1d synthetic dataset plasticc dataset mnist svhn celeba32 celeba64 zsmm on the other hand this work conducted experiments on a 1d synthetic time series data and a digital clock dataset where the template is shared across train and test sets 3 the author explained mnist is not suitable for evaluating equivcnp with group equivariance translation scaling and rotation i am not convinced by this argument i think the paper can be strengthened if there is an experiment demonstrating that on a real dataset equivcnp is beneficial 4 
furthermore the image completion results do not compare to any prior benchmark i dont see a reason for not comparing i think the organization makes the paper difficult to read at times for example on page 5 they mentioned rho phi psi which are defined back on page 2 maybe reorganize such that the modeled conditional distribution in sec 43 is at the top of sec 4 similar to gordon et al 2019s paper organization 4 additional feedback adding equation numbers and referring to them in the text may help the readers to understand the paper better docsepthis paper proposes a member of the neural process family to empower the conditional np with group equivariance ability the idea is to replace the original encoder part of conditional np with a group convolutionbased encoder which is similar to convcnp gordon et al 2019 i am not familiar with the group convolution part but i am with the neural process it looks like lieconv is used as the main group convolution tool but such a tool was proposed in another paper finzi et al 2020 the authors did not highlight the contribution made based on lieconv the improvement or change on cnp is not significant but the new group equivariance ability is definitely a new and useful feature of np the experiment results verified the proposed idea this paper is the first to introduce the group convolution to np to the best of my knowledge one minor suggestion is that algorithm 1 is all about group convolution without any computation or inference of the np part considering the title of the algorithm it would be better to include some details of np or change the title docsepthe paper provides an extension of convolution conditional neural processes cnps to more general lie group equivariant cnps the development of the theory seems sufficiently clear to someone more familiar with the field however for newer readers it seems important to be familiar with background concepts and prior work this is not a penalizing point but rather just an observation the major concern with the body of work is the fact that the experiments seem lacking in application to more realistic data and scenarios for instance group invariance is not studied in the context of image classification which would be a direct application where transformations need to be accounted for though the image completion task seems interesting more complex transformations are not handled which leaves this work somewhat incomplete indeed the authors agree to this point in the discussion para how would this method scale to other more complex transformation groups such as illumination object 3d pose etc although it might seem easy to brush off these concerns as future work given that there has been a lot of work in these areas research that can address these other more challenging yet realworld practical cases needs to be addressed at least in part in a body of work
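the equivariance property these reviews keep returning to, f(g x) = g f(x), can be checked numerically in its simplest instance, a 1d convolution under the translation group; this is purely illustrative and much weaker than the lie-group setting the paper targets

```python
# Tiny numerical check of equivariance f(g . x) = g . f(x) for circular 1-d convolution
# and the translation (shift) group. Illustrative only; not the paper's model.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=64)     # input signal
k = rng.normal(size=64)     # convolution kernel (same length, circular)

def conv(sig):
    # circular 1-d convolution via fft
    return np.real(np.fft.ifft(np.fft.fft(sig) * np.fft.fft(k)))

shift = 7
lhs = conv(np.roll(x, shift))     # f(g . x): transform the input, then apply the map
rhs = np.roll(conv(x), shift)     # g . f(x): apply the map, then transform the output
print(np.allclose(lhs, rhs))      # True: circular convolution is translation equivariant
```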
### Summary:
|
this paper provides a natural combination of conditional neural processes with lieconv models it is a good step forward for stochastic processes with equivariances while there is still room to improve the experiments the authors provided a good response to reviewers and the paper is a nice contribution
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this article solves the decpomdp problem based on maac and qmix algorithms when the agents are homogeneous and optimized only through team rewards it is easy to learn similar policies for each agent this makes the multiagent algorithm finally converge to a locally optimal joint policy if each agent can complete different parts of the overall goal the joint policy that converges to is obviously better many current works model the above problems as task assignment or role assignment problems however the agents with different roles in these works are basically rulebased and the tasks are all manually defined there are also some works that unsupervisedly generate different strategies and roles by introducing diversity constraints but the generation process has nothing t o do with the task in order to solve the shortcomings of the above methods this paper proposes a marl method based on reward shaping to encourage the division of labor between agents and at the same time introduces two regularize terms to solve the problem of too similar agent policies at the initial training stage at the same time reward shaping and reinforcement learning are optimized simultaneously forming a bilevel optimization problem the paper also designed three tasks that emphasize division of labor to verify the effectiveness of the algorithm 1 neither the maac nor the qmix algorithm on which the paper is based has good scalability although the independent learning algorithm is simple it can achieve better performance on many tasks and has good scalability this paper should additionally use independent learning algorithms as baselines and apply the intrinsic rewards proposed in this paper to independent learning algorithms 2 the three tasks used to verify the algorithm in this paper are all specially designed with a strong emphasis on division of labor i think the paper should additionally explain the limitations of the algorithm such as which scenarios will be more effective and which scenarios will limit the learning ability of the agent 3 the optimization process of bilevel problems is very unstable the algorithm proposed in this paper contains many hyperparameters and the sensitivity of the algorithm to hyperparameters should be shown in the experimental partdocsepthe paper discusses an analysis of the emergence of individuality in a multiagent system based on reinforcement learning the emergence of individuality is based on intrinsic reward the intrinsic reward is assigned to each agent by forcing them to be different according to a given distribution the reviewer struggles to see this mechanism particularly insightful this does not appear to the reviewer as emergence of individuality his looks more like the emergence of different behavior which can be seen as individuality but it is more like forcing different behavior not individual behavior as interpreted normally when we study societies in my opinion strength according to the reviewer any study that sheds a light on human phenomena using reinforcement learning is fascinating weakness the model used for studying the emergence of individuality is not convincing in my opinion in fact the authors essentially impose the emergence of individuality from outside by giving a reward for being different the reviewer wonders if the emergence of individuality at the end of the day might actually emerge just by considering individual learning models i would say that this is what you observe in many multiagent systems with individual learning models what is the main 
different in this case this is not completely clear to the reviewer the choice of the mechanisms maac qmix etc is not sufficiently discussed by the authors the goal of the evaluation is unclear the goal of this evaluation is not the emergence of individuality in my opinion in general i would say that the question that the authors try to address is indeed a fascinating one however the reviewer really struggles to understand the choice of the mechanism selected by the authors the reference to biology are not convinced the authors say biologically the emergence of individuality is attributed to innate characteristics and experiences however as in practice rl agents are mostly homogeneous we mainly focus on enabling agents to develop individuality through interactions with the environment during policy learning is it not something that happens if you have individual models in any case it seems to me that the individualitydifferences are artificially imposed on the system itself overall the contribution of the paper is not significant in my opinion since unfortunately the work does not really provide insights about emergence of individuality which is the main stated goal of the paper questions 1 how do you map the mechanism used for the emergence of individuality to real situations this appears in a sense as your goal but it is difficult to see how it is possible to interpret the results of the simulation considering it as a simulation of a realworld society 2 in the introduction the authors say analogically the emergence of individuality should also be essential for multiagent cooperation i would claim that this is difficult to prove by analogy as stated by the authors 3 the goal of the evaluation is not completely clear in fact you simulate a variety of games etc however it is unclear how this is related to the fundamental problem of the paper which is the emergence of individuality 4 why do you use positive distance what is the motivation of this choice 5 the discussion about qmix is not completely clear why is it necessary how do you link this with the goal of the paper is there any clear mapping with realworld situations docsepthis paper tackles the problem of exploration in multiagent rl formulated as a decpomdp the authors propose to shape the reward using the output of a classifier that tries to determine which agent saw a particular observation moreover the authors propose some regularization schemes to break ties early on in the training then it is shown that the proposed reward shaping term can be integrated into two popular marl algorithms that use centralized training maac and qmix the integration into maac is relatively straightforward because maac uses independent critics whereas the integration into qmix is more involved due to the qfunction mixing step i believe this is a very important problem being tackled by the authors even in singleagent rl exploration is a major issue this only gets even more apparent in the multiagent setting i believe that the paper was wellwritten and the introduction made it very clear what exactly the papers contributions were which i greatly appreciate unfortunately im not sure that i believe that the proposed bias is truly useful i understand the intuition we want agents to specialize their observations such that it is easy to predict which agent is receiving any particular observation however wont this be counterproductive in most practically interesting domains for example consider a team of robots working together in a factory or a household 
they will constantly be changing their environment as they take actions toward operating the factory equipment or cleaning the kitchen which means the observations they receive will always be changing but the proposed reward shaping mechanism in this paper would fight against this progress because it would encourage the agents to engage in trivial behavior just to be able to see the same observations over and over in my mind it would be important to consider the dynamics of how the classifier pi oi is changing over time another option could be to simply encourage the agents to learn different policies maybe measured via kldivergence of pagent 1a s and pagent 2a s i believe that this bias would be sufficient to solve the example presented in figure 1 does this bias seem reasonable and if so how does it compare against eoi basically more broadly i would have liked to see experimental comparisons that convince me that eoi is the correct bias to use versus other natural biases whereas the current experiments only seem to compare against nonshaping methods like roma and hl some other questions 1 how significant is the fact that you are ignoring secondorder effects in solving the bilevel optimization in appendix a2 have the authors conducted preliminary experiments to prove that it doesnt make much difference i understand that this practice is standard in methods like maml but it would be nice to verify that it doesnt matter in this setting either 2 looking at the shaped reward computation r alpha pi oi it seems like if the classifier were naive and simply outputted a uniform distribution you would still be giving positive intrinsic reward in that case might it make more sense instead to consider the divergence between pi oi and the uniform distributiondocsepthis paper contributes a method based on rewardshaping to encourage the emergence of distinct agent behaviors in fullycooperative multiagent reinforcement learning marl within the paradigm of centralized training with decentralized execution they propose to learn a classifier that predicts agent identity given the agents observation and use the classifier probability as an intrinsic reward that is added to the environment reward as such there is a positive feedback loop whereby agents are rewarded for being distinguishable and hence generate data that is easier for the classifier to predict correctly they provide two regularizers to help kickstart the positive feedback loop when agents policies are similar at initialization the paper implements the method called eoi on top of two base marl algorithms using three illustrative environments where optimum team performance requires distinct behaviors among agents the paper shows that 1 eoi outperforms the two base algorithms and other baselines that address exploration and diversity 2 the two regularizers improve performance and help the intrinsic reward to serve as an inductive bias 3 agents do perform distinct behaviors they also show comparable performance on a benchmark starcraft ii micromanagement task i give a rating of 5 for the following reasons classificationbased intrinsic reward have been used for diversification in marl in previous work lee et al 2020 yang et al 2020 but this paper provides a different formulation along with effective heuristics to speed up the initial training of the classifier the experiments are clear in showing the benefit of eoi and eoi regularizers when built on top of the two base methods however there are main points of concern i am willing to raise my score 
if the main points below are sufficiently addressed this paper incorrectly positions itself against the previous works lee et al 2020 and yang et al 2020 in section 2 the authors say none of these studies appropriately link behavior diversification to the success of the cooperative task however in section 3 the authors say that eoi directly correlates individuality with the task by intrinsic reward making it seem that eoi differs from lee and yang in that regard this is not true the intrinsic reward in eoi is the probability of correct classification of an agent identity the intrinsic reward in lee and yang are based on correct classification of a latent skill variable neither agent identity nor a latent skill variable are directly linked to task success the intrinsic rewards in eoi and in these previous work encourage the emergence of distinguishable behavior none of them have any direct link to task success the authors either need to explain their claim in more detail or amend the paper another main concern is about the case where eoi is built on top of qmix the authors define qpi oi qaioi which takes in the vector of individual actionvalues qaioi and define a td error loss with the intrinsic reward pio in the first equation in section 33 the td error serves purely to estimate the expected cumulative intrinsic reward under the current policy it does not find some qfunction that optimizes the expected cumulative intrinsic reward however the initial formulation on page 3 where the total reward is the environment reward plus the intrinsic reward implies that the agents should maximize the expected cumulative intrinsic reward as well therefore it is not clear at all how the intrinsic reward is being maximized in the qmix case points that did not affect the decision but where the paper can be improved 1 all equations should be numbered for easier reference 2 first sentence in abstract needs a language check 3 the introduction says agents tend to visit their own familiar observations it seems that this has a negative impact on the exploration by an arbitrary agent authors should include more discussion 4 section 33 a more succinct way is to say that assigning the intrinsic reward to the specific agent bypasses the multiagent credit assignment problem 5 in section 33 it is not clear what the authors mean by saying that the individual value in qmix has no actual meaning please be more precise 6 at the end of section 33 the authors say that due to offpolicy learning pphiio is recomputed for each sampled batch is it recomputed purely because the current classifier is presumably more accurate is there any potential problem with doing so in the context of offpolicy learning 7 all plots need thicker line width including the legend colors need to be more distinguishable from one another currently it is hard to see which curves correspond to which algorithms edit score increased from 5 to 6
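the classifier-based intrinsic reward that the reviews describe (a classifier p(i | o_i) predicts which agent produced an observation, and each agent's reward is shaped as r + alpha p(i | o_i)) can be sketched as follows; this is an illustrative stand-in with made-up sizes and it omits eoi's two regularizers and the full training loop

```python
# Minimal sketch of a classifier-based intrinsic reward for agent individuality.
# Network sizes, alpha and the update schedule are assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_agents, obs_dim, alpha = 3, 10, 0.5
classifier = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_agents))
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def shaped_rewards(obs, env_rewards):
    # obs[i] is agent i's observation; p_own[i] = p(i | o_i), the classifier's probability of the true agent
    probs = F.softmax(classifier(obs), dim=-1)
    p_own = probs[torch.arange(n_agents), torch.arange(n_agents)]
    return env_rewards + alpha * p_own.detach()        # r + alpha * p(i | o_i), per agent

def update_classifier(obs):
    # supervised step on (observation, agent id) pairs: which agent saw this observation?
    loss = F.cross_entropy(classifier(obs), torch.arange(n_agents))
    opt.zero_grad(); loss.backward(); opt.step()

obs = torch.randn(n_agents, obs_dim)                   # one observation per agent at this timestep
print(shaped_rewards(obs, torch.zeros(n_agents)))
update_classifier(obs)
```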
### Summary:
|
this paper introduces a method to increase diversityindividuality of agents in a marl setup based on intrinsic rewards coming from a classifier over behaviours reviewers tend to agree that this is an importantinteresting problem which is related to exploration a central problem in reinforcement learning several reviewers point out that the paper is well written i appreciate that the authors have been responsive to reviews and have answered andor addressed several points of concern of the reviewers the proposed method performs well on the experiments carried out reviews still point out several things that could be improved the experiments mostly report reward curves and only a few results actually clearly point out the individuality between agents the fact that this method outperforms the baselines is good but does not prove individuality and may simply be due to the authors spending more time on the tasks or other undiscovered phenomenon a reviewer is concerned that this extra reward could encourage trivial behaviours and it seems clear that it will if the relative weight of the intrinsic reward is too high this should be discussed more finally a reviewer points out that classifierbased intrinsic reward for diversity already exists in published works and that this paper is incremental work the average score for this paper is very close to the acceptance threshold but based on the reviews i recommend rejecting this paper for iclr 2021 i am confident that when the authors address further the reviewers concerns and improve the experimental results this paper will be published in a future venue
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper introduces nasbench360 which evaluates different architectures and nas methods on in total 10 different tasks this evaluation shows that different nas methods which perform very well on computer vision tasks do not transfer with a high performance to other domains this paper also evaluates the architectures from the nasbench201 search space on 2 diverse tasks showing that these architectures do not generalize well lastly this paper compares tenas a zeroshot search method with gaea on several tasks resulting in varied performance over these considered diverse tasks this paper tackles an important fact that nas methods should be evaluated on different domains to indeed show superiority of a proposed nas method the experiments underline the importance of these tasks and the claimed contribution most of the selected tasks are reasonable and sufficiently different for these evaluations the paper is well written and clearly structured the provided code and url are clear the main concern is that this paper claims to introduce a benchmark whereas this paper mainly introduces an evaluation of different nas methods on different tasks a benchmark should provide detailed information about the found architectures and their performance in their provided documentation in the best case also the learning curves or the weights for future research some of the considered datasets only have a small size especially the datasets which are used for the nasbench201 evaluations docsepthis paper presents a benchmark suite which is a collection of datasets tasks to enable evaluation of different search spaces and search algorithms the datasets also include precomputed performance of nasbench201 architectures on two of the tasks based on the tasks collected the paper presents experimental results of several nas algorithms on multiple search spaces and tasks the authors conclude that the analysis by previous papers on computer vision tasks does not generalize to diverse tasks in addition the zerocost proxies perform inconsistently as well a diverse collection of tasks with scripts provided to evaluate and reproduce results of different nas algorithms and search spaces comprehensive experimental results showing the shortfall of existing algorithms and search spaces on generalization as the authors already mentioned this benchmark is an organization of tasks such that users can evaluate different nas algorithms and search spaces since it is not a precomputed tabular benchmark i wonder if this work can be called a benchmark also i am interested to see if the authors have provided an api or standardization to allow people to contribute new search algorithms search spaces and precomputed data docsepthis work presents a benchmark for evaluating nas methods on different tasks this work has done more evaluation and analysis however similar work already exists and some of their tasks are more reasonably selected 1 this article is very well organized and written 2 the article gives a more detailed analysis and experiments the biggest problem is that the idea is similar to transbench by huawei in cvpr 1 1 the multitasking selection of this work seems less scientific compared to transbench 2 lack of evaluation of oneshot nas zeroshot nas 3 this work has actually been proposed for some time but no other nas work has been reviewed on it 1 transnasbench101 improving transferability and generalizability of crosstask neural architecture search transnasbench101 docsepthis work raises a concern about recent nas research overfitting on computer vision
cv tasks and attempts to alleviate the issue by providing nas researchers a benchmark that assesses their works viability for tasks beyond widely popular cv tasks and instead for tasks generally conducive to the use of convolutional neural networks instead the primary contribution of the authors is a collection of tasks including datasets and performance metrics which they have named nasbench360 that aims to provide a standardized baseline for comparison of any nas algorithms performance across multiple task domains a secondary component of their work is a dataset of precomputed neural network configuration performances for two of the 10 tasks included nasbench360 in the same vein as and supplemental to that provided by nasbench201 on 3 cv tasks the authors use their proposed benchmark to evaluate a number of nas search spaces and algorithms in order to demonstrate that many existing stateoftheart works in the field fail to generalize well to tasks other than cv they also present an analysis of the viability of using nas for researchers on constrained budgets working on specific research tasks that could potentially benefit from employing nas but do not necessarily need to currently some further analyses have been mentioned in the paper but are missing from the appendix overall a well written coherent paper which makes a valuable contribution to the scientific community once the missing analyses have been added in i would be happy to increase the rating of the paper correctly identifies and addresses an issue with existing nas research namely overfitting on a handful of cv tasks sensible and valid analyses of the presented work and the proposed hypotheses opensource code has been made available for facilitating reproducibility benchmarkdataset hosting appears reliable and the licenses are open the paper has been written well and is easy to understand there are some incomplete and missing analyses in the paper unless i somehow missed it the analyses mentioned in section 522 that should have been present in the appendix were absent there is room for improvement by conducting an internal validation of the diversity of the chosen tasks thereby justifying their applicability to the experiments conducted here ie to testing how badly nas algorithms generalize the expert models mentioned in appendix d would have made for a very decent baseline for demonstrating this by illustrating the drops in performance in an expert model when it is used for a task that it has not been designed for i must admit however that i am unaware of the computational costs involved in conducting such an analysis the authors have a very rich dataset at hand which could have been used for a much more interesting and indepth statistical analysis than has been presented missing out on a lot of potential for the paper going beyond comparing only the final performances of nas algorithms across tasks but also including an analysis of their behaviours in multiple dimensions see additional feedback for an example docsepthe paper proposes nasbench360 nb360 a diverse set of tasks and a collection of metrics for neural architecture search nas nb360 extends beyond popular computer vision benchmarks and some precomputed performance is provided for two tasks the paper empirically shows inconsistent performance of several modern nas algorithms across tasks of nb360 demonstrating its usefulness in evaluating the robustness of nas methods 1 the question of how well nas algorithms generalize across tasks is an important one for both 
practical applications and scientific research 2 the dataset covers a range of applications dimensionality learning objectives as well as scales 3 code data models seeds and environment are released to facilitate reproduction 4 empirical results show many modern nas methods are not robust as they perform inconsistently across tasks it identifies an important limitation of many modern nas methods which may facilitate future research to address them 5 resourceconstrained experiments provide some potentially counterintuitive insights for practitioners that they may be better off using a fixed handcrafted domainspecific model 6 the paper discusses some limitations of the work and provides some insights regarding future improvement 1 perceiver io needs a certain scale to perform well while one can argue there is value in testing it at a very small scale it is not entirely a fair comparison 2 it is helpful to spell out that the aws p32xlarge has the 16gb version of v100 3 natural language processing nlp is another important and popular field for modern machine learning which includes nas research is there a good argument for not including nlp tasks 4 figure 4 left panel is misaligned
### Summary:
|
nasbench360 enables the evaluation of nas methods across a diverse set of tasks and domains and the authors experiments demonstrate that nas methods are not generally robust across all domains/tasks the authors also make a strong argument that a nas benchmark should be a combination of a search space and a search algorithm challenging preconceived notions of what a benchmark is for this research community existing precomputed tabular benchmarks do not necessarily allow for innovation of the search space overall the benchmark and evaluations behind nasbench360 will benefit the broader research community and are a significant improvement over existing work the authors should consider adding nlp tasks to strengthen the paper
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors contribute to the recent research on whether neural network training in particular sgd favors minimal representations in which irrelevant information is not represented by deeper layers they do so by implementing a simple neuroscienceinspired task in which the network is asked to make a decision by combining color and target information importantly the networks output is conditionally independent of the color information given the direction decision so the color information is in some sense irrelevant at the later stages using this the authors quantify the relevant and irrelevant information in different layers of the neural network during training interestingly the authors show that minimal representation are uncovered only if the network is started with random initial weights information is quantified using a simple decoder network the article is clearly written and has a simple in a good way and interesting message however i also have some criticisms especially regarding the conceptual underpinnings when any neural network is predicting a deterministic function f x y all input features are irrelevant to the output distribution when conditioned on the output itself in other words the minimal representation in a deterministic task is simply the output itself the situation is different when the task involves predicting a nondegenerate probability distribution pyx in which case the minimal representation ie the sufficient statistics can have an arbitrary amount of information in the information bottleneck community this was mentioned in httpsarxivorgpdf170300810pdf section 24 and explored in httpsarxivorgabs180807593 in motivating the paper the authors appear to confuse two types of irrelevant features 1 when an input feature is useless for prediction ie changing it does not change the predictions and 2 when information about an input feature is independent of the output distribution when conditioned on the output for a deterministic prediction task all features type 2 but not all features are type 1 the authors have the following text we believe this task captures key structure from deep learning tasks for example in image classification consider classifying an image as a car which take on various colors a representation in the last layer is typically conditionally independent of irrelevant input variations ie the representation does not change based on differences in color if i understand the example this is building off the intuition that color of car is irrelevant because it is a type 1 feature not useful for prediction in fact it can be conditionally independent because it is type 2 moreover in the authors task color of checkerboard is not type 1 it is very relevant for the output changing it changes the output but it can also be conditionally independent since it is type 2 given the above arguments the degree to which features are conditionally independent in middle layers does not necessarily reflect how useful they are for prediction i have two other more minor comments 1 the notion of direction information is somewhat confusing as one can think about two kinds of direction information 1 information about which targets ie directions correspond to which colors which is provided as part of the input and 2 information about the final reaching direction ie the output given the points made above if i understand correctly information about which targets correspond to which colors is just as irrelevant as the color information when conditioned on the output i would suggest 
referring to the second kind of information the one mainly discussed in the paper as output information 2 the authors should probably cite and may be interested in httpsarxivorgabs200912789 neuroips 2020 which also proposes to estimate mutual information using a practical family of decoders docsepthis work introduces a notion of usable information in neural network representations essentially decodability of information by a neural network and suggests that learned representations are minimal discard taskirrelevant information when training begins from random initialization but not necessarily when beginning from other initializations pros the definition of usable information is reasonable and likely useful for future analyses though see below the questions addressed by the paper are interesting important and are in need of thorough empirical study cons the tasks used in this paper are very simple with evenodd mnist classification being the hardest task considered and most analysis is conducted on an even less complex task which is similar to a simple xor it is very hard to know whether the papers conclusions would generalize to tasks of interest to the machine learning community or even to other simple tasks with different structure it is not clear from the pretraining experiments whether the negative results are due to pretraining or just the scale of the weights if networks were initialized randomly with mean std taken from the pretrained network would they also not learn minimal representations it seems that the results of the paper must necessarily depend on several hyperparameters which were not explored for instance if the learning rate in early layers is set sufficiently small the network should learn these simple tasks without minimal representations the result about generalization correlating with minimality was not confirmed on mnist it is not clear whether this is because the result does not hold on mnist or simply because the authors did not test it transfer learning is known to be helpful in some practical settings is this framework able to account for situations when transfer might be helpful as well as harmful more discussion of this is needed overall given that this is an empirical paper no new theory is provided it is important for the experiments to be extensive and comprehensive the experiments in this paper though they touch on interesting ideas are not thorough enough to convince a reader of the authors broader claims docsepbroadly this work is an attempt to understand how neural networks can form generalizable representations while being severely overparameterized this work proposes an information theoretic measure called the usable information and use it to quantify the amount of relevant information in different layers of a neural network during training the key idea is that in order for the information represented in one layer to be usable by the next layer it should be decodable by a simple transformation affine elementwise nonlinearity pros significance the usable information is a variant of mutual information which replaces the expensive conditional entropy term with a crossentropy loss that is more readily computable from a neural network a computationally efficient measure for information has a potential for being a generally useful tool in the broad community quality i like the approach of this study which resembles how natural science tries to understand the function of a complex system such as the brain ie the real neural network empirically the choice of 
the task was also appropriate it provides a good intuition about the relevant vs irrelevant information as well as a bridge to neuroscience studies that may lead to insightful discussion clarity the paper is clearly written and easy to follow in most places cons originality one thing i expected to find in the paper was some review of other information theoretic measures that were proposedused in the context of neural networks proposing a cheaper alternative for the mutual information itself cant be a new idea although if it is that would be worth noting too it would be fair to include a discussion along this line and perhaps point out the properties of usable information that makes it particularly appealing clarity the presentation of fig 4 is not clear to me i which plots belong to which axis and what is the third curve val in black ii a positive correlation with the minimality of the representation and generalization performances this sounds vague can you quantify additional comments does sgd always lead to equivalent representations or does sgd trace a path through parameter space that leverages structure present in the initialization i dont understand this sentence in the introduction also related it would be nice to add a sentence or two to unpack the idea of implicit regularization through sgd in fig 2d why do the four marker types appear somewhat separated although not by a large margin eg red x green o green x then red o the observation about nonrandom initialization is very interesting in this example keeping the old information does not seem to compromise the performance of the network in the current task do you think this is generally true for neural networks or could there be a regime where retaining information about a previously relevant but no longer relevant information has an actual cost overall i think this is an interesting paper that presents a promising approach toward the understanding of how informative representations are formed by training one of the most fundamental questions in deep learning update most of my questionscomments are addressed in the revised version of the paper and the author responses i maintain my support for acceptancedocsepthe paper studies how initialization and the implicit regularization of sgd affect the training dynamics of neural networks in terms of minimality and sufficiency of learned representations the main findings are that 1 sgd with random initialization learns almost minimal and sufficient representations and 2 sgd with an initialization that contains information about irrelevant factors fails to converge to minimal representations increasing the chance of overfitting these findings are interesting useful for understanding neural networks relevant to the iclr community but lack evidence of generality task choices most of the experiments in this paper are done on the checkerboard task in this task one is given a checkerboard each cell of which is either red or green one of these colors appears more is dominant and the task is to decide which color is dominant the key aspect is that the input also contains 2 targets left and right with one being green and one being red instead of directly predicting the dominant color the subject should pick the target whose color matches the dominant color importantly the color arrangement of targets is picked at random the authors also consider the task of predicting whether an mnist digit is odd or even these two tasks are too simple which restricts the generality of conclusions i suggest to 
consider harder tasks for example classifying cifar100 images where the target is the superclass 120 and the irrelevant factor is the exact class 15 additionally it would be interesting to consider cases when the training data is such that there is a small mutual information between the irrelevant factor and the target will sgd with random initialization find a solution that has even smaller mutual information with irrelevant factors ie sacrificing sufficiency for minimality network throughout the paper only fully connected networks are considered additionally the last hidden layer always has 20 units for generality it would be better to consider also larger and more modern networks such as resnets activation function saxe et al 1 showed that the choice of activation function is crucial when judging about compression in late stages of training the presented paper can be improved by considering other choices of activation functions the role of sgd the implicit regularization of sgd arises from its stochasticity the findings of this paper suggest that this stochasticity has a key role in finding minimal representations this should be verified by comparing to standard gradient descent ie batch size number of examples minor comments 1 the description of the checkerboard task starting at the last paragraph of the first page can be improved 2 did you consider using a more powerful decoder in fig 2c for example we see that later layers have more usable information about the direction this means that there is a room for strengthening the decoder ps i am willing to increase the score if the authors address the above concerns about generality update thanks for the rebuttal it addressed my main concerns the new results on cifar10 and cifar100 with fine and course labels match the results on the checkerboard task this increases the generality of the main claims the new experiments also confirm that the level of noise in sgd has a key role in finding minimal representations furthermore they show that when training with sgd with enough amount of noise the usable information with fine labels increases initially and then decreases this improves our understanding of the phenomenon introduced by 2 which was later debated by 1 for these mentioned reasons i updated the rating from 5 marginally below acceptance threshold to 7 good paper accept references 1 saxe andrew m et al on the information bottleneck theory of deep learning journal of statistical mechanics theory and experiment 2019 2 ravid shwartzziv and naftali tishby opening the black box of deep neural networks via information corr abs170300810 2017
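To make the quantity debated across these reviews concrete, here is a minimal sketch of a decoder-based usable-information estimate: the label entropy minus the held-out cross-entropy attained by a simple decoder trained on the representation, which lower-bounds the mutual information when the decoder class is restricted. The synthetic data, the logistic-regression decoder, and the train/test protocol below are illustrative assumptions, not the paper's actual task, decoder network, or setup.

```python
# Minimal sketch: usable information of a representation Z about labels Y,
# estimated as H(Y) minus the held-out cross-entropy of a simple decoder.
# Data, decoder class, and splits are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "representation" Z that carries noisy information about binary Y.
n, d = 4000, 20
y = rng.integers(0, 2, size=n)
direction = rng.normal(size=d)
z = np.outer(2 * y - 1, direction) + 2.0 * rng.normal(size=(n, d))

z_tr, z_te, y_tr, y_te = train_test_split(z, y, test_size=0.5, random_state=0)

# Label entropy H(Y) in nats from empirical frequencies on the held-out split.
p = np.bincount(y_te, minlength=2) / len(y_te)
h_y = float(-np.sum(p[p > 0] * np.log(p[p > 0])))

# Cross-entropy of a simple (affine + softmax) decoder on held-out data.
decoder = LogisticRegression(max_iter=1000).fit(z_tr, y_tr)
ce = log_loss(y_te, decoder.predict_proba(z_te))  # natural log, i.e. nats

usable = max(h_y - ce, 0.0)  # lower bound on I(Z; Y) for this decoder class
print(f"H(Y) = {h_y:.3f} nats, decoder CE = {ce:.3f} nats, "
      f"usable information ~ {usable:.3f} nats")
```

In the setting the reviews describe, Z would be a hidden-layer activation and the decoder a small network (affine map plus elementwise nonlinearity); the estimate is then tracked separately for task-relevant and task-irrelevant variables during training.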
### Summary:
|
this paper proposes that we can understand the evolution of representations in deep neural networks during training using the concept of usable information this is effectively an indirect measure of how much information the network maintains about a given categorical variable y and the authors show that it is in fact a variational lower bound on the amount of mutual information that the networks representations have with y the authors show that in deep neural networks the usable information that is maintained for different variables during training depends on the task such that task irrelevant variables but not task relevant variables eventually have their usable information reduced leading to minimal sufficient representations the initial reviews were mixed a common theme in the critiques was the lack of evidence of the generalization and scalability of these results the authors addressed these concerns by including new experiments on different architectures and the cifar datasets leading one reviewer to increase their score the final scores stood at 3 7 7 7 given the overall positive reviews interesting subject matter and relevance to understanding learned representations in deep networks this paper seems appropriate for acceptance in the acs opinion
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this work provides the novel approach for searching flow architectures compared to the standard approaches used to find the best deep architecture in standard network networks this problem is more challenging due to the need of invertible transform and requirement for determinant of the jacobian to be easy to calculate to solve the problem the authors propose to apply the weighting for candidate transformations in order to enforce invertible properties of the model the authors suggest to use the mixing distribution approach instead of mixing the base transformations they formulate the problem of learning best weights and show that optimal solution for soft weights is not optimal for binarised versions therefore they propose to optimize the upper bound of the proposed loss function instead further they show how to deal with the problem with larger number of layers some experiments are also performed to show the quality of the approach with respect to the baseline that is expertbased selection strengths the problem considered in this work is novel and important for the community that works with flows the problem is significantly more challenging than standard architecture search the proposed solution is wellmotivated the theoretical claims seems to be correct the flow of the paper is easy to follow and each step of the proposed approach is justified the novelty and contribution of the paper is high in my opinion the idea of using mixing the probabilities and the application of upper bound instead of direct optimisation is nontrivial weaknesses the empirical evaluation of the proposed method should be more extensive the selection manual flow baseline is a bit tricky to be how far is the manual approach from the optimal combination of transformations on validation set it would be also beneficial to create the baseline where the transformation are selected randomly such a baseline would deliver the information what is the gain on nll with respect to the random approach i would suggest also to take under consideration evolution approach that optimize the binary vectors that stays behind the selection process as reference method my second concern is about the generalisation of the proposed approach to other types of flows the approach seems to be generic and scale to any possible transformations but the experimental evaluation is mainly focused on autoregressive flows is it possible to adopt that approach to various types of layers that represent the dynamics in cnf it would be also beneficial to see what is the quality of that approach for various cnf layers in experimental part the third concern is about the limitation of the approach due the fact that complexity grows exponentially and some decomposition methods are essential to apply architecture search effectively for this part it would be interesting how much we loose during the decomposition process concluding the problem consider by the authors is novel and worth investigating the approach seems to be adequate the evaluation is limited but i think the paper is worth accepting docsepin this paper the authors proposed to adapt a differentiable architecture search formulation liu et al 2019 based on learned weighting of an ensemble of modules to automated search for normalizing flow architectures the authors made several adaptations to the original approach for the normalizing flow problem due to the invertibility constraints that prevent direct linear summation of different transform operations furthermore the authors proposed to optimize 
the full network using an approximated upper bound of the kl divergence instead of directly optimization the authors proposed two methods to decompose the optimization problem grow method which is more straightforward and greedy and block method that alternatively adjusts each block the authors experimentally compared their proposed method with manually specified architectures across various datasets including power gas hermass miniboone and bsds300 the results seem mixed as the searched model outperforms the manual model in some contexts but not others pros overall the paper is well written and easy to follow the main idea is to adapt the differentiable formulation for the nas towards normalizing flow architecture optimization and the authors made interesting theoretical contributions for reformulating the mixtureofexperts setup towards normalizing flow models as well as for proposing novel optimization strategies overall the proposed method is sound and coherent cons the experimental evaluation seems relatively weak though the optimized architectures seem to consistently have lower train forward and inverse costs the test performance is mixed and in many scenarios worseperforming than the manually design architecture the search space that the authors experimented with is extremely limited limited to only planar flows and radial flow with the weights between the two transformations being the only architectural hyperparameter learned not accounting for other hyperparameters such as number of stacked flows the network complexity number of feature layers etc for each network which has little generalizability to more modern and useful architectures eg realnvp glow ffjord on the novelty side though the authors made problemspecific adaptations for the differentiable architecture search algorithm for normalizing flows the main idea is very much adapted from liu et al 2019 and somewhat marginally novel as a summary the overall novelty of the proposed approach is rather incremental the search space is too restrictive and limited to be practically useful for designing practical nf architectures only searching for a binary choice between two architectures not accounting for other more complex bijective transformations and the experimental results are insufficient to illustrate the usefulness of the proposed approach for improving nf architectures docsepthis paper proposes a dartslike method for searching automated normalization flow models instead of directly using the output ensembles which leads to infeasible flow models this work proposed distribution mixture to guarantee that the supernet is always a valid flow model the upper bound of the loss function is optimized jointly with resource constraints experiments on smalltomedium scale datasets valid the effectiveness of the proposed method strength 1 this is the first work using nas to optimize flow models 2 although the proposed method is based on darts it requires some efforts to make it work on flow models such as distribution mixture weakness 1 the paper is a bit difficult to follow if the reader is not very familiar with flow model especially the following things need to be further clarified in great detail a how did you get eq 7 from eq 6 in eq 7 wkprodiwi so the right hand side does not contain k at all b the proof of upper bound optimization should be further clarified it is hard to follow at least to me 2 the upper bound argument seems questionable while i could understand that we need to binarize the weight alpha in order to get a 
simple and valid flow model i still cannot understand why optimizing the jensens upper bound is a good or better idea at least from the experiments it seems that binarization is still a necessary step 3 the proof of proposition 1 seems questionable the second geq is not obvious 4 the experiments focus on density estimation problems it is somewhat insufficient i would expect more realworld applications and comparing to strong baseline methods 5 table 1 need improvements for example it is better to explicitly align with one cost and then compare the test score for now it is difficult to compare results since they have different costs 6 the proposed method almost directly follows darts although the distribution mixture is novel it is still more or less an incremental improvement to my best knowledge this work is the first work aiming to design more efficient flow models via a dartslike nas method however there are still many issues to be addressed this work can be made significantly stronger if the aforementioned issues are well addressed docsepin this paper the authors present an automated normalizing flownf architecture search method the method employs a mixture distribution formulation that can construct an optimal flow model with n layers of transformations from the transformation set besides the authors introduce a blockwise optimization method to deal with exponentially growing optimization complexity in the experiment the authors proved the effectiveness of the optimization method which via approximate upper bound and autonf has a better performancecost tradeoff than handtuned sota flow models positive points 1 in this paper the authors propose an automated normalizing flow architecture search method which can find the best distribution for each layer from a set of given distribution sequences 2 when constructing each layer of the model the authors used a weighted summation of probability density instead of each distribution to ensure the reversibility of the model and the simple calculation 3 the authors optimized the model via approximate upper bound instead of using kl divergence between the target distribution and the mixed flow so that the model can go out of the local minimum sssssss negative points 1 the novelty of this paper seems limited the authors directly apply nas techniques to the field of normalizing flow nf it would be better to clarify whether there are some technical contributions regarding to the search algorithm 2 there seem some errors in eqn 4 the mixed transformation should be the weighted sum of tij rather than ti 3 eqn 8 should use the symbol of expectation instead of directly using capital e 4 as mentioned in section 32 the global minimum may not be the desired architecture why can optimizing the upper bound find the desired architecture it seems that both cases suffer the same issue 5 since this paper is essentially a nas paper it is necessary to compare the proposed method with existing nas methods eg darts a enas b mnasnet c 6 the performancecost tradeoff seems to depend on a parameter lambda that needs to be manually adjusted thus the impact of lambda should be investigated in the experiment section reference a darts differentiable architecture search iclr 2019 b efficient neural architecture search via parameters sharing icml 2018 c mnasnet platformaware neural architecture search for mobile cvpr 2019 it is unclear the essential technical contributions compared to existing nas methods
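Several of the reviews above question the step of optimizing a Jensen upper bound on the mixture negative log-likelihood instead of the mixture NLL itself. The sketch below only illustrates the relationship between those two objectives on toy data, using two trivial one-dimensional affine flows as stand-ins for the planar/radial candidates and softmax architecture weights; it is not the paper's search space, transformations, or optimization procedure.

```python
# Numerical sketch of the two objectives discussed in the reviews:
# the exact mixture-of-densities NLL,  -log sum_k w_k p_k(x),
# versus its Jensen upper bound,       -sum_k w_k log p_k(x).
# The two candidate "flows" are trivial 1-D affine flows of a standard normal
# base distribution, purely illustrative stand-ins for planar/radial flows.
import numpy as np

def affine_flow_logpdf(x, shift, scale):
    # Density of x = scale * z + shift with z ~ N(0, 1), via change of variables.
    z = (x - shift) / scale
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi) - np.log(scale)

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=5000)           # "data"

alpha = np.array([0.3, -0.1])                            # architecture logits
w = np.exp(alpha) / np.exp(alpha).sum()                  # softmax weights

log_p = np.stack([affine_flow_logpdf(x, 0.0, 1.0),       # candidate flow 1
                  affine_flow_logpdf(x, 1.0, 2.0)])      # candidate flow 2

# Exact mixture NLL: density mixing keeps every component a valid flow.
mix_nll = -np.mean(np.log(np.dot(w, np.exp(log_p))))

# Jensen upper bound: weighted sum of the per-component NLLs.
upper_bound = -np.mean(np.dot(w, log_p))

print(f"mixture NLL        = {mix_nll:.4f}")
print(f"Jensen upper bound = {upper_bound:.4f}  (always >= mixture NLL)")
```

The gap between the two numbers is exactly what the binarization discussion is about: at a pure (one-hot) weight vector the bound is tight, which is the informal reason given for optimizing the upper bound.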
### Summary:
|
this paper proposes two methods to learn the architecture of normalizing flow models their framework is inspired by liu et al 2019 which uses ensembles/mixtures with learnable weights for architecture search the application of these ideas to nfs requires a trivial modification to respect the invertibility constraint which consists in building a mixture model over all possible sequences of compositions of transformations from a fixed set the paper proposes to use an upper bound to the forward kl instead of the fkl directly the reasoning is that this will lead to a pure model after optimization that is the mixture weights will be in {0, 1} mathematically this simply corresponds to treating the mixture as a latent-variable model and performing map inference over discrete latent variables assuming that all mixture components have the same prior weights in the mixture the experimental results across various datasets are very mixed and the family of transformations considered in the experiments is quite restricted
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposes a normalization method for monocularbased depth estimation before computing the regression loss it is based on scaleandshift invariant loss instead of computing the median and doing the normalization globally it split the depth map into multiple size patches and compute scaleandshift invariant loss for each patch they propose two strategies to generate the patches consisting of spatial domain splitting and depth domain splitting in the zeroshot setting the results show the model with the proposed normalization can outperform all baselines 1 this paper is well written it is easy to read 2 the proposed idea is clean and simple we can easily apply it to other depth estimation models 3 the improvement on the zeroshot monocular depth estimation task is significant weakness 1 table 1 does not show the ablation when combining the different domains it is unclear if combing all of them can get the best performance 2 the proposed method can be applied to all monocularbased models have the authors tested the loss on fully supervised monocularstereo depth models this paper only presents the results in a zeroshot setting 3 the proposed method is very simple so the contribution might be limited to improve the contribution it is important to show the generalization of different models and datasets none docsepthis paper presents a novel normalization technique for learning monocular depth estimation networks with various sources of datasets that may have different depth scales unlike the previous approach that learns with the imagelevel normalization which often disregards the finegrained depth difference this paper presents a multiscale depth normalization that hierarchically normalizes the depth representations based on both spatial information and depth distributions experimental results have shown the superiority of this method even though the technique is relatively simple the idea of using multiscale normalization inspired by conventional normalization methods such as alexnet sift and hog is very simple but effective dividing the normalization candidates as spatial and depth axis is interesting and makes sense the stateoftheart performance is attained the authors well mentioned the limitations of this paper docsepthe authors introduce a novel strategy for monocular depth estimation that focuses on both global structure and finegrained details their implementation is effective at preserving finegrained details via the spatial and depth domains using hierarchical normalization ultimately performing well in terms of accuracy even against stateof art benchmarks from before our work was published an interesting solution to a renowned problem in monocular depth estimation clearly motivated by the shortcomings of existing sota the presentation is nice for the most part the language is okay but should be further improved if accepted for publication weaknesses seems the authors ran out of space with many means to improve figures and tables to increase the datatoink ratio figure 34 why not share x and y axes labels and make graphs easier to see in less space add future work to the limitations section be consistent with style different tables use different schemes some use arrows others use colors avoid orphans single words taking up entire lines like line 205 lack of depth in analysis for the many ablations conducted a lack of limitations in the ability to be extended to mvs lack of demonstrating the final reconstruction results just depth images depicted docsepthe paper studies zeroshot 
monocular depth estimation one way to improve zeroshot performance is to train on a combination of datasets the problem with this strategy is that different datasets has different scales and shifts one way to address this issue is to train with a scale and translation invariant loss ssi 22 this paper argues that the ssi loss is normalized over an entire image a more effective way is to normalize over a small local window or a certain depth range this paper thus propose a hierarchical depth normalization training loss which helps their model to achieve stateoftheart performance in zeroshot depth estimation on five major benchmarks strength this paper proposes a simple yet very effective version of the scaleshiftinvariant loss for training depth estimation network this loss proves to help in improving zeroshot depth estimation table 1 i like this idea weakness it is not very clear if the improvement really comes from the proposed hierarchical depth loss table 2 the ours model differs from sotas in the training data the number of training samples and the network architecture the conclusion would be more solid if only one thing is changed at a time for example it is unclear if it is fair to claim that our method outperforms previous methods by a large margin on multiple benchmarks with fewer training data than recent stateoftheart methods such as leres 33 and midas 22 because it could be a result of using higher quality training data the authors have adequately addressed the limitations and potential negative societal impact of their work
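For concreteness, here is a minimal sketch of the patch-wise scale-and-shift-invariant normalization idea described in these reviews: normalize prediction and ground truth within local windows (median shift, mean-absolute-deviation scale) and average an L1 loss over windows. The patch size, the single spatial level, and the synthetic depth maps are illustrative assumptions; the actual method also uses multiple window scales and a depth-domain splitting.

```python
# Minimal sketch of one spatial level of a hierarchical SSI depth loss.
# Patch size and data are illustrative; this is not the paper's full scheme.
import numpy as np

def ssi_normalize(d, eps=1e-6):
    # Scale-and-shift-invariant normalization: subtract the median,
    # divide by the mean absolute deviation (MiDaS-style).
    t = np.median(d)
    s = np.mean(np.abs(d - t)) + eps
    return (d - t) / s

def patchwise_ssi_loss(pred, gt, patch=32):
    # Average L1 loss between per-patch SSI-normalized prediction and ground
    # truth, over non-overlapping patches (one "spatial domain" level).
    h, w = gt.shape
    losses = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = ssi_normalize(pred[i:i + patch, j:j + patch])
            g = ssi_normalize(gt[i:i + patch, j:j + patch])
            losses.append(np.mean(np.abs(p - g)))
    return float(np.mean(losses))

rng = np.random.default_rng(0)
gt = rng.uniform(1.0, 10.0, size=(128, 128))      # hypothetical depth map
pred = gt + 0.1 * rng.normal(size=gt.shape)       # hypothetical prediction

global_loss = float(np.mean(np.abs(ssi_normalize(pred) - ssi_normalize(gt))))
local_loss = patchwise_ssi_loss(pred, gt, patch=32)
print(f"image-level SSI loss: {global_loss:.4f}")
print(f"patch-level SSI loss: {local_loss:.4f}")
```

The point of the local variant, as the reviews summarize it, is that normalizing within small windows or depth ranges preserves fine-grained depth differences that a single image-level scale and shift would wash out.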
### Summary:
|
this paper addresses the problem of training a monocular depth estimation network from variable sources of data as opposed to only using a single scaling factor as in existing work the authors propose local schemes for normalising while the proposed approaches are conceptually simple they result in a nontrivial boost in performance both qualitatively and quantitatively and will likely be of interest in the field of monocular depth estimation the reviewers were broadly in support of this paper this area chair agrees and recommends acceptance however the authors are strongly encouraged to incorporate the valuable comments and suggestions from the reviewers into the revised text minor comments fig 2 a is not clear and should be revised to make it clearer what it is trying to communicate retitle section 51 to limitations add the new results for nyuv2 the two supplementary videos are not very informative should consider using different examples
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper considers the natural class of algorithms namely aggregators with gaussian noise for distributed sgd with differential privacy dp and byzantine resilience br previous results shows vn rightarrow br rightarrow convergence of sgd the authors first show that aggregators with gaussian noise algorithms satisfy dp but violates vn necessarily so approximate vn is proposed theorem 2 shows approximate vn rightarrow convergence proposition 2 shows the above algorithms satisfies approximate vn with certain parameters with the combined bound corollary 1 the authors observe and then verify by experiments that larger batch size is beneficial and in particular more beneficial than when dp or br is enforced alone the paper is clearly written and provides a reasonable solution to this very interesting question however i do have several comments 1 i wonder if there is any difference between the byzantine model in the paper and the adversarial contamination model in the robust statistics community see papers cited below in my opinion they are the same as in both models a fraction of data are allowed to be arbitrarily changed in particular the interesting part of byzantine seems to be lost since the aggregator is always trustworthy 2 if they are the same then i would like to see a modular solution based on for example the following aggregating the gradient accurately implies sgd convergence starting from here features like robustnessbr assuming the answer to my q1 andor dp can be added to the aggregator there are robust and accurate aggregators with much nicer theoretical guarantee eg this paperhttpsepubssiamorgdoiabs10113717m1126680 theres actually a big line of work there is also accurate robust dp aggregatorshttpsarxivorgabs210209159 based on what is presented in the paper i dont see a necessary interplay between robustnessbr and sgd convergence so i think a modular solution is possible and better potentially encouraging deeper observation for practice in particular the aggregators used in the paper seem to be variants of geometric medians and other versions of medians which is one of the central objects of robust statistics the adaptation seems to be rooted in the vn condition which doesnt seem widely used as claimed in the paper almost all references of it are written by roughly the same group of authors the notion of alphafbr also seems to be restricted to the same group following this line of work may not be the best approach and i highly recommend the authors to explore the connections and make use of existing powerful results rather than reinventing suboptimal wheels of course all these comments rely on the answer to my q1 which i might have misunderstood i would love to hear the reply and change my rating correspondingly 3 its unclear to me why rhs of the bound in corollary 1 is decreasing in b the relevant part in this bound looks like lnbc1b2 i tested with a few different c and some make it increasing while others make it decreasing it is important to justify this point since it is one of the most important practical implications of the theory in this paper not recommended as its current form since it seems to have missed a deeper connection and hence a much nicer solution the batch size observation also doesnt seem to be firmly supported however the question is interesting so i look forward to its future form docsepthis paper combined differential privacy and byzantine resilience in the distributed sgd algorithm the authors provide a simultaneous theoretical guarantee of dp and br by 
retuning the algorithm both theoretical results and numerical experiments are conducted to show the effectiveness strengths this paper is well organized and easy to follow the targeted problem is interesting and important the theoretical contribution of this paper is providing the interplay between dp and br weakness 1 in assumptions 13 it requires the empirical loss function q to be differentiable is that possible to extend current results for nonsmooth loss say the absolute derivation loss 2 as it has been claimed in the abstract by carefully retunning the learning algorithm how to choose the learning rate gamma1dotsgammat is the proposed method robust with a certain range of learning rates 1 in assumptions 13 it requires the empirical loss function q to be differentiable is that possible to extend current results for nonsmooth loss say the absolute derivation loss 2 as it has been claimed in the abstract by carefully retunning the learning algorithm how to choose the learning rate gamma1dotsgammat is the proposed method robust with a certain range of learning rates docsep the paper studies a federated learning setting combining differential privacy dp and byzantine robustness br the paper shows that the variance norm condition for byzantine resilience needs to be relaxed as we can expect when dp is added and then shows an adapted theorem which allows for both dp and br the paper also presents a number of preliminary experiments and some interesting open questions the paper is wellwritten and understandable the result is novel but not very surprising i didnt check every detail but at a high level the results seem correct detail in algorithm 1 please explain the meaning what and importance why of without replacement in the sampling depending on the distribution d this concept may be easier or more complicated to interpret the paper is mostly sound and brings a contribution even if not very surprising docsepthe paper considers the problem of distributed learning via sgd when a fraction of workers are byzantine and the rest want their data to be kept private the authors consider a genericnaiive combination of byzantine resilience br and differentially privacy dp in algorithm 1 they show that the vn condition is incompatible with gaussian noised sgd but propose an approximate vn conidtion that can be realized by noisy sgd they then establish a convergence guarantee for the algorithm and conduct numerical experiments strengths proposition 1 and especially proposition 2 are interesting the problem is interesting weaknesses general points measuring privacy loss in terms of privacy loss per iteration is not meaningful privacy loss accumulates so it is very unusual and not informative to measure the loss of each step instead epsilon should capture the privacy loss of the full algorithm t rounds to quantify the risk of inference attacks on the model the significance of the vn condition is not adequately explained how exactly is this condition relevant to br the convergence results are not very interpretable seem to have incorrect units writing needs a lot of work specifics 0 abstract rendered invalid when workers enforce dp is not accurate for two reasons you only show this for a particular dp protocol namely the gaussian mechanism showing it fundamentally for any dp mechanism would be much more interesting also what you show is in terms of the vn condition which is not explicitly connected to br 1 introduction trusted server and honest but curious server are both used to describe the set up which 
is confusing usually trusted server means honest and not curious ie not a privacy threat byzantine should be informally defined or briefly described early on before it is used many times dwork et al 2014 is a strange choice for citation for a sentence about private ml there are many other more relevant works from 2014 to present also especially when considering neural networks does not seem to add anything and im not sure i agree that neural nets are special in this sense abadi et al 2016 is the wrongincomplete citation for dp sgd bassily et al 2014 and song et al 2013 considered dp sgd earlier theorem informal should state conditions on loss need to define or informally explain at the very least approximated vn before using it in a theorem paragraph following theorem is too much detail for not having defined vn parameters have very little impact in most settings when considering dp or br separately is not clear figure 1 is the number of iterations fixed 2 problem settign common dataset what does this mean are the m points dividedpartitioned among n workers or all workers have access to d by far the most widely used approachis the differentially private version strong claim made without any evidence also there are many ways to provide dp besides gaussian noise so there is no single the dp version of sgd we are mainly interested inperstepprivacy why this is not nearly as meaningful if t goes to infinity then essentially the algorithm provides no privacy at all but your epsilon might still be small which is very misleading def 2 explain intuition provide an example of a br gar section 3 algorithm 1 clarify the presentation when you loop through honest and byzantine workers it seems as if the analystcurator who is implementing the algorithm knows which workers are honest and not which is clearly not the case contextualize theorem 1 the privacy properties of gaussian mechanism and subsampling are wellknown so this theorem is not at all novel this should be stated also the log term in the denominator can be tightened can take s2 approx c2 log1deltaepsilon2 m2 whats the significance of vn condition how does it relate to br when epsilon and delta are nonzero strange sentence because s increases as epsilon and delta decrease theorem 2 why doesnt kappa appear units appear to be wrong should provide comparison to aliastarh allenzhu li 2018 and the references therein also should compare to dp optimization rates corollary 1 misleading because epsilon is not the actual privacy budget of full tround algorithm so first term should also scale with t this remark applies to experiments too the paper makes some good progress towards an understanding of dp and br in sgd via proposition 1 and 2 however the convergence results are not very clean and not properly contextualized the writing is rather poor and the epsilon issue is a big one i cannot recommend acceptance in the current form
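As a rough illustration of the algorithm class debated above (robust gradient aggregation combined with Gaussian noise), the sketch below clips per-worker gradients, aggregates them with a coordinate-wise median, and adds Gaussian noise. The clipping bound, noise scale, and choice of aggregator are placeholder assumptions for illustration only; they do not reproduce the paper's aggregation rule, its privacy calibration, or any formal (epsilon, delta) guarantee.

```python
# Rough sketch of the algorithm class discussed above: clip worker gradients,
# aggregate with a Byzantine-resilient rule (coordinate-wise median here),
# then add Gaussian noise. All constants are placeholders, not DP-calibrated.
import numpy as np

def clip(g, bound):
    norm = np.linalg.norm(g)
    return g * min(1.0, bound / max(norm, 1e-12))

def robust_noisy_aggregate(grads, clip_bound, noise_std, rng):
    clipped = np.stack([clip(g, clip_bound) for g in grads])
    robust = np.median(clipped, axis=0)               # Byzantine-resilience step
    noise = rng.normal(scale=noise_std, size=robust.shape)
    return robust + noise                             # Gaussian-noise step

rng = np.random.default_rng(0)
dim, n_honest, n_byz = 10, 8, 2
honest = [rng.normal(loc=1.0, scale=0.1, size=dim) for _ in range(n_honest)]
byzantine = [np.full(dim, 100.0) for _ in range(n_byz)]  # adversarial inputs

agg = robust_noisy_aggregate(honest + byzantine, clip_bound=5.0,
                             noise_std=0.3, rng=rng)
print("aggregated gradient (close to the honest workers' mean of ~1):")
print(np.round(agg, 2))
```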
### Summary:
|
the paper considers the natural class of algorithms namely aggregators with gaussian noise for distributed sgd with differential privacy dp and byzantine resilience br previous results show vn → br → convergence of sgd the authors first show that aggregators with gaussian noise algorithms satisfy dp but violate vn necessarily so approximate vn is proposed theorem 2 shows approximate vn → convergence proposition 2 shows the above algorithms satisfy approximate vn with certain parameters with the combined bound corollary 1 the authors observe and then verify by experiments that larger batch size is beneficial and in particular more beneficial than when dp or br is enforced alone in the formulation an important baseline of robust mean aggregation diakonikolas kamath kane li moitra stewart 2016 and an even more relevant baseline of robust and dp mean aggregation liu kong kakade oh 2021 are somehow missing one would assume that directly applying these wellknown techniques might give the desired dp and robust sgd the field at the intersection of differential privacy and robustness has evolved quite a bit recently and tremendous technical innovations are happening given the relevance of the proposed problem to this line of work one should make the connections precise and explain the differences
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper considers the online risk minimization setting to reduce the memory cost and accelerate training this paper proposes a new sketching algorithm ie storm based on the locality sensitive hashing lsh technique which mainly includes two steps sketching and training in particular it provides two storm surrogate losses for linear regression and classification respectively besides it also provide theoretical analysis to justify the use of sketching to estimate the surrogate loss with high probability experimental results illustrate the effectiveness and efficiency of storm over other baselines here is the detailed comments pros 1 overall i think this paper is wellwritten 2 in experiments the proposed method storm achieves promising performance although the datasets and models are simple 3 theoretical analyses are provides to justify the use of sketching to estimate the surrogate loss with high probability cons 1 the technique novelty is limited since previous work coleman shrivastava 2020 has used lsh to approximate kernel density estimation on streaming setting 2 for the proposed storm there are so many approximations sketching be lsh surrogate loss function approximate the original loss and heuristic approximate optimization problems besides it is specialized for the linear model thus these features makes it complex and not general for other problems and models 2 why to prefer an asymmetric lsh over the symmetric one please give more specific explanations in the online learning setting 3 why to use these surrogate loss function although authors give intuitive explanation more formal theoretical supports might be better moreover in my opinion the proposed surrogate loss function can also be used in the original data without sketching experiments can be done to see whether it is specified for this problem and can justify the effectiveness and efficiency of lsh sketching more fairly by comparing them 4 this paper lacks theoretical analysis about generalization and the convergence property of the heuristic optimization algorithms 5 it also lacks the experimental results in the streaming setting wrt the time step minor comments this paper seems a bit rushed since there are many typos for example in sec 21 the head of the second line should not be mathcald besides the reference format seems strange overall i think while the novelty is limited the proposed method is complex with many nonclear approximations which makes it not general docsepthe paper proposes an online sketching algorithm storm for empirical risk minimizationparticularly for linear regression loss functions which only needs to save the integer count values in the sketch the author also gives an empirical evaluation with ridge regression and classification the paper proposes an online sketching algorithm storm for empirical risk minimizationparticularly for linear regression loss functions which only needs to save the integer count values in the sketch particularly it uses the localsensitive hash functions to divide the space into several regions and save the integer counts values over the regions for the data points then the author shows that how the count values related to a class of the surrogate losses for the ell2 loss and how to evaluate a hypothesis htheta using such sketch and how to optimize the theta finally the author gives an empirical evaluation with ridge regression and classification overall i think the method the paper proposes is interesting and the author provides the corresponding analysis the paper is 
wellwritten and easy to follow up the empirical results also show the advantage of this method however i have the following concernsquestions 1 the conclusion section and the experiments are focusing on the regularized erm problems while most of the theoretical analysis seems to be for the nonregularized version is the analysis still hold for the regularized erm case 2 the relatedwork section of the linear sketch seems to be incomplete and inaccurate in my understanding the sketchandsolve technique also holds for the ridge regression for the case when n d see eg the theorem16 of 1this paper also studies the case when n d but it seems can not be supportive for the streaming setting i think it should be added to the experiments as a baseline if i do not have some misunderstandings 3 i am a little confused about the theoretical advantage of this method in my understanding for the case when n d the linear sketch can get a good approximation in roughly od2 space while storm needs on2 hence the advantage of the storm is for the case when n d is the above statement correct 4 i am curious about the empirical comparison for the nonregularized version and i think it can be a good supportive partnot necessary because the theoretical analysis also holds in this case and we will have more baselines at this time reference 1 haim avron kenneth l clarkson david p woodruff sharper bounds for regularized data fitting as stated in the previous section i think the method this paper proposes is interesting while there are still some perspectives that can be further improved hence currently i tend to vote a 5 for this paper i am willing to raise the score if i have some misunderstandings or some of the concerns can be addressed docsepthe paper considers approximately computing erm problems for real data with an emphasis on regression and classification in the very low space regime most experiments are below 1kb this is done my maintaining a standard locality sensitive hashing table but with specialized hash functions that allow for computing a surrogate for common regression and classification erm objectives due to the extreme space constraint optimization is done with a heuristically characterized sgd a significant amount of experiments is presented quality and clarity the paper is clearly written with very few typos listed below nice intuitions are regularly given and i especially liked the intuition paragraph in section 3 good writing the problem of approximate erm with low memory is theoretically compelling and i like the idea of finding approximate erm solutions using less memory than a single sample but i dont understand when this memory constraint is actually realistic when can a machine compute hash functions for the all data but cannot store a single sample or o1 samples in memory i ask because the experiments are not evidently compelling but they do use very very little space so when is this extreme setting compelling theoretical basis there are some odd points in the theory given behind their algorithm and data structure other than using nonstandard hashing functions what distinguishes the storm sketch from the race sketch they seem the exact same to me theorem 4 shows that the storm optimum recovers the least squares optimum for the linear regression problem you also use storm for classification do you have any theory to justify why storm should work for classification theorem 5 has this sumx in mathcal d sqrtkxtheta term how should i interpret this it is just the standard deviation of the race 
sketch why not just upper bound this by n in the numerator when is this numerator much smaller than n also is this reuslt theorem 5 mentions the medianofmeans estimate which is formally defined in the appendix but i would recommend adding it into the body of the paper why is estimator mentioned if it doesnt seem like you use it in the experiments or optimization do you use medianofmeans in the optimization or experiments is the choice of tanh on page 6 totally heuristic my review of the theoretical results and data structure design is that the results are believable and seem correct but lack technical novelty some of this eg thm 5 is not obviously distinct from prior work eg thm 1s variance some of what i would like to see is not shown eg correctness of using tanh classification version of thm 4 weak overall experimental evidence the experimental evidence continues to be a bit confusing i dont really understand why this is showing that storm is a good model in practice there are 12 plots showing statistical error vs amount of memory used but very few of these plots tell a compelling story in the following i refer to plots in the lefttoright then toptobottom order so the 4th plot refers to the first plot on the second row plots 2 3 4 5 6 7 9 10 11 and 12 suggest that as memory capacity increases storm does not approach the true solution instead the variance gets small while the bias remains large plots 3 4 6 7 10 and 12 even further show that storm returns a model whose error is closer to a naive mean than an actual least squares solution plots 8 and 9 dont have a benchmark to show what is achievable with unbounded memory plot 9 is fine since nearperfect accuracy is possible but plot 8 sees maximum accuracy 075 and its not obvious this is competitive with a larger memory model plots 1 and 2 dont have legible yaxes theres too few written ticks and for some reason only these two plots are on a loglog scale so it seems that experimentally storm uses very little space to be slightly better than computing a naive mean of the data i dont see how this is compelling especially since im not clear why the restriction on the memory has to be so intense as far as i am aware 1 mb of data is very very cheap today so whats the utility of working on the order of kilobytes if the error of the model is so huge my conclusions overall review is in the summary of the review section minor notes use citep instead of cite when adding a citation that doesnt naturally flow in the grammar this is eg most of the citations you use in the introduction page 1 end of linear sketches paragraph why is it that sketching cannot accommodate regularization if we can approximate axb for all x then we can also approximate axb x right 1 page 1 end of first paragraph the bill allcock 2001 citation seems weird the citation in the references also seems formatted wrong should this be breshanan et al 1 page 3 first line of section 22 it should be a locality sensitive hash lhs function not a locality sensitive hashing function capitalization and hash is more common than hashing but that ones personal preference 1 the section 3 intuition block is written really nicely props 1 page 4 bottom equation p isnt defined and its not intuitive at all that some parameter should be floating in the exponent 1 page 5 thm 2 equation same as above 1 page 6 in the sentence we now have the the left symbol is missing in the inline math 1 page 7 thm 5 should be let hat gtheta be the instead of let hat gtheta is the 1 page 8 end of figure 5 caption specify 
here that you increase the feature space to attempt to linearize the problem not just to introduce spurious dimensionality which is kinda how it reads as is 1 page 9 table 1 caption last sentence the dimensions in brackets not the dimension in bracket the paper has weak theoretical and experimental components the theory doesnt obviously show anything new of sufficient generality the data structure appears to be the same as in prior work on lsh the experiments suggest that using the storm estimator is only slightly better than returning the mean of your data i dont see the strengths of this paper anywhere i got more and more confused about this paper the more i read its well written but i dont see the substance beneath the nice writing docsepin this paper the authors focus on reducing the cost of conventional erm training and propose a new framework storm which integrates the approaches of locality sensitive hashing and sketches besides the authors use specific optimization algorithms to solve that this framework cant be directly applied to popular classification and regression tasks the experimental results prove that the proposed procedure can achieve better results compared to the baselines in the operational environment of low performance and low memory strengths impact the proposed flexible method reduces the scale of data set and the dependence of model training on hardware performance which may be helpful for distributed learning in the future experiment the author has done relatively sufficient comparative experiments including classification and regression under linear and nonlinear conditions with several datasets writing overall the manuscript is wellwritten and addressing a relevant problem by proposing an interesting learning method suited to edge devices reproducibility key resources eg proofs code data are available and key details eg proofs experimental setup parameters selection are sufficiently welldescribed for competent researchers to confidently reproduce the main results weekness experiment important details missing which can help the paper quality significantly including the mse and accuracy criteria time complexity and space complexity of different methods should also be discussed during inference is the proposed approach less computational expensive than baseline approaches it would be worth reporting the runtime eg in seconds or computational complexity or number of learnable parameters parameters the proposed approach storm is heavily based on some of the hyperparameters for instance p there is no empirical evidence that how the performance of the model will vary by changing p besides the value of p is different for different datasets ablation experiments are helpful to validate the robustness of the selected value for p instead of the two regression experiments for discussing p values with similar datasets in fig 6 description in general there is a lack of detailed explanation of replicates and statistical methods used in the study performance although this edge computing method reduces the dependence on hardware the reduction of accuracy seems to have a great impact from the experimental results in this paper can it be compared with more similar methods to highlight the advantages of this paper some results in figure 5 are worse than the benchmark method is it necessary to have some discussion the paper contributes some new ideas to reduce the compuation complexity of erm the experimental evaluation on storm is adequate and the results convincingly support the 
computation strategy of locality sensitive hashing and sketches
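for context the medianofmeans estimate the first reviewer asks to see defined in the paper body is the standard robust mean estimator split the samples into groups average within each group and return the median of the group means the minimal sketch below assumes the standard formulation with illustrative grouping size and data neither of which is taken from the paper under review

```python
import numpy as np

def median_of_means(samples, num_groups=10):
    """Standard median-of-means estimator: split the samples into groups,
    average within each group, and return the median of the group means.
    More robust to heavy-tailed noise than a single global mean."""
    samples = np.asarray(samples)
    # Drop a few trailing samples if they do not divide evenly into groups.
    usable = (len(samples) // num_groups) * num_groups
    groups = samples[:usable].reshape(num_groups, -1)
    return np.median(groups.mean(axis=1))

# Illustrative usage with heavy-tailed data (an assumption, not from the paper).
rng = np.random.default_rng(0)
data = rng.standard_t(df=2, size=10_000)
print(median_of_means(data), data.mean())
```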
### Summary:
|
the paper proposes a sketching algorithm for empirical risk minimization erm for linear regression and classification the technique is based on lsh with nonstandard hash functions the reviews indicate that the paper is well written and easy to follow however there are several concerns raised regarding its quality a major one regards the novelty of the paper mtpw the technical novelty is limited since previous work coleman shrivastava 2020 has used lsh to approximate kernel density estimation in a streaming setting 7lqm my review of the theoretical results and data structure design is that the results are believable and seem correct but lack technical novelty other than using nonstandard hashing functions what distinguishes the storm sketch from the race sketch an additional concern is a claim of weak experimental evidence there seems to be a need for more thorough experiments isolating different components rather than the system as a whole and in addition the bottom line results provide only a slight lift over a naive baseline 7lqm the experiments suggest that using the storm estimator is only slightly better than returning the mean of your data whether it is the case that the techniques should be improved or that these concerns could be addressed by improving the presentation of the paper the conclusion is that the paper is not yet ready to be published
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper provides a new perspective on the study of qvalue decomposition methods in the field of multiagent reinforcement learning in order to maintain decentralized execution while avoiding learning nonstationarity during the training phase centralized training with decentralized execution paradigm is widely studied the authors pinpoint a gap between local and global observation which leads to the fact that local actionvalue functions cannot represent the right q values such a representational error may accumulate temporally the authors use a supervised learning method with the label being action values conditioned on global observations to solve this problem and found it can improve the performance of multiple marl methods significance the paper discusses an interesting issue in ctde valuebased multiagent learning methods that is ignored in the previous literature as far as the reviewer is concerned the paper is interesting and the contribution may inspire many marl researchers quality 1 the major concern of the reviewer is that the experimental results cannot fully support the analysis in the previous sections figure 3 does not make sense a smaller sight range must hurt the performance but it may simply be because they have little information about the other agents and enemies it cannot be guaranteed that the discussed issue in this paper leads to this performance gap 2 another drawback of the paper is that some results are not surprising for example it is wellknown that q functions conditioned on local observations are less accurate than those based on global information and such an error will certainly accumulate temporally it would be much more interesting if the authors could bound this error the reviewer thinks the paper has made many interesting findings and does not see obvious negative societal impacts docsepthis work analyzes the current igm decomposition framework in the ctde problem especially the lossiness in value decomposition and the accumulation of this lossiness in the training process based on the analysis this work proposes a method combined with imitation learning to solve the lossiness between the global observation and the local observation in the value decomposition $Q(s, u) = \sum_i Q_i(s_i, u_i)$ the method part includes an expert agent trained using an offpolicy algorithm with global observation and a learner agent trained using a supervised algorithm with local observation through alternating training the training lossiness of the learner is alleviated strength the paper is well organized and well written and the analysismethod part is very detailed and the definition is more formal it is a relatively novel perspective to integrate imitation learning into the framework of ctde to alleviate value decomposition error weakness the flaws of igmbased methods are widely studied this paper has spent a lot of space on analysis but most of the analysis is not deep enough and i think the analysis result is not innovative enough the experiment part is insufficient the setting of the experiment is zero sight smac which is obviously more conducive to the setting of this method i think adding some experiments with normal sight in smac will be better this paper analyses the value decomposition error problem but the method of this paper is of limited help in solving this problem docsepthis paper analyzes the error accumulation in multiagent qlearning and focuses on the error caused by partial observability regarding this problem this paper integrates a dagger module to distill the fullobservability policy to the partialobservability
policy strength this paper studies an important problem and the idea is novel and makes sense weakness although i can roughly capture the main idea of this paper the notation and definition of many core concepts are not clear see questions the writing quality is far from a conference publication the discussions on background literature and related work are limited i do not penalize my evaluation by this point but i strongly encourage authors to include a related work section in the paper as this paper is titled by rethinking igm it would be better to connect with other papers studying the algorithmic property of vd methods maybe in other aspects eg 13 1 castellini et al the representational capacity of actionvalue networks for multiagent reinforcement learning aamas 2019 2 wang et al towards understanding cooperative multiagent qlearning with value factorization neurips 2021 3 huang et al multiagent qlearning with subteam coordination aamas 2022 docsepthis paper rethinks a popular principle in cooperative marl called individual global max igm decomposition discovers the limitation of the igm condition and proposes an imitation learning strategy igmda to address this problem due to partial observation local observation may not provide enough information thus the igm condition will be a lossy decomposition this paper trains an expert using a global state during centralized training and distils a decentralized agent using imitation learning empirical results show that igmda can outperform stateoftheart igmbased approaches with zero sight view the reviewer will list the main strengths and weaknesses as follows strengths igm is a very popular principle in the valuebased marl algorithm this paper rethinks the limitation of the igm principle which is very interesting and novel due to the partial observation igm is hard to keep the consistency between local and global greedy action selection this paper proposes a teacherstudent model to utilize the global information during centralized training to address this problem which is an appropriate way to handle it weaknesses 1 to address partial observation of igm conditions there is another way to enrich the observation of each agent using interagent communication 1 by discussing this branch of related work the position of this paper would become clearer 2 proposition 5 indicates that when multiple samples satisfy the same local observation tau imitation learning will take the average of these samples the problem is that if the policies on these samples are completely different imitation learning may lead to a bad policy ie the policy distillation will fail due to insufficient information the authors should discuss this case to justify the soundness of igmda 1 wang t wang j zheng c et al learning nearly decomposable value functions via communication minimizationj iclr 2020 the main concerns are listed in the weaknesses the authors should discuss the examples presented in the reviewers question to justify igmda
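for readers unfamiliar with the principle discussed throughout these reviews the igm individual global max condition in its standard form from the value decomposition literature is the following this is the textbook statement not a formula taken from the paper under review

$$\arg\max_{\mathbf{u}} Q_{tot}(\boldsymbol{\tau}, \mathbf{u}) \;=\; \Big(\arg\max_{u_1} Q_1(\tau_1, u_1),\; \dots,\; \arg\max_{u_n} Q_n(\tau_n, u_n)\Big)$$

ie decentralized greedy action selection on the peragent utilities must recover the greedy joint action of the centralized value the reviews argue that under partial observability this consistency can only hold approximately and that the resulting error can accumulate over time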
### Summary:
|
this paper revisits the notion of individual global max in multiagent reinforcement learning in particular considering how to address the fact that individual greedy actions may not be globally optimal in cooperative settings overall the general sentiment is that this is interesting work with a useful contribution but that the paper could be further improved there were some specific concerns regarding the experimental results which the authors answered in the rebuttal the results for sight view 5 are particularly relevant given that they identify a setting in which the system is not extremely partially observable but where their algorithm still provides benefits i also note that the papers presentation is less polished than it could be in a number of places for example table captions are sometimes brief missing punctuation graphs are somewhat hard to read the equation in prop 5 should be indented
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors consider encoding schemes for neural architecture search nas while current encodings model the data flow through the architecture operations by encoding each operation the current authors propose a new encoding based on the idea that different operations need different encodings since each operation has its own separate weights that are learned throughout architecture training specifically they perform multiple forward and backward passes on the architecture and output the trained embeddings as the encoding they also give symmetrybreaking techniques similar idea as random initialization they test their technique on nasbench101 201 301 and enas strengths intuitive and wellperforming the tagates intuition ie operations should not be encoded the same way because they have trainable parameters makes sense and tagates is an intuitive solution to this the symmetrybreaking technique also makes sense and overall these techniques are shown to achieve strong performance on four benchmarks they released code with a detailed readme so reproducibility is high good ablations ablation for number of time steps of tagates and ablation for symmetrybreaking technique the anytime performance prediction is a very interesting application of tagates weaknesses no nas experiments all of the experiments compute rank correlation or precision for performance prediction to show the usefulness of tagates in nas the authors should run full nas experiments similarly the method is only compared to four other performance prediction models mlp lstm gcn and gates but the authors could include even more for example there are other encodingbased works such as nasbowl contrastive embeddings for neural architectures does unsupervised architecture representation learning help neural architecture search cate and some of these works learn architecture embeddings which is similar in spirit to tagates so it is important to discuss and compare to them the specific tagates approach seems to be specific for each search space based on all of the details in section b3 so it would need to be tuned for each new search space repeatedly refer to gates and dvae as stateoftheart techniques but these have been out for more than two years there are other methods now such as the ones listed in the second point above the runtime of their method should be included in the main body as well as total cpu andor gpu usage no discussion on limitations or societal impact see below the authors did not discuss societal impact of their work and i also did not see a discussion on limitations remember that the neurips page limit was increased from 8 to 9 pages to give space for the questions in the author checklist so these sections should be taken more seriously docsepthis paper recommends a new gcn encoder based performance predictor built on top of the gates encoderpredictor while most encoders assign the same embedding for a given operation irrespective of their position in the architecture tagates proposes a predictor that captures the training process of the architecture and assigns different embeddings for a given operation given an adjacency matrix representation of the architecture a and the input information e for each time step it computes the information flow of a through a gcn similar to the gates paper this simulates the forward propagation of nn training then this information is transformed by passing through an mlp and the resulting output along with at is in turn passed through the gcn with different weight parameters for 
information propagation to simulate backpropagation of the training they compute derivatives and use the chain rule in their second information propagation the output of the backward propagation from the final timestep is the architecture embedding similar to gates they have a pairwise ranking loss and mse loss but rather than predicting the final accuracy tagates predicts the accuracy for every half epoch and full epoch time steps this is seamlessly learnt because tagates is trained for every timestep further to avoid the operations from having the same embeddings initially they use some symmetry breaking techniques such as 1 adding random noise 2 adding a vector composed of zerocost metrics such as gradnorm snip etc only initially or at every time step strengths tagates is able to predict the accuracy for every epoch and not just the final accuracy so in addition to using it as standalone for nas this would be useful to select models that perform well by training for fewer epochs where we have limited training budget tagates is able to outperform all the other baselines when trained on limited data weakness despite their new training strategy and improved correlation metrics tagates is not able to rate the topk architectures significantly better than gates as indicated by precision k the range of accuracies of architectures found in ndsdarts is a lot smaller than in nasbench 201 so in general it would be good to demonstrate that the surrogate models work effectively there nao was also initially developed on the darts search space see the weakness and questions section while the idea of training the surrogate model that captures the training dynamics of the model is an interesting idea it would be good to understand if it is really essential early stopping methods some of which are listed in 1 are able to predict the accuracies at a given epoch by just extrapolating the learning curves for your accuracy prediction at half of the training epochs it would be good to compare against those for table 3 docsepthe authors present a novel encoding scheme for neural network architectures based on the gates method they propose a significant change to the gates approach that trains a different embedding for each of the operations even if they are of the same type to achieve this they present a series of methods for symmetry breaking that allow the embeddings for each operation to vary independently they perform an exhaustive series of experiments verifying and ablating their claims to show that their approach tagates is indeed superior to the alternative encoding approaches they consider strengths this is a wellwritten paper the prose is clear and the arguments are comprehensible enough detail is provided to understand the approach suggested and to understand the experimental setup the extensive experiments seem to indicate that indeed tagates is a superior embedding to the suggested alternatives the area explored by the paper is extremely relevant as having better encodings for architectures can lead to better downstream models and better analysis of architectures based on the encodings the authors show the added benefits of tagates for anytime performance evaluation that can be extremely useful when partial learning curve information is required weaknesses they do not reference other nn encoding schemes like those suggested in 1 their argument that the architecture should include variation based on the specific parameters of the model seems suspect while clearly using different embeddings for the same
operations does seem to work well the argument that the architecture should incorporate this seems incorrect since most of the time the specific parameters of the operation do not seem to be relevant when designing architectures the authors dont include the runtime of tagates while i assume that the runtime is comparable to gates it would help to know the specific run times of their approach and compare it to the other approaches references 1 white c neiswanger w nolen s savani y 2020 a study on encodings for neural architecture search advances in neural information processing systems 33 2030920319 the authors have adequately addressed the limitations of their work
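a rough sketch of the timestep encoding loop described in the second review might look like the following the function names tensor shapes number of steps and the use of the transposed adjacency for the backward pass are illustrative assumptions inferred from the review text not details taken from the tagates implementation

```python
import torch

def tagates_encode(adj, op_emb, forward_gcn, backward_gcn, update_mlp, num_steps=2):
    """Illustrative sketch of a TA-GATES-style encoder (assumptions, see text).

    adj: (num_nodes, num_nodes) adjacency matrix of the architecture DAG
    op_emb: (num_nodes, dim) trainable per-operation embeddings
    forward_gcn / backward_gcn: GCN layers with separate weights
    update_mlp: transforms the propagated information between passes
    """
    h = op_emb
    for _ in range(num_steps):
        # Simulate one step of "training": forward information flow ...
        h_fwd = forward_gcn(adj, h)
        # ... transform it with an MLP ...
        h_msg = update_mlp(h_fwd)
        # ... and propagate it backwards along reversed edges,
        # mimicking backpropagation through the architecture.
        h = backward_gcn(adj.t(), h_msg)
    # The final per-node states are pooled into the architecture embedding.
    return h.mean(dim=0)
```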
### Summary:
|
the paper proposes a new encoding scheme for neural architecture search all reviewers agreed that operations with different trainable parameters need different encodings the paper is clearly written and wellmotivated it will shed great light on future work to consider trainable parameters for nas as suggested by dxbo and zsxx the paper still needs polish it should improve the comparison with competitor methods especially by providing other strong nas baselines and nn encoding schemes
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
summary and contributions the paper provides a novel approach to normalizing flows it models a normalizing flow as the gradient map of a convex potential function the gradient estimation and model inversion that are the computationally expensive part of arbitrary complex nfs are formulated as cheap convex optimization problems it proves that the proposed cpflow is a universal density approximator and also shows that it is an optimal transport ot map reasons for score i vote for clear acceptance the idea of convex potential flow sounds very interesting and the way the computational complexities are very useful for designing more advanced nfs the paper is very well written strength 1 the paper is very well written and structured and is easy to read for a wide audience it also provides a good review of main papers in the literature 2 the motivation for using cpflow is well explained 3 the claims are well supported by theoretical proofs and empirical studies 4 the gradient estimation and model inversion that are the computationally complex part of arbitrary complex nfs are formulated as convex optimization problems that gain advantage of fast converging and cheap optimization algorithms and it also leverages the efficient hessianvector product computation 5 the experimental results show the proposed flow can perform competitively with much less number of parameters additional feedback and questions 1 after reading the paper it is not quite clear why we do need an optimality in monge sense theorem 4 and what is the point of optimal transport in this work it is worth expanding or adding more insights to the motivation given in the introduction by the notion of rearrangement cost 2 orthogonality or lipschitzness constraints are mentioned without citation in introduction 3 what is so special about the softplus type nonlinearity in theorem 3 can we use other nonlinearities such as the symmetriclog derived in 1 that are monotonic and differentiable by construction 4 cpflow looks more expressive than naf in toy examples but why is it outperformed in density estimation as a more insightful comparison i suggest comparing the transport cost of the naf with that of icnn in section 52 figure 4 as naf is outperforming icnn in all the benchmarks and is universal 5 it is worth comparing the number of parameters of the optimal cpflow in table 1 with the available methods to have a better understanding of models flexibility maybe cpflow can achieve the sota if the number of parameters are normalized 6 i wonder if cpflow can outperform the residual flow which uses a similar gradient estimator routine if its number of parameters in table 2 is increased 7 compared to realnvp cpflow requires an optimization solution per each training update also as noted in the paper the computational cost of the cpflow is less than residual flows as it saves the spectral normalization process so how the speed and convergence rate of cpflow is compared against the benchmarks assuming all are using the same hardware eg gpus and cvx opt are implemented in gpu it looks better to have a sense of it in one of the experiments 8 i am willing to see the randomly generated sample of the cpflow especially to compare its localglobal stability against the residual flow due to its lipschitz constraint as discussed in 2 9 a schematic architecture of the icnn model helps better understand it ref 1 m karami d schuurmans jascha sohldickstein daniel duckworth laurent dinh invertible convolutional flow advances in neural information processing 
systems neurips 2019 2 behrmann jens et al understanding and mitigating exploding inverses in invertible neural networks arxiv preprint arxiv200609347 2020 docsep summary the authors introduce cpflows a way to parameterize normalizing flows by constructing an inputconvex neural net with softplustype activation functions and considering its gradient as the flow they add a quadratic term to ensure invertibility using convex optimization techniques their method only needs access to convex optimization solvers they show that this architecture is universal that is starting from a measure mu there is a sequence of cpflows converging weakly to a desired distribution nu they also prove that the constructed flow converges pointwise to the optimal brenier map for euclidean cost they perform a set of experiments on synthetic and realworld datasets and show their method delivers its promises strongweak points the ideas described in the paper are simple and easy to understand the paper is generally wellwritten with exceptions detailed below the ideas for computing the trace of log hessian and its gradient are neat however i am not sure if they are novel and not present in the literature all the results are asymptotic the convergence results are weak convergence to the optimal map theorem 4 is pointwise and convergence of the distributions is weak theorem 3 there are lots of tricks used here and there computing the trace of log hessian and its gradient to reduce computational complexity but nothing explicit is computed theoretically nor presented experimentally i am not sure if theorem 2 and theorem 4 are already known or not i might not recall the reference but i am pretty confident that they have been existing in the literature in the experiments looking at table 1 naf always outperfrom cpflow and no explanation has given for why is it the case the same is for table 3 where sylvester outperforms cpflow in every dataset i have decided to give a 5 to this paper as theoretically it does not add significantly to the current theory of ot and nfs experimentally i feel that the performance is probably marginally better than the other methods if it is not worse additional feedback fixing the following errors improves readability of the paper section 1 paragraph 3 line 4 the network has not been introduced and not clear where it is referring to same line 7 universality and optimality is not clear in what sense it becomes clear later page 2 line 2 the network architecture still is dangling section 2 two lines above universal flows does not make sense ordinary differential equations orthogonality or lipschitz constants are not nn architectures same two lines above optimal transport there is no reference why under very mild conditions a coupling exists same optimal transport the problem described is not monges problem its monges formulation for optimal transportation page 3 invertibility and the activation changed from s to g suddenly page 4 lines 11 12 are hard to understand why nans creep in page 4 3 why the terms are in red page 5 sigmoidal flow activation is called s while in the formula it is sigma same flows with potential the last two lines i do not see what the contrast is made against page 6 section 5 i do not see why the notation changes from z to h page 7 i do not know how to interpret the middle plots for fmathrmiafx and fmathrmcpxdocsep summary and contributions this paper presents cpflow an alternative formulation of normalizing flows via the gradient of a strictly convex function concretely an 
optimal transport map is constructed for the quadratic transport cost this map can be constructed via the gradient of a strictly convex function owing to breniers theorem the strict convexity of the potential function ensures that the gradient map is injective and surjective ie invertible this leads to relations with normalizing flows invertible maps that can be trained via exact maximum likelihood the authors borrow techniques from convex optimization to invert the gradient map and to compute the logdeterminant of the hessian and its gradient quantities that are required for density evaluation and gradientbased training using arguments from optimal transport literature the authors prove the universality and optimality of the gradient maps when they are realized using inputconvex neural networks icnns experiments in various settings of density estimation and variational inference have been presented that demonstrate a competitive performance of cpflow against competing normalizing flow models strengths this work makes a significant contribution to the body of literature on optimal transportbased generative models by parameterizing the flow as the gradient of a convex potential function rd rightarrow r this method improves the parameter efficiency of invertible models moreover the network architectures although restricted to icnns can be much more flexible than in conventional normalizing flows where networks have to satisfy restrictive conditions to ensure invertibility theoretical results also demonstrate that icnns have the capacity to learn transport maps between arbitrary continuous probability measures the theoretical results are supported by the empirical evaluation which despite lacking scale is sufficient to demonstrate the potential of the proposed method weaknesses the primary weakness of this work is that some directly related works that utilize breniers theorem to learn transport maps via icnns have not been discussed in sufficient detail apart from flowbased setups other works have also explored the construction of transport maps using icnns which have not been discussed see for example 1 2 3 moreover the experiments lack comparisons with these works or a discussion of why such comparisons are not possible another recent related work is 4 a potential concern is the efficiency of solving a quadratic optimization problem for computation of the gradient of the logdet of hessian for each gradient step did the authors compare the runtimes of the different methods 1 korotin alexander et al wasserstein2 generative networks arxiv preprint arxiv190913082 2019 2 taghvaei amirhossein and amin jalali 2wasserstein approximation via restricted convex potentials with application to improved training for gans arxiv preprint arxiv190207197 2019 3 makkuva ashok vardhan et al optimal transport mapping via input convex neural networks arxiv preprint arxiv190810962 2019 4 finlay chris et al learning normalizing flows from entropykantorovich potentials arxiv preprint arxiv200606033 2020 additional feedback suggestions include a proper discussion of related works that use icnns to parameterize the brenier potential and compare with these methods experimentally if possible questions is there a reason that the authors did not include a nsf ablation in table 2 did the authors investigate the quality of generated samples for the image datasets using fid is etc as the poor quality of images is a known problem for flowbased models the authors have addressed most of my concerns i believe the paper is a 
good contribution to the literature on normalizing flows therefore i firmly vote for acceptance docsepsummary this paper proposes a flowbased representation of a probability distribution so that the corresponding density remains tractable in particular the pushforward map that generates the desired distribution is characterized by the gradient of a strongly convex potential function the invertibility of the mapping as well as the jacobian of the mapping is hence guaranteed by such a convexity property of the potential function the proposed cpflows are proved to be universal density approximators and are optimal in the ot 2wasserstein sense comment it seems to me that the proposed cpflow model is quite similar to a set of recent works that use the icnn to parameterize the convex brenier potential of 2wasserstein this limits the novelty of this paper on the other hand i think one contribution of this paper is to study how the gradient of the log determinant of the hessian of the potential function can be computed which relies on a standard technique the hutchinson trace estimator another contribution of this paper is to study universality of the cpflow model as a density approximator since any convex function yields an affinemax representation which is a special case of the icnn breniers theorem implies the universality concerns 1 unlike the traditional flow models which consist of a sequence of simple transformations it seems to me that cpflow is not exactly a flow model as it only involves a single but complex transformation which of course contains multiple layers correspondingly the computations involved in the flow model inversion and gradient evaluation are substantially more expensive 2 the presentation of the paper focuses on the description of the model however i would suggest the authors add a few paragraphs to briefly discuss some problems that take the cpflow model as a subroutine eg variational inference or maximum likelihood loss this would make the paper more selfcontained and it will also justify the need to have a tractable density 3 more on the writing of the paper some derivations to show why 3 is important are appreciated eg $\det \frac{\partial f^{-1}(x)}{\partial x} = \frac{1}{\det \nabla f(f^{-1}(x))}$ again it would be good for a broader audience if the paper is more selfcontained 4 how to efficiently implement the batch computation of the gradient of the model what do you mean by inverting a batch of independent inputs is as simple as summing the convex potential over all inputs the authors partially addressed my concerns therefore i raise my score to 6
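to make the quantities these reviews refer to concrete writing the flow as the gradient map $f = \nabla F$ of a strictly convex potential $F$ the jacobian of $f$ is the hessian $\nabla^2 F$ which is symmetric positive definite so the changeofvariables logdensity its logdeterminant estimator and the inversion take the standard forms below these are textbook identities not equations copied from the paper

$$\log p_X(x) = \log p_Z\big(\nabla F(x)\big) + \log\det \nabla^2 F(x), \qquad \log\det \nabla^2 F(x) = \operatorname{tr}\log \nabla^2 F(x) \approx \mathbb{E}_{v}\big[v^\top \big(\log \nabla^2 F(x)\big)\, v\big], \qquad f^{-1}(z) = \arg\min_{x}\; F(x) - z^\top x$$

where the expectation is over random probe vectors $v$ with $\mathbb{E}[v v^\top] = I$ the hutchinson trace estimator mentioned in the third and fourth reviews and the inversion is the cheap convex optimization problem the reviews describe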
### Summary:
|
the main contribution of the paper is a novel parametrization of normalizing flows using ideas from optimal transport theory this new parametrization allows viewing a normalizing flow as the gradient of a convex function which leads to an efficient method for gradient estimation for likelihood estimation using only access to convex optimization solvers the paper is overall wellwritten and provides a clean analysis theoretical results from the paper are supported by experiments the paper was overall viewed favorably by the reviewers
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
summary this paper presents dialograph which introduces graph structures to encode the relations of strategies and dialog acts among utterances in a dialog history this paper empirically demonstrated the efficacy of dialograph on noncollaborative negotiation dialog tasks and the model is evaluated automatically and with human evaluation strong points 1 training to control for the pragmatics of the negotiation dialogues has been less studied and it is a crucial and interesting topic to build a logical dialog system 2 this paper is well structured and well written 3 introducing gat and asap to model strategies and dialog acts is novel and the interpretation of their results is interesting 4 this paper also provides sound experimental results and comparison with other sota models weak points 1 the dialog task problem setting seems unrealistic a tuple of utterance dialog act and strategy at turn i is given and based on the previous tuple sequence the model predicts the next response dialog act and strategy however in reality the users dialog act and strategy are hidden 2 also the sets of dialog acts and strategies are different depending on user and system i agree this paper followed the task as previously defined but it seems awkward 3 hedtransformer vs dialograph those two models show similar experimental results according to the experiment configuration the hedtransformer used 6 decoder layers whereas the dialograph used 2 graph layers those two models were fairly comparable with respect to the number of parameters or the proposed method based on gatasap is not as effective as the transformer because both are basically based on attention mechanisms 4 in table 4 the average words per turn should be reported separately for users and bots questions 1 how does the bot decide the price to offer does it solely depend on the language model ie the decoder 2 how does the bot encode the listed price 3 in table 4 why hed got a remarkably higher score on the natural measure than other models 4 please address and clarify the weak points abovedocsepthis paper proposes an endtoend dialogue system that leverages graph attention networks to model complex negotiation strategies the main contributions that the author claims are that modeling negotiation strategies through a gnn and using these learned strategies to predict future strategies and generate a response leads to better negotiations the end to end model contains a traditional hierarchical encoder to obtain contextual representations along with a structure encoder that is designed to model strategies and dialog acts and obtain structural representations the decoder is a simple gru that produces the response by conditioning it on the contextual and structural representation along with the previous word strength weakness 1 the aspect of generating a response based on the prediction of negotiation strategies and dialog acts is an interesting approach 2 the results from table 1 show that transformers perform comparably or even better than dialograph what is the overall gain on using the gnn if the transformers match their performance 3 the results in tables 2 3 are hard to interpret prior research has shown that bleu and other automated metrics are not good enough to evaluate the performance of dialog systems why is the bert f1 score for dialograph in table 2 and 3 different what is the major difference 4 what were the definitions provided for persuasive coherence natural and understandable why is there a huge dropoff on the metric for naturalness from
hed to other models how many participants were recruited questions 1 what is the rationale behind 5 negotiation classes was any ablation done with this to determine the optimal number or how this split affects the outcomes 2 what is the overall size of the vocabulary during decoding 3 how were the 4 outcome dialog acts labeled across the dataset was this done through a human annotation process 4 what decoding strategy was used suggestion 1 section 24 has inconsistencies in the usage of notations please fix those issues 2 can the bolding of scores in table 1 be made more consistent and highlight only those scores that are higher and lower based on the metrics being represented after reading the authors response i thank the authors for answering all the questions that been raised by the fellow reviewers looking at the responses and changes made to the paper i have increased the score from 5 to 6 after the authors clarified the issues i had with the paper overall this paper demonstrates the effectiveness of using a gnn for negotiation dialogues i feel that this approach can be applied for any noncollaborative dialog settings and the claims of interpretability make this approach better docsepthis paper deals with the problem of natural language generation for a dialogue system involved in complex communication tasks such as negotiation or persuasion the proposed architecture consists of two encoders one for the utterance and the other for dialogue acts and negotiation strategies the decoder is an rnn that converts the encoded vectors to the output utterance each utterance is first passed through bert to get an utterancelevel encoding the sequence of utterance encodings is then passed through an rnn to generate a conversation level encodings the negotiation strategies and dialogue acts in a conversation are represented using a nodeedge graph where the nodes are one of the n different strategiesacts and there exists an edge from node a to node b if an utterance with strategy a precedes any utterance with strategy b the entire architecture is trained in a multitask setup where the loss function accounts for both the predictions of the model and generated language the proposed architecture is evaluated on the craigslistbargain dataset and compared against zhou et al 2020 the paper is very clearly written and the experimental work has sufficient detail to ensure reproducibility the main contribution and the novelty of this paper is in the use of graph neural networks for encoding dialogue acts and negotiation strategies this choice was mainly because it helps with better interpretability of predictions and this is demonstrated anecdotally in section 5 the proposed model shows better performance in three different metrics when compared to sota from zhou et al 1 prediction of dialogue acts and negotiation strategies 2 on the downstream task of dialogue generation and 3 human evaluation to quantify the quality of generated language there are a few aspects of the paper unclear to me and could use more insight from the authors 2 the input to the gnn is a nodeedge graph where the edges exist between dialogue acts or negotiation strategies based on their precedence order in the conversation it would be useful to explain why the authors chose this type of representation what other types of representations were considered 2 the authors use two different encoders for dialogue acts and negotiation strategies would it make sense to have a graphical representation that captures both dialogue acts and 
negotiation strategies simultaneously 3 from my understanding it seems the dialogue acts were annotated in the original work of he he 2018 and strategies are obtained based on the models and rules published in zhou et al 2019 i am not sure if the model evaluation should entail predicting negotiation strategies which are in itself predictions of a different model minor commentsquestions 1 how were the traintestdev splits done 2 if i am not mistaken it seems like the model uses predictions from time step t to predict and generate for time step t1 does this mean errors in one of the earlier timesteps could lead to more errors in subsequent timesteps 3 i think it would be clearer if you were to use a singleletter variable for strategies it helps for better readability 4 references please use the peerreviewed version of the paper as opposed to the arxiv version when available eg he he et al 2018 5 how many conversations were used in human evaluations docsep summary this paper proposes to dialograph a model that utilizes graph attention networks to learn the pragmatic relations between strategy and dialogueact in negotiation dialogues and achieves better performances than previous baselines the use of graph attention networks also enables a better interpretation of the relation between strategies quality the motivation to introduce pragmatic information into negotiation dialogues is clear the model is straightforward and effective the experiments and analyzes are through but the idea of directly applying graph attention networks is not super exciting clarity the paper is in general written clearly and easy to follow with a few points to improve listed below 1 in 24 pj is the positive weight associated with the particular strategy is this pj the probability of the jth strategy occurring in ut1 mentioned before 2 in section 5 we notice that as soon as the first propose at u5 happens the strategies completely change and become independent of the strategies before the propose point its not clear to me why the strategies are independent looking at figure 3 and also why is the weight in figure 3 equal to 1 and the sum of outgoing edges bigger than 1 whats the max and min and whats the cutoff threshold for the edges in figure 3 3 in section 4 several results are in bold if they have statistically insignificant differences its not clear which results are significant and which are not from the tables and the descriptions 4 id appreciate more dialogue examples in the appendix 5 how hard is it to optimize the graph networks is the training stable originality the model is not very novel as graph networks have been used a lot in dialogue models significance dialograph could be potentially useful for other negotiation systems but it requires both strategy and dialogueact annotations which may be hard to obtain pros 1 the interpretation of the learned strategy graph is useful but that section needs a bit more work in clarity 2 the use of graph attention network improves the negotiation dialogue system cons 1 the model is applied on one negotiation dataset only and requires both strategy and dialogueact annotations which may not be available in other datasets making the model not so generalizable to other tasks 2 from the results it seems the major performance jump comes from using the embeddings from pretrained language models transformer bert etc 3 its not clear which results are significant and which are not 4 it seems there is no connection between the strategy structure encoder and the dialogueact 
structure encoder maybe adding a connection between these two modules would give a better performance since dialogueact and strategies can be related also are they related at all if not why do we need both labels typo missing a period in human evaluation effective negotiation system our model also
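the graph construction described in the third review an edge from strategy a to strategy b whenever an utterance with strategy a precedes one with strategy b could be sketched roughly as follows the data format and the absence of edge weights are illustrative assumptions not details from the dialograph implementation

```python
import numpy as np

def build_strategy_graph(utterance_strategies, num_strategies):
    """Illustrative sketch (assumption): build a directed adjacency matrix over
    strategy nodes, adding an edge a -> b whenever some utterance labelled with
    strategy a occurs before an utterance labelled with strategy b.

    utterance_strategies: list over utterances, each a set of strategy indices.
    """
    adj = np.zeros((num_strategies, num_strategies))
    for i, earlier in enumerate(utterance_strategies):
        for later in utterance_strategies[i + 1:]:
            for a in earlier:
                for b in later:
                    adj[a, b] = 1.0
    return adj

# Toy conversation with three hypothetical strategy labels.
conv = [{0}, {1, 2}, {2}]
print(build_strategy_graph(conv, num_strategies=3))
```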
### Summary:
|
in the context of constructing negotiation dialogue strategiespolicies the authors explore the use of graph attention networks gats for determining the sequence of negotiation dialogue acts specifically leading to a 1 hierarchical dialogue encoder via pooled bert gru encoding 2 gat over dialogue strategiesacts many technical details around graph usage 3 gru decoder while a relatively straightforward replacement relative to similar architectures with other structural encoders they provide a sound endtoend training strategy that is shown to perform well on the buyerseller negotiation task via craigslistbargain dataset where they demonstrate sota performance pros studying the pragmatics component of negotiation dialogue strategies has received recent interest and this seems a good milepost that demonstrates mainstream methodological approaches for this task ie this is a good baseline for future innovations the paper is wellwritten in that it is easy to understand intuitively while having sufficient detail to understand the details the empirical results appear promising and meet the standard within this subcommunity showing improvements with automatic and human evaluation cons this builds on existing datasets which are known to have undesirable properties eg automatic evaluation small number of dialogue datasets use of explicit dialogues acts etc while it still meets the standards of this subcommunity it still isnt a completely convincing task while the use of gats is novel in this setting and they get it to work within the overall architecture this is something that many people are likely trying at this time so there isnt an exciting disruptive step here the empirical results while satisfactory from a quantitative perspective even in reading the appendices it isnt clear that these are significantly better from a planning perspective or if it is just pattern recognition gains evaluating along the requested dimensions quality the underlying method is fairly straightforward and the authors incorporate uptodate gatrelated methods to get this to work in this setting the empirical results are sound if predicated on the general quality in this subcommunity where you have the standard machine translation evaluation problem for meaning vs lexical closeness to mitigate they use bertscore and human evaluation which is at the higher end of what can be reasonably expected clarity the paper is written clearly overall especially if considering the appendices where there is significant detail related to empirical evaluation it isnt easy to intuitively interpret the results but this is again par for the course additionally i believe the authors did a good job responding to reviewer concerns originality while all of the reviewers agreed that the approach was novel in this setting one of the reviewers explicitly pointed out that using gats in negotiation dialogues isnt that exciting and i mostly agree i view this as something that somebody would have done and will serve as a good baseline although i think this subfield is going to need more datasets to continue progressing significance as stated above it is a good baseline that i think many are likely thinking of as the tod community has been doing this for a bit now however it is done well honestly i agree with the reviewers that this is a somewhat borderline paper mostly due to it being a fairly obvious idea and the nature of the subfield making it not entirely clear if the improvements are due to knowing the target performance while training or due to the 
methodological advance personally i am convinced but it isnt totally clear that being said it is a wellwritten paper and i think the reviewer issues were sufficiently addressed thus i would prefer to see it accepted as i think it will be a strong methodological baseline for this problem which hopefully will accumulate more convincing datasets and standard evaluation
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the authors propose a dynamic pruning algorithm by combining multiple methods like early exit and a topk loss function the experimental results show a good performance over the vgg network whereas the performance when using resnet architectures decays a little bit overall i think it is an interesting approach but the novelty is minimal the authors combine multiples techniques that already exists in the literature although the combination is done in a clever way all techniques are barely explained making the reader to go to the original papers to properly reproduce their approach regarding to the experimental setup too much work has to be done before publishing it there is no comparison against other dynamic pruning algorithms like 1 or 2 furthermore it is only tested on huge networks that have a bad accuracy vs size ratio i suggest the authors to include both more datasets and networks finally the results obtained are not impressive if we compare them against the papers mentioned above 1 liu z xu j peng x xiong r 2018 december frequencydomain dynamic pruning for convolutional neural networks in proceedings of the 32nd international conference on neural information processing systems pp 10511061 2 lin s ji r li y wu y huang f zhang b 2018 july accelerating convolutional networks via global dynamic filter pruning in ijcai vol 2 no 7 p 8 although the idea is interesting the novelty is minimal besides that more work has to be done in the experimental section to prove this new algorithm can achieve stateoftheart results in dynamic pruning docsepthis paper focusses on a dynamic filter pruning technique that reduces the longest path latency of inference while using explainable ai xai to help with determining pruning criterion the approach uses an early coarse prediction branch that is used to perform a topk classification this branch is added to the middle section of the neural network if this branch either predicts the output class with high prediction then the rest of the computation is skipped else this branch predicts the topk classes for this input and executes the rest of the part of the nn while executing this part only filters useful for predicting these classes are loaded during execution to determine the set of filters useful for a certain set of classes the paper uses xai to determine importance criterion for these filters for each class the authors evaluate their work on cifar 10 and cifar 100 benchmarks on vgg1116 and resnet20 architecture while comparing against a dense benchmark and a simple static pruning technique weakness 1 this work was evaluated on cifar 10 and 100 without evaluation on a larger dataset like imagenet the usefulness of this work is harder to evaluate 2 this work also does not compare against strong alternatives like prior work in dynamic neural network pruning eg see a and b for relevant related work or compare against state of the art static pruning techniques see c or more recent efficient architectures like mobilenet or efficientnet the one static pruning technique compared against is l1 pruning using a oneshot framework this is a really weak benchmark to compare against without comparison against these alternatives its really hard to evaluate the effectiveness of this technique 3 the motivations of using xai for pruning criterion is also unclear given that the added understanding of importance was not used anywhere during the work 4 the authors also dont motivate the reason for importance of reduction of longestpath latency of inference via discussing and 
showing this element to be a limitation of various prior work further not all dynamic inference techniques suffer from this issue eg see a questions for the author 1 what is the importance of xai based filter importance criterion wouldnt other type of filter pruning criterion also help with this objective 2 one of the arguments that the authors make is that this work reduces the longestpath latency of inference however the author do not provide any data showing what the worst case scenario for prior works looks like and why is that a problem worth solving suggestions for improvement 1 the authors should strengthen the motivation of this work see weakness 3 4 there are a lot of open questions regarding the choices for the architecture made in this work that are not backed by specific data showing that the reasons for the said decisions are reasonable 2 there is a huge body of work in the domain of dynamic filter pruning see related work in a and b some of these work eliminate the need of optimizing for worst case latency and provide strong benchmarks to compare against a httpsarxivorgabs181005331 b httpsdlacmorgdoi10114534173133429380 c httpsarxivorgabs200303033 the motivations in this paper are weakly established further the results in this paper have not been compared against relevant benchmarks or on more established benchmarks given these issues i recommend rejecting this paper docsepthe authors propose a set of steps in order to reduce the latency and the flops computation effort of neural networks they add early exit prediction layers a topk prediction layer and dynamic pruning based on the set of classes from the topk prediction layer the pruning uses statistics precomputed from a validation dataset in order to decide which filters are to be used for which class the statistics are obtained from deeplift scores strengths good readability except for the xai section which is to brief easy to understand concept faster on cifar100 weaknesses incremental novelty it is a combination of existing ideas no speed gains on cifar10 experimental evidence is done using only two networks and only cifar datasets what is needed add a second modern network postresnet add a more complex dataset than just the sketchy cifar10 datasets eg lsun coco or the smaller pascal voc or similar whatever size fits the budget as long as the images are of a higher resolution than 32x32 have a higher intraclass variability than cifar table 1 shows what topk performance for the topk prediction heads analyze the impact of the k in topk on the performance of the whole model after pruning the paper would strongly benefit from more extensive experiments some missing details weaknesses section on pruning xai is somewhat too brief and needs a clearer explanation of the process they store relevant filters precomputed for each class or on what outputs else are the deeplift scores initialized based on the coarse prediction the xai dynamic pruner selects the filter kernels of pruned convolution layers that need to be used how do they combine filters for the classes from the topk prediction for k1 do they use only k1 how much do you prune how do you decide how much to prune this is even more so important because for some reason cifar100 same resolution results in a much largerslower model table 6 vs table 7 is it just because of a 10x increased number of output classes and thus retained filters in the oversized linear branches of vgg16 language minor the importances are obtained once in the start and during prediction of the sample 
the ranking of filters is done this sentence can be misread as the importances are obtained once in the start and during prediction of the sample it needs an and or better a rewrite why table 7 lists reference model when the text states a statically pruned baseline question with what loss and labels the confidence in the exit branch is trained citations consider to cite httpsieeexploreieeeorgdocument9502445 instead of the arxiv for 20 also please consider to cite httpswwwsciencedirectcomsciencearticlepiis0031320321000868 httpsarxivorgabs210306460 its a paper which is a bit below the perceived acceptance level it is an empirical paper therefore for a high ranked conference it should come with a larger experimental evaluation and clearer details on how the xai was used and how much was pruned most problems have higher complexity than cifar10100 docsepthis paper proposed a new dynamic filter pruning method that utilizes explainable ai along with early coarse prediction in the intermediate layers of a cnn the early coarse prediction branch is trained using deep topk loss and the coarse prediction is used to dynamically select cnn filters relevant for those classes the filters relevant for all output classes are obtained in prior using explainable ai the dynamically pruned model by the proposed method is trainable and easily deployable on the various end hardware 1 incomplete experimental results 11 experimental results on more and more various datasets eg imagenet networks eg mobilenet efficientnet resnet50 and configurations eg various flops reduction ratio should be included 12 it is necessary to show whether the proposed method is effective not only for classification but also for more practical detection and segmentation networks 13 as with other pruningrelated sota papers in order to prove the superiority of the proposed method all experiments should be compared with various sota studies section 47 contains almost nothing 14 recently activation functions with negative values such as swish and mish are widely used in general pruning is vulnerable to activation functions including negative values due to the influence of shift parameters so please show whether the proposed method has robust properties to these activation functions 15 as you know finetuning requires a significant training cost so many filter pruning techniques that achieve excellent performance without finetuning have been proposed it is necessary to show whether good performance can be maintained even if finetuning is removed from the proposed method 2 novelty of the proposed method it is difficult to find a clear novelty in the proposed method because it is mostly based on existing methods and only a few changes have been made 3 it is difficult to agree with the argument of this paper because the motivation of this paper is not sufficiently presented to compensate for the fact that the experimental results of this paper are not sufficient to prove the superiority of the proposed method the motivation of this study needs to be clearly presented at the beginning for enhancing the novelty of this paper 4 to support the sentence our proposed architecture for dynamic pruning can be deployed on different hardware platforms analysis and experimental results for more diverse hardware eg embedded gpu board fpga asic should be accompanied 5 recently many kernel pruning studies have been attempted and therefore it would be good to explain the advantages of the proposed method compared to kernel pruning targeting the kernel in the 
filter 6 the relationship and role of explainable ai in the proposed method are not well explained and it is not clear why the authors performed studies belonging to instancewise dynamic models among the three categories the contribution needs to be made clearer in addition the experimental part that supports the contribution needs to be thoroughly supplemented please address my concerns in the main review through the rebuttal process
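to make the pipeline discussed in these reviews concrete, the following is a minimal illustrative sketch in pytorch-style python of class-conditional filter selection driven by an early-exit top-k prediction; the function names, the value of k, the per-class budget, and the per-class importance table are assumptions for illustration only and are not the authors' released code (the importance scores are taken as given here rather than computed with deeplift)

```python
import torch

def select_filters(coarse_logits, class_filter_scores, k=3, keep_per_class=16):
    # coarse_logits: (num_classes,) output of the early-exit branch for one sample
    # class_filter_scores: (num_classes, num_filters) precomputed importance scores
    # returns a boolean mask over the filters of one convolutional layer
    topk_classes = torch.topk(coarse_logits, k).indices
    mask = torch.zeros(class_filter_scores.shape[1], dtype=torch.bool)
    for c in topk_classes:
        keep = torch.topk(class_filter_scores[c], keep_per_class).indices
        mask[keep] = True          # union of filters relevant to the top-k classes
    return mask

def apply_mask(feature_map, mask):
    # zero out pruned channels; a real deployment would skip their computation entirely
    return feature_map * mask.view(1, -1, 1, 1).to(feature_map.dtype)
```

seen this way, the worst-case latency question raised above corresponds to inputs for which the union of per-class filters covers nearly the whole layer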
### Summary:
|
summary this paper presents a technique to reduce the worstcase latency of inference the key idea is to use a combination of early exit and filter selection to achieve its results the filter selection predicts the topk classes for the input and using that indication uses the filters that are the most relevant using deeplift to refine the result strengths from discussion the idea is interesting early exit mixtures of experts one potential interpretation of the filter selection here as well as pruning are interesting mechanisms for neural network efficiency there may be new opportunities to find synergies in their combination weaknesses from discussion the clarity of writing could be significantly improved particularly in the description and illustration of the constituent techniques figures such as those in httpsarxivorgabs200813006 that clearly present the constitution of various layers in particular would help there are relevant and applicable baselines that a comparison would contextualize the strength of the approach as per reviewer vkuc examples imagenet experiments appear to be within reach of this experimental apparatus ie without extreme cost hence such experiments would validate the applicability of this approach to practice a small point arose that longestpath inference was not motivated work on optimizing tail latency httpsresearchgooglepubspub40801 may be helpful contextualization here recommendation my recommendation is reject the work here is a very promising start for a new idea though requests for additional experimentation and baselines can be illdefined recommendations here scaling of the results to imagenet as well as comparing against baselines in the literature as per reviewer vkucs examples would provide much stronger scoping for this work
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper introduces a class of software called automatic propagation ap an extension of autodifferentiation software as well as a specific instance of it called proppo ap allows users to implement custom learning algorithms into the base unit of a propagator that can be reused the authors note that many ml software packages adhere to the same paradigm of a surrogate loss to be used with backpropagation while other forms of calculations are not optimized for or considered as they are not the standard as such they design ap to be a general message passing framework for ml the basic unit of proppo is a propagator that implements both a forward mode which performs computation and stores any necessary data for the backward mode and backward mode of computation which performs backward computation sending and recieving messages to other nodes the whole process is managed by a propagation manager the utility of proppo and the ap general system is demonstrated by the implementation of total propagation and gaussian shaping gradient algorithms for mc gradient estimation which are then shown to perform in terms of gradient accuracy orders of magnitude better than the standard surrogate loss autodiff method for certain classes of problems strengths the paper is well written and motivated the contribution is important and original the experimental results are thoughtful and validate the claim that there are situations where total propagation algorithm can be more accurate gradient estimator than those that can be achieved by standard ad software weaknesses the paper does not seem to link to any open sourced code which would be a nice addition to the paper to see some examples of how the more sophisticated propagators could be implemented the authors have addressed some limitations of the work ie that it is ultimately up to the user to use the framework well to achieve the desired aims of composability and customisability giving further guidelines and documentation for users on the best ways to use the framework would be worthwhile especially if open sourced docsepthe authors present a software framework proppo for constructing propagation algorithms on generic computational graphs these can be used to implement a variety of learning approaches the authors make the case that existing ml frameworks because of their limited capabilities of operating on generic computation graphs discourage researchers from pursuing particular directions the authors present example approaches to monte carlo gradient estimator and the total propagation algorithm implementations using proppo then show an experiment demonstrating total propagation can improve stability of an rnnbased system with nearlychaotic dynamics strengths the idea of developing a generic framework to perform propagation on computational graphs is important and indeed opens the doors for many new learning algorithms that may not be based on gradientbased learning or may involve gradient computation with higher order or other methods there is a need for such a generic framework to implement such algorithms weaknesses the paper is very light on the details of the framework itself the apis semantics for implementing other learning algorithms or implementation details are not explained at all in neither the paper nor the appendix this makes verifying and validating the quality and correctness of the authors implementations difficult the work is missing a set of performance benchmarks which are especially relevant for monte carlobased estimation methods 
total propagation or similar methods are not new and while an implementation of them using a generic framework has utility it is not clear if such an implementation is competitive computationally with other implementations the authors present no new learning algorithms imagined with proppo instead benchmarking existing ones the paper would be made stronger if the authors were able to present case studies demonstrating the utility of their approach in more detail the monte carlo gradient estimation example in the paper while an interesting setting in which to study stability is not convincingly broadly useful the authors should make such a case and should expand the set of learning settings in which they explore these algorithms taking advantage of their interoperability with pytorch proppo as a framework allows users to implement rp and lr algorithms as discussed but these algorithms are not new and the authors do not detail why an implementation enabled by proppo is any better than existing implementations of these algorithms beyond positing that there may be correctness issues with existing implementations that proppo helps avoid these ideas are very underspecified docsepexisting automatic differentiation ad frameworks do not provide flexibility for composing learning algorithms this paper invents automatic propagation ap software which generalizes ad and allows the construction of complex learning algorithms this work serves as in my opinion a kind of software illustration that incorporates modern learning algorithms as a result i didnt find this paper to make a major academic contribution see strengths and weaknesses
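since the reviews above ask for more detail on the propagator abstraction, here is a deliberately simplified sketch of what a forward/backward message-passing unit managed by a tape-based manager could look like; the class and method names are illustrative assumptions and are not proppo's actual api

```python
class Propagator:
    # base unit: a forward mode that computes and caches what the backward mode
    # will need, and a backward mode that turns an incoming message into
    # messages for the node's parents
    def forward(self, inputs):
        raise NotImplementedError
    def backward(self, cache, message):
        raise NotImplementedError

class SumPropagator(Propagator):
    # ordinary reverse-mode differentiation of a sum, phrased as message passing
    def forward(self, inputs):
        return sum(inputs), len(inputs)
    def backward(self, cache, message):
        return [message] * cache        # d(sum)/d(input_i) = 1 for every parent

class PropagationManager:
    # records which propagator produced which value, then replays the tape in
    # reverse; only a linear chain of nodes is handled to keep the sketch short
    def __init__(self):
        self.tape = []
    def run(self, propagator, inputs):
        out, cache = propagator.forward(inputs)
        self.tape.append((propagator, cache))
        return out
    def propagate(self, final_message):
        messages = [final_message]
        for propagator, cache in reversed(self.tape):
            messages = propagator.backward(cache, messages[0])
        return messages
```

a likelihood-ratio or total-propagation style unit would differ only in what forward caches (for example sampled noise) and in how backward combines the incoming messages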
### Summary:
|
as an ac of the paper i have read this paper and made the decision not only based on the reviewers reviews but also considering my own judgment as well unfortunately one reviewer could not provide a review with high confidence and the other two disagreed where reviewer kjqn was willing to nominate the paper for publication summary this paper presents a software framework for a method that the authors called as automatic propagation ap ap generalizes ad and allows us to implement custom and composable learning algorithms however most of the ml frameworks previously focused on ad rather than implementing general message passing algorithms for learning with montecarlo gradient estimation this paper proposes a software framework called proppo implemented in pytorch to fill this gap proppo has base units called propagators that implements the forward and backwards mode of the computation propagators are quite flexible and users of the framework could easily download propagators to implement algorithms like backpropagation and the propagators are managed by propagation managers the paper uses this software framework proppo to implement algorithms like total propagation and gaussianshaped gradients total propagation is in particular interesting because it combines lr and rp it did not have a python implementation before the authors provide extensive analysis of those both in terms of variance and the speed of their framework strengths and weaknesses this paper studies an important and interesting problem in machine learning on designing software frameworks that could easily enable the implementation of learning algorithms that make use of monte carlo gradient estimation i am not an expert on messagepassing frameworks but the overall proposal looks novel to me overall it feels like this papers strengths outweigh the weaknesses i think majority of the weaknesses could be easily addressed in the cameraready version of the paper strengths important problem and provides a wellthought solution for automatic propagation thorough and convincing experiments provides a simple and flexible solution for ap which i hope will enable future research by enabling the implementations of mc gradient estimation algorithms like total propagation that didnt have a python implementation before weaknesses clarity the writing could be improved as reviewer jeb5 noted the paper is very light on the details of the framework figure 1 feels unnecessary and the same story could be told without it figure 2 is not referred from anywhere in the main text that figure lacks sufficient description and is difficult to understand for example what do the dashed lines in figure 2 mean in the paper it is not clear what makes an algorithm like total propagation to be difficult to implement in adbased frameworks such as in storchastic framework the authors should clarify in this paper why it is easierbetter to implement these algorithms with ap and proppo especially it feels like the authors can do a better job at justifying and motivating message passing algorithms in the paper in the current version this is briefly done in the intro by citing minka et al 2019 slides code the current version of the paper lacks the source code but the authors promised to release the code with the paper i would at least expect to see a submission of code with the supplementary material for an early review especially it would be very helpful if the authors could present an example of an ml model such as a variational autoencoder implemented with proppo 
decision the paper and the software framework proppo are important contributions to the community that might interest the broader neurips community as far as i am aware ap proposed in this paper is novel and it helps implement montecarlo gradients on stochastic computational graphs there are also a bunch of other possible usecases of prop the authors mentioned i can definitely see this type of framework becoming popular quickly in the community if it is implemented well and is easy to use from the examples provided in the paper proppo seems to have a relatively simple interface and is easy to use i am willing to nominate this paper for acceptance with the condition that the framework will be opensourced with the publication of this paper without opensourcing proppo this paper would not be as impactful besides i would recommend the authors clarify the following points in the cameraready version of the paper refer to the computational time experiments in app b3 previously b2 from the main paper the paper should incorporate the clarifications requested by reviewer kjqn into the main paper the paper should compare and clarify its differences with respect to other stochastic gradient estimation pytorch libraries such as storchastic i would recommend that the authors address my concerns about the clarity and code in the weaknesses section for the cameraready version of the paper
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper proposes a framework that adds extra layer normalizations and scaled residual layer to the normal gpt3 model empirical results demonstrate faster convergence speed and its robustness towards the learning rate in general this is a good paper with sufficient and exhaustive experiment results the 60 timesaving is impressive the ablation studies show that both rescaling and more layer normalizations are contributing to better quality in masklm accuracies and zeroshot tasks the 3 gpt3 models of different size scales prove the generalizability of the techniques there are some small points that i think the authors can improve to make their statement stronger the current framework though claimed to be transformerrelated modification is conducted in gpt3 only it would be helpful to see whether this transformer modification can be used in other frameworks such as bert and t5 or a single transformer model the peak lr experiment does show that the current modification will allow the normformer to have up to 00275 learning rate stability which compared to the original 002 baseline does not have a magnitude difference it would not be that a strong statement to argue for the added robustness if higher lr would not always lead to better quality in general i hope the authors can show more evidence if this is the case and the reason why they care about this aspect the paper shows a simple yet effective way of modifying the existing gpt3 framework by increasing only a small number of parameters and computation per epoch which can be a potential addon to many existing transformer architectures docsepnormformer improves on preln transformers by making the following modifications learnable scaling parameters for each dimension of the output of each attention head prior to concatenation across heads scaled attention layer norm on the attention output post attn ln layer norm on the ffn nonlinearity output ffn ln and learnable scaling parameters for each dimension of the skip connection around the ffn scaled residuals they apply normformer to gpt3 and robertastyle model configurations and find that normformer models reach baseline isoaccuracy in 2243 less time and achieve notably lower higher isotime perplexity accuracy they then conduct a number of analyses to attempt to understand why normformer works i think this is mostly solid work the writing and methods are easy to understand and the contributions are clear and convincing however i think the baselines could be more challenging and the analyses more convincing feedback i appreciate the authors clear communication and presentation of their work i also applaud their use of gpuhourmatched baselines my main concern with the present work is that it is evaluated against baselines with no stabilitytrainability interventions such interventions exist the authors mention a number of them in the related work section but without including any of them its difficult to directly assess whether the present work is an improvement over existing methods im a bit confused the caption of figure 2 says the green star shows the point where normformer outperforms the baselines lowest perplexity reflecting 22 and 43 compute savings for clm and mlm models respectively while the first paragraph of results says normformer matches preln models while needing only 60 and 57 as much compute for clm and mlm models respectively either im misunderstanding your metrics or one of these statements is incorrect regardless i would encourage the authors to pick one efficiency metric and 
use it consistently and exclusively personally i think x as much compute or x the amount of compute are more intuitive as they clearly mean normformerbaseline100 amount of compute i dont find the gradient norm analysis section 41 particularly convincing its hard to know whether the effect of normformer on gradient norms is meaningful and even whether its causally related to the performance improvements it would be great to see some experiments demonstrating the causal effect of gradient norm changes on trainability at the very least i think readers would appreciate some theoretical justification or intuition why is it desirable to have similar gradient norms across layers also in figure 3 most of the layer 0 gradient norms are out of the frame in both preln plots making comparison difficult i suggest changing all three plots to a logarithmic yaxis i think it would be very informative to examine the scaling parameters over the course of trainingespecially early in training the early phase of transformer training seems to be the most fraught this is when the loss tends to explode and this is when you and others show the largest changes in gradient magnitudes analyzing only the final learned parameter values ignores the most important part of the story there seems to be an implicit assumption that the converged scaling parameters eg figure 4 are good for training does initializing the normformer parameters to be of similar magnitude to what is observed at convergence yield improvements over the current initialization scheme layernorm consists of a scale and bias but only the scale parameter is analyzed its possible that the bias could counteract the effect of the scaling although i think this scenario is unlikely it should nevertheless be accounted for the resscale results end of section 4 should be expanded and plotted and why did the authors choose only to look at the minimum value of lambdaresid the layerwise analysis of lambdaresid should be repeated with different metrics please include quantitative results from the other experiments section there are a few relevant papers that might be worth discussing or at least referencing zhang et al conducted earlier research on the importance of initialization improving deep transformer with depthscaled initialization and merged attention 2019 gradinit zhu et al 2021 is recent and also quite relevant powernorm shen et al 2020 and catformer davis et al 2021 both also take similar approaches to solving transformers trainability issues layerscale from touvron et als going deeper with image transformers 2021 examines the trainability problem in vision transformers and find a solution that appears similar to one the present innovations finally brock et al characterizing signal propagation to close the performance gap in unnormalized resnets 2021 examine cnns but their approach and findings are quite relevant transformers are notoriously unstable to train progress on this problem is of clear value to the field the baselines could be more challenging and the analyses more convincing but if these issues are addressedwhich i am optimistic abouti think the paper could be a meaningful contribution to the field docsepcompetitive empirical results but four modifications two norms and two scalings are not well motivated and explained this paper proposes some modifications including two normalizations and two new scaling operations to mitigate the issue of gradient magnitude mismatch inside the transformer the resulting normformer improves pretraining perplexity 
and downstream task performance with a negligible increase of computing cost 1 the paper is concisely written and easy to understand however the contributions are not clearly presented in the introduction besides since the section of related work is put at the end of the paper the readers may get confused about how the proposed method differentiates from existing works 2 the issue of gradient magnitude mismatch is not well explained for example how the gradient magnitude mismatch affects the training moreover how the optimal weighting of residuals is connected with this issue is unclear as they are the main motivations of the proposed method it is suggested to provide a detailed explanation about how gradient magnitude mismatch optimal weighting connects with the proposed method 3 as shown in fig3 of 1 the preln with warmup can effectively tackle the problem of gradient magnitude mismatch inside the ffn of the transformer can you explain why the issue still emerges in transformers with a common learning schedule for example can you provide some intuitions on why gradients at early layers are much larger than at later layers in preln transformers as shown in fig3 gradnorm in only several layers are visualized 4 i appreciate that the ablation study is conducted to show the performance gain of each component in normformer however since four modifications are proposed to solve a single issue it is expected to see how these four modifications affect the gradient magnitude and how these modifications interact with each other 5 the effect of headscale parameters is overclaimed the readers cannot see any evidence showing that headscale parameters can adjust the importance of attention heads 6 are there other studies investigating training instability please discuss more in related work it is good to show the effectiveness of normformer through extensive experiments 7 the results seem convincing and competitive overall this paper proposes normformer to improve the preln transformer by introducing some modifications the proposed techniques are empirically successful but not well motivated i hope that the authors can firstly present the issue inside the preln transformer clearly and then provide a detailed discussion about why the proposed modifications can mitigate this issue docsepthis paper aims to improve pretraining prelayernorm transformers by alleviating two issues early layers have much larger gradients than later ones and naive residual learning cant provide optimal weighting to this end it proposes to add two layernorms after the multihead attention and the gelu nonlinear activation in ffn respectively it also adds learnable scaling coefficients for the ffn residual and the attention head outputs the four modifications are applied to both casual and masked language modeling with improvements observed in downstream tasks strengths 1 the paper is wellwritten and easy to follow 2 the four operations are easy to implement and applicable to many tasks using transformer architectures some visualizations eg figure 3 provide straightforward comparisons weaknesses 1 although the four operations are shown effective in accelerating pretraining convergence or boosting downstream performance they look more likely engineering tricks these tricks have been widely used in prior works and i dont see much insight in being used here 2 there is a lack of analysis of why the gradient mismatch issue exists in prelayernorm transformers and why the proposed four operations can alleviate it the paper mainly focuses 
on what it does rather than justifying why doing that 3 there are two scaling operations added in figure 1 but three are plotted in figure 4 it is unclear how the left two subfigures in figure 4 correspond to the scaled residual connection in figure 1 in summary this paper proposes to add four operations two layernorms and two scaling parameters in the preln transformer layers the operations are simple and can help stabilize pretraining and improve downstream performance in several nlp tasks however i think the method novelty is not enough moreover the paper doesnt provide convincing motivation why the four operations help handle the issues
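as a concrete reference point for the four modifications debated above, a schematic pytorch sketch of the feed-forward sub-block with the extra layer norm and the learnable residual scaling is given below; the attention-side changes (post-attention layer norm and per-head scaling) would be added analogously, and the placement and initialization details here are simplifications rather than the exact recipe from the paper

```python
import torch
import torch.nn as nn

class NormFormerStyleFFN(nn.Module):
    # pre-LN feed-forward sub-block with the two extra pieces the reviews discuss:
    # a layer norm after the nonlinearity and a learnable scale on the residual branch
    def __init__(self, d_model, d_ff):
        super().__init__()
        self.pre_ln = nn.LayerNorm(d_model)
        self.fc1 = nn.Linear(d_model, d_ff)
        self.act = nn.GELU()
        self.ffn_ln = nn.LayerNorm(d_ff)                      # extra norm on the activation
        self.fc2 = nn.Linear(d_ff, d_model)
        self.res_scale = nn.Parameter(torch.ones(d_model))    # learnable residual scaling

    def forward(self, x):
        h = self.fc2(self.ffn_ln(self.act(self.fc1(self.pre_ln(x)))))
        return self.res_scale * x + h
```

writing it out this way also makes the parameter overhead explicit: one extra layer norm and one d_model-sized scale vector per block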
### Summary:
|
this submission proposes a few small changes to the preln transformer architecture that enable training with higher learning rates and therefore can result in faster convergence the changes include the addition of two layer norm operations as well as a learnable head scaling operation in multiheaded attention the proposed operations add only a small computational overhead and should be simple to implement experiments are conducted on language modeling and masked language modeling with improved results demonstrated at various scales and according to various evaluation procedures the paper also includes a good amount of ablation study as well as some analysis reviews on the paper were mixed and a great deal of changes were made to the paper during the rebuttal period to summarize the concerns and recommendations reviewers requested better connection between the proposed changes and the purported issue gradient scale mismatch between earlylate layers better analysis of why gradient scale mismatch is a major issue and investigation of where it comes from better comparison to existing techniques that allow for higher learning rate training of transformers additional experiments on different model types and ideally different codebasesimplementations i think overall this is a solid submission since it proposes a simple change that is reasonably likely to be helpful or at least not harmful however i think that there are enough concerns with the current draft and there were enough changes made during rebuttal that this paper should be resubmitted to a future conference i would suggest the authors take the final updated form from this round add additional motivationanalysisexperiments and resubmit and i suspect a positive outcome
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper proposed and encoding model using large fmri datasets to predict the brain response to visual images in different regions of the ventral stream of the visual cortex they test the model with different paradigms such as generalization to outofdistribution images generalization to other subjects spatial tuning in the early visual areas separability of objects in higher visual areas alignment with human perception and shape bias they then visualized the preferred images by different voxels predicted by the model the main novelty here are the shape bias and the ability of the model to achieve these above things concurrently each one of these was observed already in previous models but not together the paper is quite dense and the authors attempted their best to fit within the page limit strengths the paper succeeds in having a strong prediction accuracy for ood images while many other models have managed to do that it is not an easy task shape bias is one novelty over the traditional dnn models for cv the model shows multiple similarities with the ventral stream properties instead of focusing on one or two weaknesses the paper is quite dense leading the authors to omit many necessary information and to have figures too small and cluttered literature review in this field is lacking many important relevant references that discuss the idea of the similarity of hierarchy between humans and dnns to name a few horikawa tomoyasu and yukiyasu kamitani generic decoding of seen and imagined objects using hierarchical visual features nature communications 81 2017 115 abdelhack mohamed and yukiyasu kamitani sharpening of hierarchical visual feature representations of blurred images eneuro 53 2018 tsunoda kazushige et al complex objects are represented in macaque inferotemporal cortex by the combination of feature columns nature neuroscience 48 2001 832838 schrimpf martin et al brainscore which artificial neural network for object recognition is most brainlike biorxiv 2020 407007 schrimpf martin et al integrative benchmarking to advance neurally mechanistic models of human intelligence neuron 1083 2020 413423 federer callie et al improved object recognition using neural networks trained to mimic the brains statistical properties neural networks 131 2020 103114 the paper tests many different conditions with their model leading the paper to be increasingly complex since each condition is done via a separate analysis one suggestion is to have a summary table of test and how it is measured and baselines so one can understand the big picture each of the baseline model is trained very differently which makes them not really good baselines viewing figure 6a does not support the statement in the text that preferred stimuli increase in complexity encoding models are inherently limited by their dependence on the images solely unless other mental or prior information are added testing on a new dataset limits their ability to test the model against other models that focus on classification the appeal of encoding models is to understand how the model is able to achieve these feats that are similar to human visual cortex which is not addressed here docsepthis work builds endtoend voxellevel encoding models for fmri recordings of a popular publicly available dataset of natural scene images nsd the authors perform extensive comparisons against networks optimized for image classification on different types of data and show that their proposed responseoptimized networks are better able to generalize to heldout 
subjects are more sampleefficient and better generalize to out of distribution images the work also offers several interpretability analysis of the representations learned by the responseoptimized networks and shows that they appear to align with human similarity judgments and have more shape bias than a taskoptimized network strengths this work is very rich it has a good technical contribution and the scientific interpretation is topnotch i found the following highlights about the proposed method especially important good generalization to heldout subjects good generalization to out of distribution images increased shapebias an improved sample efficiency weakness no discussion of related work this works needs to be addressed in order for this work to be published works that specifically integrate brain recordings during the training of models are important to mention and discuss with respect to the current work here are a few that im familiar with and some of them even use the same image dataset seeliger et al 2021 plos comp bio httpsjournalsplosorgploscompbiolarticleid101371journalpcbi1008558 styves et al 2022 biorxiv httpswwwbiorxivorgcontent10110120220121477293v1 khosla and wehbe 2022 biorxiv httpswwwbiorxivorgcontent10110120220316484578v1 in language schwartz et al 2019 neurips httpsproceedingsneuripsccpaper2019hash2b8501af7b64d1aaae7dd832805f0709abstracthtml using an alexnet that is trained on imagenet as that taskoptimized model baseline doesnt seem like the best choice the nds images are from the coco dataset which has a different distribution from imagenet and therefore the test nsd images are much more out of domain for the taskoptimized model than they are for the responseoptimized model a more fair comparison as a taskoptimized model is to either train the same cnn architecture minus the linear layer on top to classify the nds images or at least finetune alexnet on the training nds images similarly in the generalization to imagenet experiments the authors compare to a baseline model that has been randomly initialized a stronger and fairer baseline here would be a taskoptimized model that has the same architecture as the responseoptimized and was trained with the same nsd images but with a different objective image classification rather than brain prediction the organization can be improved currently the materials and methods section is very long ends on page 5 and details all experiments by the time that the corresponding results come around the reader has forgotten the necessary details from the experiment i suggest describing the materials and main models in this section and then moving the description of each analysis to the corresponding place in the results this work directly integrates brain recording measurements into the training of deep learning models it should discuss any possible undesirable biases that can arise from this for instance any additional societal bias docsephere the authors leverage the natural scenes dataset to train up a hypothesisagnostic encoding model in contrast to recent taskoptimized models they train a series of cnns one per visual area and linear readouts one per voxel to predict neural responses to a large set of images then generally speaking the authors test two things about these models 1 whether they generalize in a nontrivial way to other subjectsdatasets and 2 whether the model learns interesting features in the data that may not be present in taskoptimized models strengths 1 the writing is clear and concise in most places 2 the 
approach to the problem using an encoding model with a shared convolutional core is relatively novel 3 some of the results impressive especially when the authors focus on generalization and sample efficiency weaknesses 1 it is difficult to contextualize the encoding model especially the results of figure 1 without a more systematic set of baseline models while it is not necessary to comprehensively survey the field a la cornet kubulius et al 2019 it is essential to know if the gains the authors see are unique to their proposed encoding model here i would include top performing models for imagenet eg densenet and resnet along with the older alexnet that they have already included 2 as mentioned above the authors argue that their encoding model 1 generalizes well and 2 learns nontrivial features and response properties they present compelling evidence for 1 but 2 is less clear for example figure 4 presents measuredpredicted retinotopic parameters but without the taskoptimized network for comparison are these features unique to their encoding model even if the encoding model is a better match it is explicitly optimized to better match neural data so it hard to know whether this is truly surprising or not 1 same point holds for figure 5 i would expect this result in the taskoptimized model as well 1 more broadly i think the point about generalization is interesting maybe their encoding model does learn a set of features that are generically useful in a visual system however the fact that the model learns certain features is potentially circular since the model is optimized to mirror neural responses in other words it would be more surprising for a taskoptimized network to have neurallike responses while here the model is explicitly optimized to have neurallike responses 3 related to the previous point im a little confused about the shape bias analysis presumably the neural responses that the model is fit to have a shape bias though i could not find this analysis in which case the model should inherit a shape bias via the fitting procedure perhaps i am misunderstanding this but it seems that the results in figure 6 are potentially circular minor 1 why is the prediction performance pooled across visual areas higher than the prediction performance for each visual area on its own 2 include units in all axes for convenience eg rdm similarity is kendalls tau i would include in the y axis where relevant eg supp fig 3a 3 should the taskoptimized network also be in figure 6b middle silhouettes transfer performance one major limitation that i found missing in the discussion is that many of the important features found in their encoding model eg correspondence to perceived image similarity are also found in the neural data which the models are fit to the discussion would benefit from clarifying what unique contribution their encoding model brings to the table
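for readers unfamiliar with response-optimized encoding models, the following is a generic sketch of a shared convolutional core trained end-to-end to predict voxel responses; the layer sizes and the single flat readout are placeholder assumptions and do not reproduce the authors' architecture

```python
import torch
import torch.nn as nn

class ResponseOptimizedEncoder(nn.Module):
    # shared convolutional core with a linear readout per voxel, trained directly
    # on image -> measured-fMRI-response pairs
    def __init__(self, n_voxels):
        super().__init__()
        self.core = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.readout = nn.Linear(64 * 4 * 4, n_voxels)

    def forward(self, images):
        return self.readout(self.core(images))

# training objective: mean-squared error between predicted and measured voxel responses
# loss = nn.functional.mse_loss(model(images), voxel_responses)
```

the circularity concern raised above is visible in this objective: agreement with the measured responses is exactly what is optimized, so neural-like response properties of the fitted model are partly built in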
### Summary:
|
the authors leverage a large fmri dataset to fit hypothesisagnostic datadriven encoding models to brain data as opposed to taskoptimized ones as done in most of the related work while there is general agreement that this is not a groundbreaking advance there is already similar work published the reviewers agreed that the paper was technically strong good generalization to heldout subjects good generalization to ood images increased shape bias improved sample efficiency the interpretation of results for neuroscience was rated as topnotch the paper received a rare unanimous clear accept the ac recommends acceptance
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper proposes to apply a simple adaptive dimensional reduction technique on trust region bayesian optimization turbo for high dimensional bayesian optimization task which has low effective dimensionality the algorithm improves the probability of containing the optimum in the embedding space over previous work hashingenhanced subspace bo hesbo by enforcing the hashing from the input dimension to the lowdimensional embedding to be evenly distributed the paper demonstrates the efficacy of the proposed algorithm with a comprehensive empirical study strengths 1 wellmotivated algorithm design with the theoretical guarantee of its optimality over the existing hesbo in terms of probability of containing the optimal candidate in its embedding 2 the proposed embedding is wellintegrated into the existing hdbo framework turbo with appropriate algorithmic modification ablation studies prove the efficacy of each part of the proposed algorithm theoretical analysis in the appendix also shows that it inherits turbos convergency weaknesses 1 some of the definitions and concepts are confusing in lines 5962 the definition of the active subspace seems to be incomplete if for all x z fx gz z then the effective dimensionality could always be 1 regardless of the structure of x i suggest defining the subspace as in definition 1 of the rembo paper which defines the effective subspace and effective dimensionality for definition in lines 157 162 could the author specify the reference the classical definition of countsketch structure assumes the pairwise independence of the hashing functions charikar et al 2002 this is how hesbo implements its dimension reduction 2 the classical countsketch techniques rely on the pairwise independencies of the hashing functions assigning input dimensions to different bins and the fourwise independencies of the sign assignment function as mentioned the hesbo chooses the uniform random hashing which follows the assumption the independency of hashing seems to be violated when baxus enforces that the input dimensions be evenly distributed to bins therefore the original theoretical guarantee of the countsketch structure probably does not hold in this case the paper doesnt inherit the independency assumption on its countsketch definition and lacks a discussion on this violation of independency of the hashing function in practice the violation of independency might not hurt much as in expectation hesbo also distributes the input dimensions evenly to target dimensions and the downstream performance of the proposed embedding is strong however the missing independency downgrades the theoretical contribution of the paper due to its violation of classic countsketch structure and lacks a discussion on it especially when considering that the work is motivated to improve upon hesbo while doesnt inherit its advantage strictly 3 as far as i know turbo doesnt work well sometimes when only one trust region is operating and that motivates the turbom which is the mtrustregion variant of turbo turbo1 in the original paper baxus lacks discussion over this problem i expect a brief discussion over its choice of m and the corresponding implementation 4 the descriptions could be confusing as the notations are not well specified between line 138 and line 147 and deferring algorithm 2 which is a critical part of the algorithm to the appendix might not be a good choice reference charikar m chen k and farachcolton m 2002 july finding frequent items in data streams in international colloquium on automata 
languages and programming pp 693703 springer berlin heidelberg not applied docsepthis paper proposes baxus which extends trbo trust region bayesian optimization by using not just trust regions but a linear embedding of varying dimensionality the linear embedding is constructed via a countsketch style embedding that produces effectively bins of data in each region of the input space the dimensionality of the embedding can grow or shrink as part of the trbo procedure much like how in standard trbo the bounds of the space can change originality this approach is a pretty clever combination of trbo and embedding based approaches for high dimensional bayes opt however it ends up being very building block based which isnt necessarily a bad thing computing the success probability of the embedding and tying the primary idea back to well studied count sketch techniques does seem to be original to me quality the experimental evaluation and ablation studies do seem to be performed of a reasonably high quality and the plots are quite easy to parse the one set of high dimensional bayesopt benchmarks that is probably missed in the experimental evaluation is that of lamcts wang et al 20 which uses a classifier to construct the trust regions for trbo it might be too much to ask for a comparison using baxus inside but it seems like a fair comparison to use against standard trbo lamcts lamcts also has a slightly distinct and more interesting set of benchmarks which use mujoco environments which are somewhat more interesting than simply adding dimensions into test problems like branin fig 5 is very nice to see and seems to convincingly demonstrate that the baxus embeddings perform better downstream than the hesbo embeddings even inside of trbo clarity its really nice that the probability of computing the optimum is computable for this approach even if its a bit more complicated than the hesbo expression the plots and writing are overall easy to parse however the error bars and the marks eg triangles on the plots are confusing to me what are they and what do they represent significance better methods for high dimensional black box optimization are definitely quite significant and core to the neurips community i would have liked to have seen a more challenging benchmark task such as control of mujoco environments see quality comment references wang l fonseca r and tian y 2020 learning search space partition for blackbox optimization using monte carlo tree search in advances in neural information processing systems volume 33 mostly yes negative societal impliations of bayesopt eg weapons or bioweapons design could also be mentioned docsepthe paper considers bayesian optimization for sparse axisaligned functions over highdimensional spaces hundreds of dims the proposed solution creates a lowerdimensional space by subselecting dimensions from the original input space the lowerdimensional space is searched with thompson sampling and trust regions the dimensionality of the lowerdimensional space is increased over the course of the proposed algorithm the paper shows that the probability of containment of the functions optimum by the proposed algorithms lowerdimensional space is no worse than a recent highdimensional bayesian optimization approach also for sparse axisaligned functions hesbo further a total input evaluation budget is appropriately split among the sequence of nested lowerdimensional spaces the paper provides results on highdimensional datasets originality the nested lowerdimensional spaces is a neat 
idea i wish this direction was exploredpushed further in the paper quality it wasnt obvious to me why rembo was not included in the experimental comparison as it seems to be a natural competitor in this setting clarity the setup and algorithm description are generally clear some of the choices described in the splitting strategy fail tolerance control and initial target dimension werent immediately intuitive i couldnt parse the extent to which these were necessary for the proposed algorithms effectiveness over competitors significance the general setting of highdimensional blackbox optimization is clearly important and effective general algorithms in this setting would be valuable for the community the assumption of the function being axisaligned is a strong one however the paper does cover limitations and societal impacts to some degree in the paper
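to illustrate the embedding construction at issue in these reviews, here is a small numpy sketch of a count-sketch-style matrix in which the input dimensions are split as evenly as possible across the target bins with random signs; the function name is made up for illustration and the splitting/growing schedule of the actual algorithm is not reproduced here

```python
import numpy as np

def balanced_sketch_matrix(input_dim, target_dim, seed=0):
    # one signed nonzero per input dimension, with the input dimensions split as
    # evenly as possible across the target bins (the even split is the point of
    # contrast with a uniformly random hash)
    rng = np.random.default_rng(seed)
    bins = np.array_split(rng.permutation(input_dim), target_dim)
    S = np.zeros((target_dim, input_dim))
    for b, dims in enumerate(bins):
        S[b, dims] = rng.choice([-1.0, 1.0], size=len(dims))
    return S

# a candidate z in the low-dimensional search space maps back to the original
# space via x = S.T @ z before the black-box function is evaluated
```

the independence question raised above is visible here as well: conditioning on an even split means the bin assignments are no longer pairwise independent as in the classical count-sketch construction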
### Summary:
|
new activesubspaces type approach for high dimensional blackbox optimization that works with a family of nested subspaces of increasing dimensionality with some theoretical control over the failure risk overall well written and complete with convincing experiments that indicate better performance than popular approaches like cmaes and random search as well as several recent methods please consider reviewer feedback on clarity and presentation for the final set of revisions
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper addresses the degeneration problem occurring in the twophase models for rationalization the authors identified that the learning speeds of predictor and generator are out of sync thus the predictor overfits to the uninformative pieces selected by the not yet welltrained generator leading to a suboptimal model the authors propose fr that uses a shared encoder to fold the two phases into one such that the learning speeds of two components can be aligned the authors claims are supported empirically and theoretically strengths 1 the observation and the idea presented in the paper is interesting folding two phases of training into one enforces that the learning processes of two components are compatible addressing the degeneration problem 2 the claims are well supported by both theoretical analysis and solid empirical results 3 the paper is generally well written and easy to follow the idea and motivation are presented clearly 4 the topic discussed in the paper is valuable to the broader community as it aims to improve the interpretability of nlp models weaknesses the paper did not thoroughly discuss the discrepancy when using a unified encoder the functions of two components are related but not identical while parts of the encoder can be shared indeed i cannot see why the unified encoder can work well if there are no other specialized layers in each components encoder yes the paper discusses the limitations of their work docsepthis paper investigates the rationalization for nlp models the authors analyze the relationship between generator and predictor in terms of working mechanism and propose folded rationalization that folds two phases of the rationalization into one via a unified encoder results show that fr improves over the existing baselines on beer and hotel review datasets pros the proposed approach is simple but effective for rationalization in nlp the authors conduct indepth analysis to show that the predictor in fr is regularized by the generator through the unified encoder experiments show that the proposed approach outperforms baselines by a large margin cons all experiments are conducted on two review datasets it would be better to show the effectiveness of fr on other tasks yes docsepthis paper proposes a new method for the task of rationalized classification where a latent discrete rationale is extracted and then used in a prediction module to render a classification decision the core idea of this paper is to share the encoders for both the generator selector and the predictor modules this model is trained using a sparsity and coherence regularizer from prior work the paper presents some theoretical results on why this rationalizer structure should be effective results on two rationalized classification datasets beeradvocate and hotel reviews show that the proposed approach produces rationales that agree more closely with humanlabeled gold standards strengths the problem of rationalizing predictions is an important one being able to do this without any supervision is a worthy goal the paper compares against results from several pieces of recent prior work and shows strong improvements over these on the automatic evaluation of rationale extraction the proposed method is simple and could be widely adopted for this task as a result weaknesses im not convinced of the correctness of the theory in this work lets start with the definition of uninformativeness this is defined in terms of the conditional probability of the label given a particular token however i think 
definition 1 makes a very specific assumption about the structure of language that really doesnt hold what about the word not we can imagine that not is uninformative by the definition here perhaps it occurs in equal measure with positive and negative labels in the dataset but not good and not bad might be meaningful collocations that the encoder has to account for i believe this case breaks lemma 1 the behavior of the model is changed by inserting not moreover i dont see how lemma 3 holds for cases like this with a welltrained encoder i dont see derivations of either lemma from first principles these are simply asserted and i disagree with them as they are asserted this state of affairs is worse for pretrained transformer models where the holdover from the pretraining objective makes much of the reasoning about the objective for this particular problem moot anyway as the encoders behavior reflects the base language modeling objective as well even setting aside collocations i am also skeptical of lemma 3 based on the mathematics of the models themselves i would believe there are some idealized grus that this can be true for but has this been verified on grus for real problems i think the burden of proof is on the authors to show that there isnt some small latent state update eg counting uninformative words going on at each timestep otherwise if the forget gates are nonzero even for some positions in the latent state vector this lemma wont hold as a result i dont buy the theoretical results in this paper setting aside the theory the core contribution of this paper is pretty minor largely an engineering one of sharing an encoder and given that most neural modes are overparameterized im not sure how much sharing an encoder actually constrains the representations that get learned even empirically the model does seem to work better possibly due to a multitasking effect but again im not convinced that its for the reasons the paper states the models used in this paper are very weak 200dimensional grus on 100dimensional glove embeddings are simply not representative of the stateoftheart in 2022 robertaelectradebertaetc encoders should be considered i think the empirical results about multitask learning of the predictor and generator actually depend on decisions like this and i could imagine the outcomes being different with these models the paper uses word overlap as an automatic metric but does not do any human evaluation of the produced rationales i could imagine that rationales from other methods may disagree with human ground truths but still illustrate the prediction in a meaningful way i think human evaluation would strengthen the empirical claims the conclusion and limitations section is quite vague with claims like existing works have proposed many extra modules to solve the degeneration problem which demonstrate great effectiveness the limitations listed here are largely conceptual things that havent been done yet as opposed to the in my opinion quite real limitations of the present technique what datasets it can apply to the assumptions it makes about classification the limited scope of models considered etc
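for concreteness, a schematic sketch of the shared-encoder idea under discussion is given below; the dimensions follow the 100-d glove / 200-d gru setup mentioned above, but the soft masking, the omission of the sparsity and coherence regularizers, and all names are simplifying assumptions rather than the authors' implementation

```python
import torch
import torch.nn as nn

class FoldedRationalizer(nn.Module):
    # selector and predictor built on one shared encoder
    def __init__(self, vocab, d=100, h=200, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        self.encoder = nn.GRU(d, h, batch_first=True)   # single shared encoder
        self.select = nn.Linear(h, 1)                    # token-level rationale scores
        self.classify = nn.Linear(h, n_classes)

    def forward(self, tokens):
        states, _ = self.encoder(self.emb(tokens))
        z = torch.sigmoid(self.select(states))           # soft mask over tokens
        masked, _ = self.encoder(self.emb(tokens) * z)    # re-encode the selected text
        return z.squeeze(-1), self.classify(masked[:, -1])
```

the single encoder is what couples the learning of the selector and the predictor; whether that coupling alone prevents degeneration is exactly the empirical and theoretical question the reviews dispute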
### Summary:
|
there is consensus between reviewers that this is a worthwhile paper suitable for this venue i also appreciate the extensive back and forths between the authors and the reviewers that seem to have improved the paper during the reviewing period
|
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the autocast dataset was introduced a dataset for measuring the ability of neural networks to forecast future world events autocast poses a novel challenge for large language models and improved performance could bring large practical benefits additionally autocast questions were written by experienced forecasters and are always unambiguous given the full question description in contrast to related work well structured and mostly well described easy to read the difference from forecastqa is significant so the contribution is considerable it is not clear how to use sourcelinks field and how cc news corpus is used some article urls from the dataset are unavailable some redirect to another page some require subscription docsepthis paper introduces autocast dataset that contains thousands of highquality forecasting questions and an accompanying temporal news corpus the paper is interesting and wellwritten the paper is interesting and wellwritten the dataset has highquality forecasting questions and it has an associated temporal news corpus the problem is somehow welldefined it design some simple baselines and the results seem makingsense i am not sure if all these questions are predictable given the temporal news corpus especially top ten news based on a ranker could the topranked news provide enough inductive bias to predict the questionsthis might not make sense from agnosticism point of view fid temporal does not significantly outperform fid static one might wonder what does this really mean for incorporating the auxiliary crowd predictions in line 235 should table 3 be table 2 docsepthe authors propose a new autocast dataset for future events forecasting under the qa format the raw data comes from public forecasting tournaments which ensures the quality of the event forecasting questions the dataset also includes news corpora corresponding to each question that can be leveraged by forecasting models a benchmark including human performance is also introduced this paper introduces a new dataset based on public forecasting tournaments it expands the existing future events forecasting qa research it will bring new research material to the related communities 1 about human performance comparison based on my understanding the human aggregated performance is provided by domain experts or enthusiasts crawled from forecasting tournaments different categories probably have different experts with lots of domain knowledge however for a deep learning model it is very difficult to gain knowledge of all domains so the large gap between models and human performance reported in the table is no surprise from my point of view a more reasonable human performance collection at the current stage is collecting the performance from multiple crowdsourcing workers and each worker should answer questions from multiple categories at least one question from one category using news content before the closing date as input information i agree that this extra human performance collection would take much extra effort but i think it would also be a more practical indicator for evaluating further autocast models 2 it seems that no information about the distribution of the question types ie tf mcq and numerical in autocast it could further strengthen the selection of the combined score metric for example if the majority of the dataset is mcq as well as tf questions a model with a strong ability in these types but worse numerical performance would also have a relatively high combined score 3 why not report the error bars of 
the investigated models the results would be more convincing if different random seeds are applied for running experiments multiple times 4 for unifiedqa according to the paper the reported result is the zeroshot performance without any finetuning is there a particular reason for not finetuning unifiedqa on autocast but t5 is finetuned docsepin this paper the authors describe their efforts to 1 curate a dataset of forecasting questions taken from forecasting tournaments and answers drawn from a news corpus and 2 provide an early evaluation of popular modeling for at answering these forecasting questions great care was taken so that there is no timebased data leakage ie machines can simulate learning facts about the world in the order they occurred the framing of the problem is very convincing ie the paper makes a strong argument that this dataset will be important for a variety of applications and fields the introduction does a great job motivating the work and summarizing the key contributions related work is focused on situating related technical work on forecasting ir qa as noted below the clarity of the work is also great which enhances the overall contribution important well documented contribution of a novel dataset with very high potential for real world impact clear and concise paper that explains why this dataset is useful and provides details about how it was collected and how similar datasets might be collected in the future the paper is convincing that this is a pretty important domain for research ie it is fairly important to have discussions in the community about benchmarks in the forecasting space early stage evaluations are also provided which may save researchers working with this dataset substantial time minor the main paper is sometimes a bit vague in discussing the specific scraping and manual verification process specifically any additional details that can be provded about how the human curation of data was performed will probably be helpful for this kind of work given that this 1 a dataset contribution and 2 this is the kind of task where people with different backgrounds may make different curation decisions as i understand full details on the questions side of the data about how the questions were curated may not be possible as the platforms hiring moderators to curate questions are private platforms that said the general topic of scraping and verification is briefly discussed several times in the supplement and code is available for replication the info provided in section 3 is very likely enough for independent replication additionally were there major differences in the curation aspects of the main dataset vs the calibration dataset it may be worth noting that the test set only lasts for one year in slight contrast to the example given of using the collapse of the soviet union as a motivating example just to avoid the small chance that readers may think that examples like soviet union example appear prominently in this test set finally it may help to discuss whether a more complete backtesting approach from finance is feasible here does the lm training cost make this prohibitively expensive should researchers in this space be aiming to work towards comprehensive backtests in the long run all these concerns are very minor hopefully these are helpful in the strengthening the paper but id argue for accepting the paper even with very minor revisions docsepthe authors propose a dataset of questions and answers for predicting world events the dataset includes the 
questions the start and close dates the answers if the questions have been resolved the timeseries of crowd forecasts by human experts and an accompanying news corpus sorted by date collected from commoncrawl the authors state that they intend to update the dataset regularly with new questions and resolutions this dataset has diverse question types including multiple choice questions mcq truefalse questions tf and numerical questions the dataset covers diverse topics such as politics economics society science and a category for everything else called other and altogether they have a total of 44 subcategories the questions have different time horizons some questions are resolved while some of them are still unresolved the dataset is collected from public human forecasting tournaments and amounts to 6706 questions in total the authors balance the truefalse questions by adding negated questions using gpt3 and then manually checking for correctness moreover they include an auxiliary dataset of numerical questions that includes about 30000 questions this auxiliary dataset provides the groundwork for measuring calibration traintest split is done based on the resolution date of questions so that questions in the test set all resolve after a specific date and the training set includes the questions that resolve before that specific date the authors prevent the models from overfittingmemorization by using only those news articles that were available before the resolution date of a forecast the authors train two sets of models the first set is models without retrieval including unifiedqav2 and t5 and the second set is retrievalbased models including fid static and fid temporal that use news articles fid static uses the top 10 retrieved articles while the fid temporal uses the top 1 article and in addition leverages the intermediate crowd predictions before the question is resolved as auxiliary supervision their results show that their temporal method neither harms nor helps they find that retrievalbased models that take advantage of relevant news articles outperform the models without retrieval and become even more effective as the number of parameters increases however all these language models still perform significantly worse than the aggregate human forecast which is a time series of forecasts from start to resolution date 1 the problem that the authors address is of great importance forecasting future world events can be helpful in a variety of domains 2 the authors show that the stateoftheart language models are significantly worse than the human baseline and this dataset can open up a great opportunity for researchers to improve the language models for forecasting 3 the paper is very well written and details are explained 4 this dataset is more accurately collected and is more realistic compared to prior works 5 the authors take the necessary precautions to avoid leakage from the training set to the test set most of the weaknesses are minor 1 in the metrics section it is worth mentioning that for tf and mcq metrics the higher is better while for the numerical metric the lower is better to avoid confusion for readers also in the combined score metric which is tf mcq numerical2 why is the denominator two instead of three 2 are collected news from commoncrawl only used for training or do you use them for the test as well also are they only used for retrievalbased models please clarify these and state them in the paper 3 would you please clarify why you use the top 1 article in the fid temporal 
and top 10 articles in the fid static that does not seem to be a fair comparison 4 on lines 209211 you say that you limit the article length to 512 tokens please specify whether you consider tokens to be words or characters 5 in table 2 column numerical why 195 is bold given that it is not the max 6 x axis in figure 1 does not have a label although it can be inferred by the reader it would make it more clear to indicate that the x axis is the date also is the prediction percentage for model blue line an average of multiple models since the graph is the performance for a single binary question i assume it is an average of multiple models but it needs to be clarified what models 7 additional graphs similar to figure 4 illustrating the fid static model and the two models without retrieval would be helpful for understanding the differences among all these models docsepthe submission provides clear motivated and well written contribution to establishing datasets that test the ability of learning algorithms to build models forecasting future world events of importance from available text news corpora the authors gather the dataset autocast comprising 6707 questions by making use of three public forecasting tournaments metaculus good judgment open and cset foretell that feature human forecast experts crowd questions contain truefalse multiple choice and numerical ground truth answers text news corpora is curated from common crawl and is organized by date allowing to mimic the conditions under which humans make past forecasts avoiding leakage from the future information authors further present baseline approaches for learning forecast models from the presented dataset they compare approaches without retrieval based on zeroshot unifiedqav2 and t5 finetuning and with retrieval based on bm25 for retrieval and on fusionindecoder fid for encoding retrieved news articles the retrievalbased method is either considering final output and top10 ranked retrieved articles fid static or takes into account human crowd predictions evolving in time between start and closing of a question fid temporal using gpt2 autoregressive model for intermediate forecast outputs comparing baseline methods to human crowd performance authors observe advantage for retrievalbased methods fid static fid temporal against nonretrieval ones that perform close to random guess level on the task the observed gap to the human performance is substantial which make authors call for further development of forecast learning algorithms the authors do not observe substantial difference between fid static and fid temporal retrievalbased methods with the conclusion that further efforts have to be undertaken to make better use of auxilliary losses for the forecasting task further the authors provide an additional dataset to calibrate output of the forecasting models on numerical answer where they source the questions from nlp datasets covering diverse topics and with answers varying across orders of magnitude they observe that with increasing model size using debertav3 model backbone calibration improves along with other performance metrics the presented work deals with important and challenging question of constructing datasets that can be used to test the ability of learning algorithms to build models that forecast future world events from large corpora of text data authors take care that training procedure and traintest split respect the notion of forecasting problem and cleanly separate past and future data provided baselines check dataset 
sanity and show clear gap to human crowd performance providing motivation to use the dataset for further development and improvement in this important direction one concern in my opinion is a small scale of the questions dataset few thousands of those may by far not be conclusive enough to evaluate performance of stateofthe art algorithms at scale further limitation minor as it can be readily lifted is in my eyes decision to use only news text corpora for training arguably other available sources of information eg stock exchange behavior weather text corpora like wikipedia dated before the relevant events to forecast may be helpful to enhance learning docsepthe authors introduced a judgmental forecasting question dataset autocast accessible along with a news corpus not released yet with date information for the purpose of facilitating the research of using language models to forecast future events the authors also provided several benchmarks constructed using various popular language models and timeseries models as well as human crowd predictions the benchmarks performance also revealed that retrieving relevant articles could help improve the performance of machine forecasts while there still existed a large improvement space for machine forecasts when compared to human crowd forecasts although there are several existing mature forecasting datasets autocast along with the news corpus with date information is still a progress as they provide an environment for language models to simulate the real forecast environment if the news corpus is released and can be verified significance of the contribution the autocast dataset along with the news corpus have the following features which as a combination is unique 1 forecasting questions are professional ie coming from three wellknown public forecast tournaments gjp metaculus and cset foretell 2 forecasting questions and news articles have date information so that they allow ml models to simulate the real forecasting environment and avoid information leakage relevance to the broader research community i believe forecasting future events is an important topic that have many realworld applications the provided dataset will help the general forecasting community as well as the machine forecasting community accessibility and accountability autocast is easily accessible on github with most fields clearly defined significance of the contribution it seems that the dataset and the news corpus are all tailed from existing datasets and projects however the effort should still be meritted accessibility and accountability the accompanying news corpus dataset is not released according to the github page resolved date is missing in the dataset which i believe is very important as forecasts only make sense before resolution and the resolved date may be earlier than the given close date in other word the information leakage problem is not entirely solved if the resolved date is not provided the human crowd forecasts is an important feature as well as a good target to learn in this dataset however the authors did not clarify how the human crowd forecasts are generated or obtained it would be better if the authors introduced the distribution of the time scope of the forecasting questions eg how many of them are shortterm questions eg taking weeks to resolve and longterm questions eg taking one or two years to resolve
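to make the retrieval setup discussed in the reviews concrete, the sketch below shows a leakage-free retrieval step in the spirit of the fid baselines: only news published before a question resolves is eligible, and the top k articles are kept. it is an illustrative python sketch, not the authors' code; a plain term-overlap score stands in for the bm25 ranker mentioned in the reviews, and the field names, dates and example texts are hypothetical.

```python
# illustrative sketch: leakage-free retrieval as described in the reviews
# (rank only news published before the question's resolution date, keep top k).
from datetime import date

def retrieve_articles(question, corpus, k=10):
    """corpus: list of dicts with hypothetical keys 'date' and 'text'."""
    query_terms = set(question["text"].lower().split())
    # articles dated after resolution are excluded, which is how the dataset
    # avoids leaking future information into the forecast
    eligible = [a for a in corpus if a["date"] < question["resolution_date"]]
    scored = sorted(
        eligible,
        key=lambda a: len(query_terms & set(a["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

if __name__ == "__main__":
    corpus = [
        {"date": date(2021, 5, 1), "text": "central bank raises rates"},
        {"date": date(2022, 3, 1), "text": "election results announced"},  # too late, excluded
    ]
    q = {"text": "will rates rise in 2021", "resolution_date": date(2021, 12, 31)}
    print(retrieve_articles(q, corpus, k=1))
```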
### Summary:
the paper presents a novel dataset on the task of judgmental forecasting autocast was collected by scraping questions from public forecasting tournaments accompanied by a news corpora corresponding to each question the reviewers agree that this is a high quality and novel dataset with diverse questions and tasks it has a potential for high impact given the benefit of the datasets the paper is clearly written accompanied by a detailed data sheet after consulting the ethics chairs i would like to recommend conditional acceptance of this paper pending minor revision the authors clarify that they received permission from the source websites when possible directly addressing the concerns of ethics reviewer nk2v and reviewer qpti this additional context needs to be mentioned explicitly in the paper and supplementary materials authors should state the following in an appendix or footnote to the main text full permission for metaculus ceo gaia dempsey csetforetell website does not have a terms of use but the data is only being used for academic noncommercial purpose which is consistent with csets purpose as a policy research organization and fair use the terms of use for the common crawl website are being used however to further compliance the link to the ccnews dataset is removed from our github page the dataset is fully reproducible by including the script to download and filter ccnews on github the terms of use of the involved platforms addresses the concern for the presence of potentially offensive content
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the term negative transfer is quite confusing especially when it is used together with the term domain adaptation in domain adaptation negative transfer means transferring knowledge from a source domain to a target domain in a bruteforce manner may result in worse performance compared with that obtained by only using the target domain data in this paper the negative transfer problem is different from that in domain adaptation the authors just tried to model the proposed negative transfer learning problem as a domain adaptation problem however the defined problem setting of negative transfer is quite strange where for the target dataset neither instances nor labels are available expect for the probability of ptyp ya and there is relationship between yp and ya which is different from that of the source dataset it is not convincing that why the proposed problem setting is important in practice the proposed algorithm is designed based on two strong assumptions 1 dt is drawn from a distribution that is nearest to that of ds and 2 pty is given in advance regarding the first assumption it is not reasonable and it is hard to be satisfied in practice for the second assumption it is also too strong to be satisfied in practice though the authors mentioned that when pty is not given in advance pty can be further assumed to be of the uniform distribution or the classes are uncorrelated however these are just adhoc solutions in practice if pty is unknown and it is very different from the uniform distribution or labels are highly correlated the proposed algorithm may perform very poorly regarding the details of the algorithm it just simply applies an existing model dann in addition the theoretical part is a wellknown theorem there are some typos on page 3 figure 3a figure 2a on page 4 figure 3b figure 2bdocseppros provides illustration and math formulation for the problem of generalization beyond the correlation of labels and correlated but irrelevant attributes forming the issue as a domain adaptation problem or specifically a special kind of probability shift is a clever idea cons lacks comparison to existing work making features invariant to attributes to improve generalization is not a new idea cf 1 xie qizhe et al controllable invariance through adversarial feature learning advances in neural information processing systems 2017 2 if you consider the domain difference between various domains to be similar to attribute then this is also related li haoliang et al domain generalization with adversarial feature learning proc ieee conf comput vis pattern recognitcvpr 2018 3 there are other works that although do not aim at improving generalization use very similar formulation to decouple attribute from features eg a lample guillaume et al fader networks manipulating images by sliding attributes advances in neural information processing systems 2017 b mitigating unwanted biases with adversarial learning which the authors cite but do not offer any comparison or differentiation to improve the paper these related work should be discussed in related work section and if applicable compared to the proposed method in the experiments rather than a very brief mention of one of them in section 33 and no comparison use of the term negative transfer is problematic this is a more important shortcoming but people may disagree with me as far as i know this term is used to describe a source task being used to help a different target task but result in a negative gain in performance torrey lisa and jude shavlik transfer 
learning which is inherently a multitask learning setting however in this paper it refers to the effect of unrelated features being used in classifier resulting in a worse generalization the existence of this issue does not involve a second task at all if this is not intended please use another phrase if the authors think that these are one and the same i would strongly argue against this proposition also there is no negative transfer technique as implied by page 2 end of the first paragraph section 32 and 33s analysis is somewhat disjoint from the method the analysis boils down to given a different correlation between primary and aux tasks you can compute the distribution of inputs which will be different from the source so lets make the aux task unpredictable to get domain invariance and the method goes on to remove auxiliary task information from the shared feature space this is disjoint from either eq 1 picking a target domain closest to source and theorem 1 the bound for domain adaptation one way to improve the paper is to analyze how these analysis are affected by the adversarial training one of the selling points is that the method can adapt to trainable features in deep learning however in the experiment fixed extracted features from pretrained resnet is used anyway if so a way to improve the paper is to compare to the traditional methods cited in page 2 paragraph 1 by applying them on fixed extracted resnet features docsep the authors study the problem of negative transfer in representation learning and propose to use the formulation proposed by ganin lempitsky 15 for domain adaptation to reduce negative transfer instead of defining the domain classification as the adversarial task to learn a domainindependent representation they collect a set of classification problems irrelevant to the main task as the adversarial tasks and aim to learn a representation that focuses only on the primary task there are very little changes compared to the proposal by ganin lempitsky 15 but the application to solve the problem of negative transfer is interesting my main concern on the whole argument of the paper is whether the benefits we see in the experiments come from the elimination of negative transfer or just come from having more training labels from different tasks available in the main formulation of the approach equation 7 the authors try to learn a feature representation that works well for the primary task but works poorly for the auxiliaryirrelevant tasks if we switch the sign for lambda then it becomes very similar to traditional multitask learning i wonder how the multitask formulation would compare against the adversarial formulation proposed by the authors there are reasons to suspect the multitask formulation will also work better than the logistic regression baseline since more labels from different tasks are available to learn a better joint representation it is not clear whether the improvements come from modeling the auxiliary tasks using negative transfer where the adversarial approach should beat the baseline and multitask approach or just come from having more information where both the adversarial approach and the multitask approach beat the baseline but have similar performance from a practical point of view it is not easy to decide what prediction tasks are irrelevant for example in the birds dataset i would expect the color and patterns in the body parts to have some correlations primarycolor upperpartscolor underpartscolor wingcolor etc in the case of occlusion of the 
relevant body parts i could make a guess on the color based on the colors on other parts of the bird in the ideal case for the current method i would expect the adversarial approach proposed to learn a representation that mask out all the irrelevant parts of the animal or irrelevant contextual information apart from showing improved prediction performance have the authors perform analysis on the image activation patterns similar to the motivation example in figure 1 to see if the new approach actually focus on the relevant body parts of the animals the definition of auxiliary tasks are described in the second last paragraph of 33 but it would be clearer if it is also mentioned how they are defined in the experiments section i went through the whole experiments section having trouble interpreting the results because i could not find the definition of adversarial tasks overall i like this paper since it attempts to solve an interesting problem in computer vision but i would like to see the above question on comparison with multitask learning answered or some image activation pattern analysis to provide a more solid argument that the improvements come from elimination of negative transfer
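the reviews repeatedly compare the paper to dann-style adversarial feature learning; the sketch below illustrates that generic construction (a gradient reversal layer between a shared encoder and an attribute classifier), not the authors' exact architecture. the dimensions, the fixed-feature input and the example labels are all made up for illustration.

```python
# minimal dann-style sketch: the attribute head is trained normally, while the
# reversed gradient pushes the shared encoder to remove attribute information.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

encoder = nn.Sequential(nn.Linear(512, 128), nn.ReLU())
primary_head = nn.Linear(128, 10)    # primary task, e.g. bird species
attribute_head = nn.Linear(128, 5)   # irrelevant attribute, e.g. background type

x = torch.randn(4, 512)              # stand-in for fixed resnet features
y = torch.randint(0, 10, (4,))
a = torch.randint(0, 5, (4,))

z = encoder(x)
loss = nn.functional.cross_entropy(primary_head(z), y) \
     + nn.functional.cross_entropy(attribute_head(GradReverse.apply(z, 1.0)), a)
loss.backward()  # attribute head learns to predict a; the reversed gradient makes
                 # z uninformative about a for the encoder
```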
### Summary:
this paper proposes reducing so called negative transfer through adversarial feature learning the application of dann for this task is new however the problem setting and particular assumptions are not sufficiently justified as commented by the reviewers and acknowledged by the authors there is miscommunication about the basic premise of negative transfer and the main assumptions about the target distribution and its label distribution need further justification the authors are advised to restructure their manuscript so as to clarify the main contribution assumptions and motivation for their problem statement in addition the paper in its current form is lacking sufficient experimental evidence to conclude that the proposed approach is preferable compared to prior work such as li 2018 and zhang 2018 and lacks the proper ablation to conclude that the elimination of negative transfer is the main source of improvements we encourage the authors to improve these aspects of the work and resubmit to a future venue
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
the paper proposes an extension of particle belief propagation which allows the factor parameters eg neural network weights for factors modeled as nns to be learned using standard stochastic gradient descent this algorithm is then applied to several continuousdomain state estimation tasks involving articulated objects eg hand pose estimation from images the main contribution of the paper is adapting an existing line of nonparametric belief propagation methods to this setting which allows partial endtoend learning the learning is only partly endtoend as the particle resampling stage is nondifferentiable instead the proposed approach supervises the belief at every step of the algorithm but does not backpropagate through the entire inference procedure opting to instead maximize the belief of the gt values for the unobserved labels at each step experimental results on synthetic and real datasets for articulated object state estimation show that the proposed algorithm is able to learn meaningful nns for the unary and pairwise potentials while also producing reasonable uncertainty estimates the proposed approach sometimes outperforms the baselines eg an lstm but in the task of parametric human hand tracking it lags behind the state of the art by a large margin strengths s1 the proposed method produces meaningful uncertainty estimates improving state estimation robustness under uncertaintyocclusion s2 the writing is good s3 the paper potentially opens up future avenues of work that build on the proposed framework improving learning efficiency tackling a wider range of problems etc limitations l1 limited empirical performance the proposed approach seems to struggle to outperform a much simpler lstm baseline while the proposed dnbp does provide uncertainty estimates which are unavailable with the lstm there are many ways to add train and calibrate such outputs for nns as well however such baselines are not evaluated l2 the reasons why hernando et al outperform the proposed approach are not discussed at all in spite of the large gap between the two methods l3 training is not endtoend as backpropagation does not occur through the particle resampling stage l4 the comparisons to related papers which may for example also use nn factors but perform inference in a different way are a bit too brief in my opinion for readers who are not experts in graphical models it may be difficult to establish what this paper does differently and what its strengths and limitations are wrt the rest of the literature suggestions persection related work could you please elaborate on the differences between the setting of the current paper and that of eg xiong ruozzi that paper also seems to learn the deep potentials end to end i did not read that paper in detail but is the main idea that they use a variational method rather than belief propagation results can you please make the text on fig 3 a bit bigger especially the legend currently the legend is difficult to read while i think that there is definitely a lot of potential in the paper there are several areas that remain open to improvement first the comparisons to related work are a bit unclear to me and explaining the tradeoffs between different kinds of inference methods especially through the lens of datadriven factor learning would help position the paper better in this area of ml second the empirical evaluation is not very thorough and its results dont improve much over simple baselines like an lstm a broader evaluation including on tasks where formulating 
handengineered potentials is more difficult would strengthen the paper in my opinion because of this i would recommend rejection for now unless the above issues are fully clarified in upcoming communications that being said i am likely to be the least experienced of this papers reviewers when it comes to graphical models so please take this into consideration when evaluating my reviews this is addressed to both the authors as well as to the acs docsepsupervised learning of markov random fields mrf by mle requires to compute pairwise and unary marginals of the current model estimate at each iteration of the likelihood maximisation this task is not tractable except for mrfs on trees with finite hidden state spaces the authors consider mrfs with infinite state spaces propose to model the pairwise and unary potentials by neural nets and aim at developing an approximated belief propagation bp approach for learning these networks bp is known to be exact on trees but is not tractable for infinite state spaces the authors apply their method on a challenging task of hand pose estimation in rgbd image sequences taken from the firstperson perspective strengths the paper aims at a differentiable approach for supervised learning and prediction in mrfs with infinite state spaces weaknesses it is well known that loopy bp inherently fails in estimating pairwise marginals for mrfs on graphs with cycles the examples analysed in experiments are mrf on trees the authors do not clearly state that in such cases there is no need for loopy bp the proposed bp approach employs a series of approximations which are not concisely explained some of them are given in the supplements only eg what is meant by pairwise sampling network sec 4 and line 6 of the algorithm in the supplement the pull strategy approximation is adopted from desingh et al 2019 the quality of this approximation is not evaluated further empirical approximations of the bp approach which is itself an approximation are explained in the supplement even though they are part of the proposed approach again their approximation quality is not evaluated overall i can imagine that such approximations of approximations can be useful if the main goal is to solve a particular application and they are unavoidable however this is not adequate if the goal is to develop a conceptual approach the authors mention the work xiong ruozzi 2020 where the task of computing unary and pairwise marginals is solved approximately by variational inference and then combined with learning of mrf potentials given by neural networks this approach has the potential to learn mrfs on graphs with cycles i would have expected at least an experimental comparison with this method furthermore it is well known that supervised learning of mrfs with finite state spaces can be approached by using the pseudolikelihood estimator instead of the ml estimator this obviates the need of computing unary and pairwise marginals during learning i would expect that it is much easier to adapt the pl estimator to the case of infinite state spaces the artificial examples studied in the experiments and seemingly also the human hand tracking use mrfs on trees this would obviate the need for loopy bp because in this case a standard twopass bp can be used instead it seems that the proposed approach is worse wrt estimation than the neural network baseline fig 8 this holds both for the frame by frame variant and the tracking variant of the proposed approach it remains unclear to me how the latter tracking variant 
works precisely the proposed approach for learning mrf parameters by combining neural networks with an approximated belief propagation method is not concisely explained and involves in my view too many adhoc approximations the experimental results are not convincing enough to compensate the conceptual weaknesses docsepthis paper enables endtoend learning of the factors of a graphical model for nonparametric belief propagation nbp methods by using neural networks it calls this method differentiable nonparametric belief propagation dnbp the aim is to replace domainspecific hand crafted factors with learned factors by replacing each factor with a neural network compared to vanilla neural net based solutions dnbp also reports uncertainty the method is evaluated on a couple of toy examples of articulated pose tracking as well as using hand pose estimation on the fphab dataset the method is compared against learned neural network based baselines strengths for readers who arent sufficiently knowledgeable on bp techniques the paper does a good job of going over the basics of bp the paper is clearly written easy to understand weaknesses the main issue i have with the paper is that i find the results of the paper weak 1 the chosen baseline is lstm and it is trained with an extremely small batch size 6 this doesnt provide enough confidence that the dnbp is any better than a simple neural approach quantitative differences are very small as are the toy datasets 2 lstm is not the natural choice for graphical models graph neural networks gnn would be far more appropriate in fact gnns are shown to beat bp techniques significantly in this uncited paper yoon et al inference in probabilistic graphical models by graph neural networks 3 why is dnbp not compared against nbp or some other bp method the paper mentioned above does compare against bp for instance the paper doesnt give a good reason to choose this method over others 4 qualitatively hand tracking results are significantly worse than any other recent hand tracking method i know so it is hard to judge how well the method works the jitter is extremely high which would be the opposite of what id expect from a method that takes uncertainties into account 5 id expect the uncertainty estimates to be elongated along the bones for the hand tracking results for some reason dbnp seems to somehow claim that keypoints can be significantly outside of the hand area especially for a depth based method where background pixels are very far away this doesnt make any sense likewise for the toy dataset results with occlusion dnbp doesnt seem to do a better job than the simplistic lstm 6 other neural net based methods that can measure uncertainty are not mentioned for instance kumar et als luvli face alignment estimating landmarks location uncertainty and visibility likelihood paper uses gaussian and laplacian loglikelhood losses and their uncertainty estimates align much better with the face features eg elongated along the jawline here we dont see the same property also that paper has a good methodology to check how informative the uncertainties are i dont see a similar analysis in this paper while im not an expert on bp techniques and havent verified the math hence the low confidence ive found the paper to be weak on experimental results the lstm baselines are very weak and some important citations are missing uncertainty visualizations for hands do not seem sensible and the uncertainty estimates are not analyzed in depth there are no comparisons with other bp methods 
as well as other uncertainty prediction techniques even if the approach seems sound the results dont justify adopting this challengingtoimplement method over any other regular method docsepthe method dnbp proposed in the paper considers a nonparametric belief propagation method where the unary potential functions the pairwise potential functions and the particle diffusion function are modeled as feedforward neural networks it allows them to learn the parameters of these networks using labeled data which is the main contribution of the paper dnbp is evaluated on three tasks simulated double pendulum simulated articulated spider firstperson hand action in each application dnbp is not able to outperform the considered baseline but it is able to provide measures of uncertainty associated with its predictions strengths 1 the idea of replacing the handcrafted potential functions with feedforward neural networks is interesting 2 the paper is well written and easy to read 3 the implementation of the gradients is not straightforward ie it does not simply consist of building the computational graph and backpropagating through it weaknesses 1 the main weakness to me which is also pointed out in the conclusion is the fact that the graph structure needs to be provided ie it is handcrafted moreover if i am not mistaken the paper only considers trees ie graphs without loops these two limitations are likely the reason why dnbp is not able to outperform the considered baseline in each application it is even significantly outperformed by the method hernando et al on the hand pose tracking task 2 in section 4 pairwise potential functions it looks like the pairwise sampling network psisdsim output is deterministic shouldnt it be stochastic since it is a sampling network please provide more details about this network 3 the main benefit of dnbp over classical neural networks is its ability to associate a measure of uncertainty to its predictions but the three applications that are considered do not accentuate this benefit i suggest to evaluate dnbp on an application where its ability to associate a measure of uncertainty is very importantuseful currently the weaknesses outweigh the strengths especially the fact that the handcrafted graph structure is probably the reason why dnbp is not able to outperform the considered baselines
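to make the core idea under review concrete, the following toy sketch shows unary and pairwise potentials implemented as small neural networks and used to reweight the particles of one node given a single neighbour, so that gradients can flow into the potential networks. it is only a rough illustration under assumed state and observation shapes, not the dnbp algorithm itself (no resampling, diffusion or multiple messages).

```python
import torch
from torch import nn

# unary potential psi_s(x_s, observation) and pairwise potential psi_st(x_s, x_t),
# each a small mlp whose parameters are trainable by gradient descent
unary = nn.Sequential(nn.Linear(2 + 16, 32), nn.ReLU(), nn.Linear(32, 1))
pairwise = nn.Sequential(nn.Linear(2 + 2, 32), nn.ReLU(), nn.Linear(32, 1))

def reweight(particles_s, particles_t, weights_t, obs):
    """particles_*: (n, 2) particle states, weights_t: (n,) neighbour weights, obs: (16,) features."""
    n = particles_s.shape[0]
    obs_rep = obs.unsqueeze(0).expand(n, -1)
    log_unary = unary(torch.cat([particles_s, obs_rep], dim=1)).squeeze(-1)
    # message from neighbour t to node s, approximated by a weighted sum over t's particles
    pairs = torch.cat([particles_s.unsqueeze(1).expand(n, n, 2),
                       particles_t.unsqueeze(0).expand(n, n, 2)], dim=-1)
    pair_scores = pairwise(pairs.reshape(n * n, 4)).reshape(n, n).exp()
    message = (pair_scores * weights_t.unsqueeze(0)).sum(dim=1)
    belief = log_unary.exp() * message
    return belief / belief.sum()  # normalised, differentiable w.r.t. both potential networks

weights = reweight(torch.randn(8, 2), torch.randn(8, 2), torch.ones(8) / 8, torch.randn(16))
print(weights)
```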
### Summary:
all reviewers concur that the paper has promise but fails to deliver on that promise the idea of learning potentials based on dnns is appreciated but the evaluation of the contribution is considered lacking by all reviewers in addition reviewers note that the training is not differentiable which the rebuttal acknowledges is future work i do not reject the paper simply for failing to beat a deep learning baseline but for having chosen applications which do not even test the papers hypotheses reviewers note that the models are tree structured so loopy bp is not tested despite the revised papers claim that the inference strategy is compatible with graphs containing cycles
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper introduces groupsort the motivation is to find a good way to impose lipschitz constraint to the learning of neural networks an easy approach is atomic construction which imposes a norm constraint to the weight matrix of every network layer although it guarantees the network to be a lipschitz function not all lipschitz functions are representable under this strong constraint the authors point out that this is because the activation function of the network doesnt satisfy the so called jacobian norm preserving property then the paper proposes the groupsort activation which satisfies the jacobian norm preserving property with this activation it shows that the network is not only lipschitz but is also a universal lipschitz approximator this is a very nice theoretical result to my knowledge it is the first algorithm for learning a universal lipschitz function under the architecture of neural network the wasserstein distance estimation experiment confirms the theory the groupsort network has stronger representation power than the other networks with traditional activation functions admittedly i didnt check the correctness of the proof but the theoretical argument seems like making sense despite the strong theoretical result it is a little disappointing to see that the groupsort doesnt exhibit any significant advantage over traditional activation function on image classification and adversarial learning this is not surprising thoughdocsepsummary a paper that states that a new activation function which sorts coordinates in a vector by groups is better than relu for the approximation of lipschtiz functions pros interesting experiments lots of different problems evaluated with the technique cons the groupsort activation is justified from the angle of approximating lipschitz transformations while references are given why lip is good for generalisation i cannot see why groupsort does not go against the ability of deep architectures to integrate the topology of inputs see below the proof of theorem 1 requires polishing see below experiments require some polishing detail the proof of theorem 1 has three problems first in the main file argument since relu is not differentiable you cannot use the partial derivative maybe a sub differential second in the rhs after the use of the cauchyschwartz inequality no equation numbering you claim that the product of all three norms larger than 1 implies each of the last two is 1 this is wrong it tell nothing about the the value of each only about the product of each which then make the next two identities a sufficient but not necessary condition for this to happen and invalidates the last identity last the theorem uses a three lines appendix result c which is absolutely not understandable push this in the proof make it clear section d1 proof of theorem 2 the proof uses group size 2 over a vector of dimension 2 this unless i am mistaken is the only place where the group sort activation is used and so the only place where groupsort can be formally advocated against relu if so what about just using relus and a single group sort layer somewhere instead of all group sort have the authors tried this experimentally if i strictly follow algorithm 1 then groupsort is carried out by partitioning the d indexes in g groups of the same size this looks quite arbitrary and for me is susceptible to impair the capacity of deep architectures to progressively integrate the topology of inputs to generalise well table 3 tends to display that this is indeed the case as fullsort 
does much worse than relu table 5 replace accuracies by errors to be consistent with other tables in the experiments you do not always specify the number of groups table 4 docsepthe paper proposes a new sorting layer in neural networks that offers some theoretical properties to be able to learn network which are 1lipschitz functions the paper contains what seems to be a nice contribution but the manuscript seems to have been written in a rush which makes it full of typos and very hard to read this unfortunately really feels like unfinished work just to name a few please check the use of citep and citet see eg szegedy ref on page 3 unfinished sentence in this work page 3 somewhere at the bottom of page 4 hence neural network has cannot to lose jacobian norm etc although i would like to offer here a comprehensive review i consider that the authors have not done their job with this submission
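for reference, a minimal groupsort activation as described in the reviews (partition the d coordinates into equal groups and sort within each group) can be written as follows; this is an illustrative reimplementation assuming a pytorch-style tensor api and contiguous-index grouping, not the authors' code.

```python
import torch

def group_sort(x, group_size=2):
    """x: (batch, d) with d divisible by group_size; sorts within each group of coordinates."""
    b, d = x.shape
    grouped = x.view(b, d // group_size, group_size)
    # per-sample sorting is a permutation of the coordinates, which is the property
    # the paper relies on for norm preservation
    return torch.sort(grouped, dim=-1).values.view(b, d)

x = torch.tensor([[3.0, 1.0, -2.0, 5.0]])
print(group_sort(x, group_size=2))  # tensor([[ 1.,  3., -2.,  5.]])
```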
### Summary:
this paper presents an interesting and theoretically motivated approach to imposing lipschitz constraints on functions learned by neural networks r2 and r3 found the idea interesting but r1 and r2 both point out several issues with the submitted version including some problems with the proof probably fixable as well as a number of writing issues the authors submitted a cleanedup revised version but upon checking revisions it appears the paper was almost completely rewritten after the deadline i do not think reviewers should be expected to comment a second time on such large changes so i am okay with r1s decision to not review the updated version future reviewers of a more polished version of the paper will be in a better position to assess its merits in detail
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
this paper consider multilabel text classification problem and propose a cross attention transformer encoder to model the correlation between latent labels and input text sequence the hidden states of those latent labels are concatnate as input to layers of mlp for the classification head the experiment results show marginal gain over the baselines strengths encouraging experiment results that show marginal gains over baselines on two small scale xmtc datasets weakness lack of complexity analysys what is the time complexity with respect to number of labels during training and inference how scalable is the proposed method can it be used in extreme multilabel classificiation problems where number of labels is million or more such experiments are missing the rationale of latent labels are not convincing what does the latent labels mean does it serve as a semantic cluster that consists of multiple labels how do you find such latent labels otherwise how can you claim you are modeling the correlation between labels the experiment section should also mention the difference in model parameters are the proposed methods using more number of parameters this paper propose a crossattention transfomrer architecture for the multilabel text classification problem such modeling architecture is not new in mltc literature in addition the rational of latent labels are not techincal sound and justified time complexity with respect to number of labels is also missing finally the author should consider experiment on largescale multilabel datasets such as wiki500k and amazon3m given those reasoning i would vote this paper being marginal below the acceptance threshold docsepthis paper proposes to implicitly model label correlations in multilabel text classification different from previous studies eg treebased models that describe label correlations explicitly this paper appends latent labels to the beginning of each document and feeds it into a bert classifier these latent labels are randomly initialized and the concatenation of their output is used for classification the authors conduct experiments on two benchmark datasets aapd and rcv1 experimental results show that their proposed method outperforms several baselines for multilabel text classification ablation studies further show that using latent labels is better than using actual labels in their framework technically the idea of appending multiple latent tokens to the beginning of each document is not new as far as i know it was first proposed in 1 the bertxml model for multilabel text classification and later in 2 note that in both papers this is just a small trick instead of their key novelty so there may be even earlier work already proposing this trick that being said the novelty of this submission is rather limited it is quite confusing why these latent tokens can be claimed as labels and can implicitly model label correlations essentially they can be viewed as multiple cls tokens or probes to text semantics in 1 and 2 their explanation is that the label space is quite large eg 10k so one cls token ie a 768dimension vector may not be informative enough to predict the relevant labels in your paper i cannot see intuitive explanations or experiments supporting your claim of implicit label correlations some important baselines are missing including xmlcnn 3 and attentionxml 4 significance tests are missing it is unclear whether your improvement in tables 1 and 2 is statistically significant or not please run each experiment multiple times and report 
standard deviation or pvalues in section 42 it is confusing to append true labels to the input text sequence during training no doubt this cannot perform well because the encoder already knows the labels and will not focus on the remaining part however during inference you no longer have such label information 1 xun et al correlation networks for extreme multilabel text classification kdd 2020 2 zhang et al match metadataaware text classification in a large hierarchy www 2021 3 liu et al deep learning for extreme multilabel text classification sigir 2017 4 you et al attentionxml label treebased attentionaware deep model for highperformance extreme multilabel text classification neurips 2019 the key idea has been proposed in previous studies so the novelty is rather limited the intuition is not well explained docsepthis paper addresses the task of multilabel text classification by modeling the label correlations implicitly different from the previous works that explicitly model the label correlations such as the label embedding methods this paper proposes modeling the label correlations via latent labels the proposed method outperforms the baselines on two multilabel text classification benchmarks in the reported experimental results recent interest for multilabel classification lies in how to model the label correlations instead of modeling the label correlations explicitly this paper suggests that it is more effective to implicitly model the label correlations via latent labels the reported experimental results show that the proposed llem outperforms the other baselines and the empirical study shows that pretraining latent label embeddings cannot improve the results if i understand correctly the latent labels can be considered as the expanded cls tokens prepended to the sentencestokenized text since the cls token is intended to represent the sentence pretraining the latent label embeddings may not capture the label correlations and lead to performance loss what concerns me most is how the classification benefits from the latent labels modeling the label correlations explicitly is intuitive and explainable such as the label embedding methods it would be interesting to have more empirical analysis such as how the number of latent labels influence the classification results in addition the paper mentions the labelcorrelation overload but lack of further discussions more detailed theoretical and empirical study could be given such as how llem learns the label correlations in an adaptive manner with few a priori limits typo given a text t containing m sequential word tokens w0 w1 wm1 wi the subscript of the last token should be m this papers proposes a simple method to model the label correlations implicitly via latent labels the reported experimental results show the proposed method is effective whereas the method lacks of theoretical and empirical analysis in summary this paper is not good enough for iclr docsepthe paper presents a method that uses latent label representations to model label correlations implicitly for the multilabel text classification mltc task the method concatenates a set of randomly generated latent labels to input text tokens then the method uses this as the input to the bert model at last the contextual encodings of these latent labels are used to generate predictions for the actual labels the model has been tested and compared against the laco algorithm 1 that sets the sota on the aapd and rcv1v2 datasets and outperforms laco using hamming loss and microf1 especially 
the proposed method has even better performance than the baseline laco algorithm on the lowfrequency labels and intensivelabel samples 1 enhancing label correlation feedback in multilabel text classification via multitask learning strengths the model has been tested and compared against the laco algorithm 1 that sets the sota on two widelyused multilabel text classification datasets aapd and rcv1v2 and outperforms laco using hamming loss and microf1 especially the proposed method has even better performance than the baseline laco algorithm on the lowfrequency labels and intensivelabel samples weaknesses the papers overall contribution and impact seem quite limited the idea is very similar to laco 1 and the papers contribution is that it encodes the latent label rather than the actual label baseline models there are many recent papers on xmc extreme multilabel classification investigating how to utilize the label correlations although these papers focus on the mlc cases where there are thousands or millions of labels these methods are also applicable to general mlc task eclare 2 and vanilla bertbased attentionxml 3 without the treebased hierarchical structure might be very strong baseline models i am curious to know the performance of these models i might have misunderstood something here the author can clarify the choice of baseline models or clarify my misunderstanding in the rebuttal some questions on the algorithm itself label semantics is not utilized in the model design since the embeddings of the latent labels are randomly initialized how to interpret the latent labels k the number of latent labels is a hyperparameter but how to choose k there is no ablation study on tuning k in the experiments section the authors mentioned laco can be less effective and even impossible when the number of actual labels are very large however the authors only show the experimental results of aapd and rcv1v2 datasets which have 54 and 103 labels respectively there are popular datasets 4 in the xmc community for example rcv12k eurlex4k and amazoncat13k if the authors can show the proposed method is applicable to these xmc datasets as well the advantage over laco can be justified and the model can have even bigger impact in the xmc community 1 enhancing label correlation feedback in multilabel text classification via multitask learning 2 eclare extreme classification with label graph correlations 3 attentionxml label treebased attentionaware deep model for highperformance extreme multilabel text classification 4 httpmanikvarmaorgdownloadsxcxmlrepositoryhtml i would like to reject this paper since overall contribution and impact seem quite limited also the authors neglect recent advances in related fields docsepthis paper proposes a novel multilabel text classification method named llem that jointly encodes a document and latent labels with smaller number than actual labels and tries to better model label correlations implicitly and impose less a priori limits compared with previous stateoftheart works the method is conceptually simple but outperforms the stateoftheart results on two widely used benchmarks the main strength of this paper lies in its conceptually simple but very effective idea of latent label encodings which is quite different from all previous methods using actual label encodings the new llem method is simple and straightforward and it naturally eliminates some adhoc designed tasks in the previous stateoftheart method laco but surprisingly achieves better performance the latent label 
encoding reminds me of the ptuning for acquiring a bestperforming prompt httpsarxivorgpdf210310385pdf since neural networks are inherently continuous discrete prompts can be suboptimal the ptuning also leverages trainable continuous prompt embeddings to serve as prompts fed as the input to the pretrained language models and then optimizes the continuous prompts using gradient descent as an alternative to discrete prompt searching they are two completely different nlp tasks but seem to share some common insight and they might provide a new line of thought for some interesting future research figure 2 shows the llem outperforms laco in every single group of data divided by label frequencies or numbers but corresponding discussions just present the observations of experimental results and the better performance does not seem to easily lead to the conclusion that llem is superior in labelcorrelation utilization as mentioned by the authors some further work is still required to investigate how the latent labels correlate with each other with the actual labels and the context and how they can be interpreted the paper was well written and organized and there is an extensive survey on recent work of multilabel text classification minor issues typos intresting on page 9 it is an interesting and useful study on multilabel text classification
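as an illustration of the latent-label construction the reviews debate, the sketch below prepends k trainable embeddings (effectively extra cls tokens) to the token embeddings and feeds the concatenation of their contextual outputs to a multi-label head. it uses a generic transformer encoder with made-up sizes instead of the paper's bert setup, so treat it as a rough approximation of the idea rather than the authors' model.

```python
import torch
from torch import nn

class LatentLabelClassifier(nn.Module):
    def __init__(self, vocab=1000, dim=64, k_latent=4, n_labels=54):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.latent = nn.Parameter(torch.randn(k_latent, dim))  # randomly initialised latent labels
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(k_latent * dim, n_labels)

    def forward(self, ids):
        b = ids.shape[0]
        x = torch.cat([self.latent.unsqueeze(0).expand(b, -1, -1), self.tok(ids)], dim=1)
        h = self.enc(x)[:, : self.latent.shape[0]]   # contextual states of the latent labels
        return self.head(h.reshape(b, -1))           # one logit per actual label (multi-label)

model = LatentLabelClassifier()
logits = model(torch.randint(0, 1000, (2, 16)))
print(logits.shape)  # torch.Size([2, 54])
```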
### Summary:
this paper proposes an approach for multilabel text classification the method consists of appending a few label tokens to the beginning of the text input instead of the traditional single cls token the paper shows improvements over a competitive baseline on two datasets reviewers agree that the novelty and contribution of the paper are marginal the method of appending extra fake tokens has been used in other works as a trick it is also unclear how adding a few extra tokens allows the model to represent label dependencies better the authors did not respond to the reviews so there was no further discussion
Below is given a review of a research paper from a conference journal. Please write a summary of the review.
### Review:
1 this paper deals with an interesting problem the proposed idea is somewhat novel 2 the organizations are clear minor things 1 the presentation can be further improved 1 there exist some grammar errors in the presentation 2 sometimes the pointers of the cited papers are not working well the readers feel difficulty to find the correct references when clicking the pointers 1the paper proposed a novel perspective by framing the problem of learning under shifting domains as a temporal domain generalization problem it works with the gradient by introducing a meta future gradient generator it is a direct solution towards the temporal domain generalization problem based on forecasting the future information for training train a meta future gradient generator using fgd update 2the overall presentation is clear and wellorganized for example the paper provides a specific statement in the background about the temporal domain generalization and bu a widely used updating pipeline the paper also provides a good illustration of three different updating methods in figure 2 presenting the difference among these methods which helps the reader understand their main ideas quickly 3 a theoretical generalization error analysis is provided the readers can also get some insights from their analysis 4 the experiments seem to be solid this paper also provides a good statement about the experiment including the training and evaluating procedure and different baselines and settings the experiments show the advantage of the proposed method 5 minor grammar mistakes we are more interested in make mfgg accurate at a small subset of the model parameter space interested in making pointers of the cited paper in your experiments baseline descriptions the pointers of your baselines are not working well docsepthe topic is relevant and the proposal is well described experiments seems to suggest that the approach is sound the experimental evaluation is done over data of short periods of time with subsamples of datasets the impact on real recommender systems need to be further explained the paper presents an approach for dealing with temporal domain shifts by forecasting the future gradient descent a paper can be greatly improved with a discussion of the impact of this approach in realword recommenders and domains in which this can be potentially applied also the use to forecast more gradual conceptdrift as the ones occurring in user profiling can be analyzed the novelty of the approach need to be highlighted in relation with stateoftheart research experimental results have also room for improvement and requires a more detailed explanation the selection of a small sample 10 of criteotb for example it is not justified enough the evaluation is done over short periods of time a few days the application of the approach over longer periods of time needs to be also discussed docsepsolid theoretical basis for the proposed approach very clear presentation and development of ideas each step in the process of formulating their solution is well motivated an enjoyable read good experiments good explanation of results figures 1 and 2 are brilliant visual summaries of the different approaches not a gamechanging technology incremental enhancements to solving a very specific learning problem may not be of interest to the broad community is a hybrid approach possible which combines the direct and indirect approaches you mention at the outset or is your approach in effect a hybrid approach docsepthe main contribution of this article is to propose 
a new solution to the problem of temporal domain shift in the context of recommendation systems the domain distribution changes over time and may also differ in different splits of the data as we have no associated prior knowledge of such shifts the model suffers generalization errors in this case the authors tactfully exploit meta learning to model this shift and present the idea of using generators to produce gradients the motivation is sensible their approach is also validated on two real datasets and a corresponding theoretical analysis is given see the detailed comments below interesting idea proposed by the authors but there are some concerns which i hope the authors will elaborate on central to this article is the idea of using meta learning methods to predict characteristics of future data that can be used in a direct or secondary way to help update the model the authors predict the gradient directly based on meta learning i dont know if it is possible to predict alternative types of knowledge such as label shift also is it possible to anticipate hyperparameters such as the learning rate rather than only the gradient simulating the generation of gradients for future data based on meta learning is predicated on the assumption that the domain bias is small however in practice it is possible that at some moment in time the domain shift may be substantial will the predicted gradient be unbiased the authors only compare to baseline methods with final experimental results i kind of wish the authors would perform some ablation studies for illustration since we can indeed acquire the true gradients once the future data arrives would the authors show the difference between the predicted gradients and the true gradients the authors performed experiments on two datasets i expect the authors to compare the methods on more datasets there is a lack of detail in the experiments which leads to reproduction difficulties in addition there are some typos and poor presentation throughout the submission and the authors need to carefully check them typos interested in make should be in making by apply should be by applying by and large the motivation solutions and theoretical analysis of this submission are commendable but there are still some issues that need to be addressed the authors need to settle them to make this submission more thorough
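the reviews above describe the gradient-forecasting mechanism only at a high level; a minimal sketch of the idea is given below. this is not the authors' actual implementation: the GRU-based generator, the flattened-gradient representation, and all names (FutureGradientGenerator, apply_flat_grad, meta_step) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical meta future gradient generator: given the gradients observed on
# recent time periods, forecast the gradient the model would receive on the
# next (future) period, so the model can be updated before that data arrives.
class FutureGradientGenerator(nn.Module):
    def __init__(self, grad_dim, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(grad_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, grad_dim)

    def forward(self, past_grads):          # past_grads: (1, T, grad_dim)
        h, _ = self.rnn(past_grads)
        return self.head(h[:, -1])          # predicted future gradient: (1, grad_dim)

def apply_flat_grad(model, flat_grad, lr):
    """Take one SGD-style step using a flattened (predicted) gradient."""
    offset = 0
    with torch.no_grad():
        for p in model.parameters():
            n = p.numel()
            p -= lr * flat_grad[offset:offset + n].view_as(p)
            offset += n

def meta_step(generator, gen_opt, past_grads, true_future_grad):
    """Once the next period's data is observed, fit the generator so that its
    forecast matches the true gradient computed on that period."""
    pred = generator(past_grads)
    loss = ((pred.squeeze(0) - true_future_grad) ** 2).mean()
    gen_opt.zero_grad()
    loss.backward()
    gen_opt.step()
    return loss.item()
```

note that forecasting the full flattened gradient would be impractical for a large recommendation model, which is consistent with the paper's quoted remark about making mfgg accurate at a small subset of the model parameter space.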
### Summary:
|
meta review the reviewers are all positive about this paper the proposed idea is somewhat novel and the paper is well presented one suggestion is to evaluate the proposed approach on more sophisticated datasets
|
Below is given review of a research paper from cnoference journal. Please write a summary the review.
### Review:
the paper proposes a domain generalization method the key idea is to remove domainspecific features by first training domainspecific classifiers for all domains then training an encoderdecoder network that transforms an input image to a domaininvariant version based on them specifically the encoderdecoder network is trained together with another domaininvariant classifier so that the domainspecific classifiers cannot discriminate the classes of transformed images but the domaininvariant classifier can classify them experiments show that the model improves the generalization performance over the baselines on pacs vlcs and officehome with several different backbone architectures 1 strengths the idea to learn domaininvariant features has been studied in several existing works among them the paper looks similar to epifcr 28 which also uses domainspecific classifiers to train a domaininvariant feature extractor this paper explicitly designs a training scheme to remove domainspecific features and the effect looks pretty clear the proposed method consistently improves the baseline on pacs vlcs and officehome with different backbone architectures the paper is clearly written and easy to follow 2 weaknesses i think maximizing the classifier uncertainty of a domainspecific classifier does not necessarily guarantee that the domainspecific features are removed for example assume an input image has two channels and the trained domainspecific classifier f1 uses only the first channel since the classifier is fixed the encoderdecoder network can cheat by including domainspecific features in the second channel so that it can perform well on the classifier f which was originally intended to be domaininvariant at the same time the encoderdecoder network can be trained to output a constant first channel so that the classifier f1 cannot discriminate classes properly a minor weakness is that the method needs to pass the input image through the encoderdecoder network which adds overhead at inference time the authors did not address the limitations and potential negative societal impact of their work docsepthis paper introduces a novel framework namely learning and removing domainspecific features for generalization lrdg allowing the trained model to extract only domaininvariant features in order to improve the outofdomain generalization performance in particular in lrdg for each source domain the training process of the domainspecific classifier is based on the classification loss and the uncertainty loss on other source domains the experimental results show that the proposed method can produce better classification accuracy compared to several existing methods originality the proposed framework for learning domainspecific features based on an autoencoder and uncertainty loss in the paper is novel in domain generalization quality the technical contributions of the paper are relatively insignificant due to the lack of a mathematical definition for domainspecific features and a theoretical guarantee for the capability to learn the real domainspecificity of the proposed lrdg although i appreciate the theoretical result about the generalization bound in theorem 2 it does not support the main claim of the paper clarity the paper is quite wellwritten and easy to follow significance my major concern is about the real benefit of the proposed method in particular compared to the direct domaininvariant learning approaches see eg 1 2 the lrdg framework seems to be more complicated and more computationally expensive since it
requires the training of an autoencoder moreover the main idea of lrdg is to learn and eliminate the domainspecific features while some recently published papers eg 3 show that effectively combining domainspecific and domaininvariant features is also useful for domain generalization 1 li ya tian xinmei gong mingming liu yajing liu tongliang zhang kun and tao dacheng deep domain generalization via conditional invariant adversarial networks in proceedings of the european conference on computer vision eccv pages 624639 2018 2 hu shoubo zhang kun chen zhitang and chan laiwan domain generalization via multidomain discriminant analysis in uncertainty in artificial intelligence pages 292302 pmlr 2020 3 bui manhha tran toan tran anh and phung dinh exploiting domainspecific features to enhance domain generalization advances in neural information processing systems volume 34 pages 2118921201 2021 na docsepthe manuscript describes a new framework for training models that are general for multiple domains domain generalisation their protocol is to 1 train one classifier for each domain which are made domainspecific by enforcing a random guess for domains that they are not specialised to 2 train an autoencoder where the decoding process is constrained to generate images that confuse all of the domainspecific classifiers and at the same time 3 train a new classifier on top of those domainagnostic images the paper is well written and easy to follow the presented idea for generating domaininvariant images and training domainspecific classifiers is intuitive and the entire package seems to bring about improvements on average to performance on the baseline datasets the authors have made comparisons against stateoftheart studies and compared across different backbones the use of the generalisation bound and of the proxy adistance pad is very welcome here for showing that the model truly generalises however the application used by the authors employs the features of an alexnet network for the unmapped domains and the features of their domaininvariant classifier for the mapped domains this makes it hard to assess whether their mapping was effective in reducing the hdivergence between domains it would probably be more adequate to use the same features for both to allow for a fairer comparison i understand this is not a simple choice given that the features of an alexnet will be more discriminative for the photo domain and this will probably also apply for other pretrained networks while the features of the domaininvariant classifier would be less discriminative for unmapped domains since it was trained on mapped versions even though it is the intuitive conclusion that the framework would remove domainspecific features it is not guaranteed to do so different classifiers could work with different manifolds of the original image space to do their classification and the uncertainty loss could force each classifier to use its own space to perform classification ignoring or not domainspecific features from other domains this is similar to how svms with different kernels could solve the same classification tasks and even use spaces with the same dimensionality this nonguarantee is important to state since the authors state that they reduce lambda lines 174176 and prove that they reduce e in the generalisation risk bound maybe those statements
could be toned down with wording such as probably reduces or indicates a reduction the limitations of the model are addressed in the conclusion the authors discuss the need for training separate classifiers for each source domain and that the model is not able to remove domainspecific features that pertain to unseen domains i would add the limitation that some domainspecific knowledge is useful for some domains as we can see that this model does not outperform others on more complex domains such as photo and sometimes art edit postdiscussion i will also add this here for completeness per the discussion below and with other reviewers the authors have acknowledged that the removal of domainspecific features is conditional on these features being learned by the domainspecific classifiers via the optimisation in eq 8 this is expected to work given the design of eq 8 but it is not guaranteed and therefore this assumption constitutes an expected limitation of the work that future readers and users should keep in mind
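to make the mechanism debated above more concrete, here is a minimal sketch of the kind of objective the reviews describe: the mapped image should be classified correctly by the domaininvariant classifier while leaving each frozen domainspecific classifier maximally uncertain. the names (lrdg_step, uncertainty_loss, lambda_u) and the choice of a kl divergence to the uniform distribution as the uncertainty term are assumptions for illustration; the paper may use a different formulation.

```python
import torch
import torch.nn.functional as F

def uncertainty_loss(logits):
    # push a frozen domain-specific classifier towards a uniform prediction
    log_probs = F.log_softmax(logits, dim=1)
    uniform = torch.full_like(log_probs, 1.0 / log_probs.size(1))
    return F.kl_div(log_probs, uniform, reduction="batchmean")

def lrdg_step(x, y, encoder_decoder, f_inv, domain_classifiers, lambda_u=1.0):
    # map the input to its (hopefully) domain-invariant version
    x_hat = encoder_decoder(x)
    # the domain-invariant classifier must still classify the mapped image
    cls_loss = F.cross_entropy(f_inv(x_hat), y)
    # every frozen domain-specific classifier should be left uncertain
    unc_loss = sum(uncertainty_loss(f_k(x_hat)) for f_k in domain_classifiers)
    return cls_loss + lambda_u * unc_loss / max(len(domain_classifiers), 1)
```

the sketch also makes the reviewers' reservation easy to see: the loss only constrains the outputs of the frozen classifiers, not the features of x_hat themselves, so removal of domainspecific features is encouraged rather than guaranteed.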
### Summary:
|
after the authorreviewer discussion reviewer 8k1l shows strong support for the paper and reviewer mbun finds most concerns addressed and upgrades the score to weak accept reviewer t9vv has some remaining concerns but does agree the proposed method seems to work empirically after careful consideration the ac recommends accepting the paper
|
Below is given review of a research paper from cnoference journal. Please write a summary the review.
### Review:
the paper proposes a method to transfer knowledge from multiple diverse models first it learns pathway routes that activate a different set of models depending on the input data later it aggregates the outputs to generate taskspecific predictions the authors propose an exploration loss to promote the utilization of all the models finally the paper discusses the efficient finetuning of models on the subset of data that activates them the paper claims stateoftheart performance for model hub transfer learning on computer vision and reinforcement learning tasks strengths the paper is wellorganized and clearly written the papers primary idea to facilitate knowledge transfer from multiple pretrained models is important the paper proposes a novel approach for transfer learning from multiple models hubpathway learns to identify the topk models best suited for an input datum the aggregator network generates a final prediction based on the output of the k models and the path weights the authors propose an interesting exploration strategy to improve the transferability and exploitation of activated models via selective finetuning the paper highlights the high efficiency compared to ensembles since not all models are active during the training and inference phases weaknesses the paper makes a broad claim on the effectiveness of the proposed method on heterogeneous model architectures however its unclear how it could be used for autoregressive models where subsequent output tokens depend on the previous ones the paper discusses replacing output head layers with taskspecific layers 119120 it is unclear how the aggregator network could be set up for heterogeneous models with different output dimensions there could be limitations on which heterogeneous model architectures hubpathway could support please refer to the weaknesses section docsepin this paper the authors propose a transferlearning method called hubpathway to leverage a library of pretrained models instead of a single pretrained model the proposed hubpathway utilizes the idea of datadependent transfer learning to find the best transfer learning path through multiple pretrained models for each input the output of each pretrained model is then aggregated to produce the final prediction the generator for generating the pathway is trained with a softmax policy with additional noise to encourage exploration the authors also propose an exploitation loss to better leverage the activated models extensive experiments on multiple tasks demonstrate the benefits of the proposed approach strengths 1 the paper is generally wellwritten and the organization is clear 2 the idea of using multiple pretrained models for transfer learning is interesting and the authors also propose a welldesigned method for leveraging multiple models 3 the experiments are extensive which show the effectiveness of the proposed method weaknesses 1 the additional storage is an overhead for using multiple pretrained models for transfer learning also the introduction of the pathway generator is another additional cost 2 the key experimental details should be included in the main text rather than in the supplementary the choices of the hyperparameters are also not clearly specified 3 several related papers on dynamic computation and transfer learning are not cited 4 although the authors stated that the experiments are repeated 3 times no error bar is reported postrebuttal thanks for the rebuttal most of my concerns are addressed in the rebuttal i encourage the authors to include the additional results in the final
draft to improve the significance of the paper i increase the score from 4 to 5 yes docsepthis paper tackles the problem of transfer learning from a zoo hub of pretrained models onto a specific endtask they treat the problem as datapoint dependent and route each datapoint through an adaptive subset of the available pretrained models they also finetune the pretrained models on the datasets that activate them this paper compares to some relevant baselines and shows improvement over them strengths 1 the method works for heterogeneous architectures which is not the case for previous approaches 2 improved performance over reasonable baselines 3 the method is relatively easy to understand weaknesses 1 though the authors claim that the inference and training time cost is better for hubtransfer over ensembling i am not 100 convinced note that the hubtransfer approach introduces significant overhead in terms of shuffling models in and out of memory as datapoints are adaptively assigned also with the exploration bonus and especially early in training it stands to reason that a batch of data could activate all models though each datapoint activates an individual subset the union could be the whole hub which presents a computational and memory hurdle to handle efficiently can the authors provide more details about how the approaches were implemented and benchmarked for table 5 2 the method performance in the case of outofdomain transfer is on par with the ensembling approach which may be simpler to implement 3 the method seems technically complex to implement and would involve swapping models in and out of memory unless one has access to a large memory budget 4 no error bars on results even though experiments were run for 3 seeds update after rebuttal score updated from 4 to 6 after rebuttal discussion 1 the method introduces extra hyperparameters over methods like ensembling docsepthis study proposes hubpathway a method for maximizing the knowledge gained from multiple pretrained deep neural network models in a transfer learning problem setting the basic idea of hubpathway is to select the best subset of models for each datum from the model hub to make predictions there are two main challenges of this approach i how to select the best path for each datum and ii how to aggregate knowledge from multiple models for the first challenge the paper introduces a gating function with randomness to achieve a variety of path options for each datum for the second challenge the paper develops a mechanism that combines the aforementioned gate function with a function for knowledge aggregation to output a prediction result for the hub as a whole in the experiments the paper compared the proposed method with prior methods on the tasks of classification facial landmark detection and reinforcement learning and showed that the proposed method achieves superior performance by dynamically routing paths on model hubs strengths research questions and motivations are clear the proposed method is quite simple so it is easy to reimplement the proposed method can handle heterogeneous sets of models unlike prior methods comparison with prior methods in a variety of experimental settings weaknesses insufficient explanation and evaluation of the need to change paths for each datum a slight lack of evaluation in a homogeneous setting larger memory size and execution time required at inference time the paper is wellstructured and clearly describes the research questions and motivations the proposed method provides new insights into this research area by
achieving knowledge aggregation from heterogeneous model hubs which has been difficult before and by showing better experimental results than models in homogeneous settings on the other hand comparisons with stateoftheart methods and verification of the validity of switching paths on a datumbydatum basis are insufficient these will be discussed in the questions in addition the proposed method raises new challenges in terms of memory size and execution time during inference however these issues might not be considered serious enough flaws to damage the main argument of the paper nothing to report
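since several reviews paraphrase the routing mechanism, a minimal sketch of datadependent topk routing with exploration noise is given below. it is an illustration under assumptions, not the authors' code: the linear gate, the gaussian noise, the per-model aggregation loop and the negative-entropy exploration term are plausible choices rather than the paper's exact design, and the sketch assumes every model already has a taskspecific head producing outputs of the same dimension.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PathwayRouter(nn.Module):
    """Scores every pretrained model for an input and activates only the top-k."""
    def __init__(self, feat_dim, num_models, k=2, noise_std=1.0):
        super().__init__()
        self.gate = nn.Linear(feat_dim, num_models)
        self.k, self.noise_std = k, noise_std

    def forward(self, x, models, training=True):
        # x: (B, feat_dim); models: list of callables mapping x -> (B, num_classes)
        logits = self.gate(x)
        if training:  # noise encourages exploring different pathways
            logits = logits + self.noise_std * torch.randn_like(logits)
        weights = F.softmax(logits, dim=1)               # (B, num_models)
        top_w, top_i = weights.topk(self.k, dim=1)
        top_w = top_w / top_w.sum(dim=1, keepdim=True)   # renormalize over active models

        out = x.new_zeros(x.size(0), models[0](x[:1]).size(1))
        for m, model in enumerate(models):
            mask = (top_i == m).any(dim=1)               # examples routed to model m
            if mask.any():
                w = (top_w * (top_i == m).float()).sum(dim=1, keepdim=True)
                out[mask] += w[mask] * model(x[mask])    # weighted aggregation
        return out, weights

def exploration_loss(weights):
    # keep the batch-averaged routing distribution close to uniform so that
    # every model in the hub gets utilized (one possible formulation)
    avg = weights.mean(dim=0)
    return (avg * (avg + 1e-8).log()).sum()              # negative entropy
```

in this form only the activated models run a forward pass for a given example, which is the source of the efficiency claim the reviewers question, while the exploration term addresses the model-utilization concern.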
### Summary:
|
the submission introduces an approach called hubpathway to leverage a diverse collection of pretrained models for transfer learning hubpathway trains a pathway generator network to route examples to various models in a datadependent manner and aggregates the outputs to produce taskspecific predictions noise is added to the pathway generator and its output is entropyregularized to encourage exploration and the activated models are also individually trained on the target loss to encourage exploitation the approach is evaluated on several image classification facial landmark detection and reinforcement learning tasks reviewers noted the papers clarity and writing quality and found the empirical evaluation extensive and convincing on the other hand they expressed doubts regarding hubpathways computational and memory complexity at training and inference time in particular in comparison to the alternative of using an ensemble of models the authors responded by citing existing results in the submission and providing new results in the revised appendix showing that hubpathway does in fact have lower computational complexity than an ensemble they also argued through additional results that the forward propagation through multiple models as opposed to holding the parameters of multiple models in memory is the main memory bottleneck which model ensembles also face and that hubpathway does better in that regard than model ensembles due to the pathway activation mechanism overall the authors response was satisfying to the reviewers and their consensus is that the submission should be accepted i therefore recommend acceptance
|