Google Landmark Recognition Challenge
Solution from Dawnbreaker on steroids in top-40
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Ok, it looks about right. However, I got a bit fewer images: 1217575.</p> <p>My history of training:</p> <p><code>
2018-05-23 11:11:18,929 INFO : A snapshot was saved to resnet50_[1]_87.09.pk.
2018-05-23 12:54:47,470 INFO : A snapshot was saved to resnet50_[2]_91.10.pk.
2018-05-23 14:38:18,451 INFO : A snapshot was saved to resnet50_[3]_92.51.pk.
2018-05-23 16:21:49,048 INFO : A snapshot was saved to resnet50_[4]_93.39.pk.
2018-05-23 18:05:18,594 INFO : A snapshot was saved to resnet50_[5]_95.35.pk.
2018-05-23 19:49:12,695 INFO : A snapshot was saved to resnet50_[6]_95.50.pk.
2018-05-23 21:32:39,548 INFO : A snapshot was saved to resnet50_[7]_96.09.pk.
2018-05-23 23:16:23,891 INFO : A snapshot was saved to resnet50_[8]_96.21.pk.
2018-05-24 00:59:55,941 INFO : A snapshot was saved to resnet50_[9]_96.37.pk.
2018-05-24 02:43:19,916 INFO : A snapshot was saved to resnet50_[10]_96.57.pk.
2018-05-24 04:26:49,113 INFO : A snapshot was saved to resnet50_[11]_96.59.pk.
2018-05-24 06:10:20,220 INFO : A snapshot was saved to resnet50_[12]_96.62.pk.
</code></p> <p>Yours:</p> <p><code>
2018-05-24 07:31:37,350 INFO : A snapshot was saved to resnet50_[1]_86.61.pk.
2018-05-24 14:02:38,859 INFO : A snapshot was saved to resnet50_[2]_90.71.pk.
2018-05-25 00:00:17,239 INFO : A snapshot was saved to resnet50_[3]_92.50.pk.
2018-05-25 06:15:14,521 INFO : A snapshot was saved to resnet50_[4]_93.35.pk.
2018-05-25 12:50:06,437 INFO : A snapshot was saved to resnet50_[5]_95.11.pk.
2018-05-25 20:13:08,194 INFO : A snapshot was saved to resnet50_[6]_95.27.pk.
2018-05-26 02:56:06,500 INFO : A snapshot was saved to resnet50_[7]_95.98.pk.
2018-05-26 09:55:24,890 INFO : A snapshot was saved to resnet50_[8]_96.06.pk.
2018-05-26 16:30:26,323 INFO : A snapshot was saved to resnet50_[9]_96.39.pk.
2018-05-26 23:09:15,832 INFO : A snapshot was saved to resnet50_[10]_96.50.pk.
2018-05-27 05:26:10,145 INFO : A snapshot was saved to resnet50_[11]_96.50.pk.
2018-05-27 11:48:44,202 INFO : A snapshot was saved to resnet50_[12]_96.59.pk.
</code></p> <p>I do the same.</p> <p>Why do you have 1419521 images in total? There are 1225030 lines in train.csv.</p> <p>I get 0.109 after 12 epochs. Nothing special. Maybe your data is incomplete or the directory structure is wrong? Can you upload your log_finetune.txt?</p> <p>TL;DR: ensemble of ResNet+DenseNet+SE-ResNext with non-landmark classifier. 39th place. The code is here: <a href="https://github.com/artyompal/google-landmarks">https://github.com/artyompal/google-landmarks</a></p> <p>Ok, my solution wasn't working, so I tried Dawnbreaker's. It was already Wednesday of the additional week (check the forum often)! I was totally new to PyTorch; I did <code>pip install pytorch</code> and it just worked. After re-scoring, it showed 0.109. </p> <p>So I started thinking: if everyone could do the same, what could I do? I started trying different pretrained models. I tried DenseNet121, DenseNet169 and SE-ResNext. ResNet50 was the worst and SE-ResNext was the best. I wish I had more time to try various models. So eventually I ended up with an ensemble of all four of those networks. 
The same architecture was used everywhere (no additional FC layers, no dropout, no regularization), with the same augmentations (random crop and horizontal flip). </p> <p>Also, I made a simple classifier to detect non-landmarks. I took an example from Keras, put in 4 CNN layers and no FC layers to avoid overfitting. I marked up 300 positive samples and 300 negative ones. Then I replaced it with ResNet50, which worked better. Eventually, on the very last evening, I marked up 3000 more samples.</p> <p>An ensemble of 3 networks showed 0.139 on the Public leaderboard, and an ensemble of 4 showed 0.159.</p> <p>Over the last weekend, I tried to solve the Retrieval challenge too. I got feature vectors from DenseNet169 and calculated cosine distances using PyTorch; however, this didn't work well. So, on the last day I took Faiss and it produced something reasonable. It turned out I just had to calculate Euclidean distances between points.</p> <p>And literally at 11 PM on the last evening, I added a similar check into the Recognition code to punish classes with greater distances. However, this added just 0.002 to my score.</p> <p>BTW, this was my first challenge on Kaggle!</p> <p>Thanks for reading this! Hope this helps :) Kudos to @Dawnbreaker, all of this wouldn't be possible without his code.</p>
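<p>To make the Faiss step from the write-up above concrete, here is a minimal Python sketch of brute-force Euclidean nearest-neighbour search over global descriptors. The array names, sizes and the random data are illustrative placeholders, not taken from the author's repository.</p> <p><code>
# Brute-force L2 (Euclidean) search of query descriptors against index descriptors.
# index_vecs / query_vecs stand in for CNN global features (e.g. from DenseNet169).
import numpy as np
import faiss

dim = 1664                                   # illustrative descriptor dimensionality
index_vecs = np.random.rand(1000, dim).astype('float32')
query_vecs = np.random.rand(10, dim).astype('float32')

index = faiss.IndexFlatL2(dim)               # exact Euclidean index, no training required
index.add(index_vecs)                        # register the database descriptors
distances, neighbors = index.search(query_vecs, 100)   # top-100 neighbours per query
print(neighbors.shape)                       # (10, 100): ranked index ids for each query
</code></p>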
Google Landmark Recognition Challenge
Visual Recognition Group (VRG) Prague - 3rd place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our approach was ranked 3rd and was presented at the <a href="https://landmarkscvprw18.github.io/">CVPR 2018 landmark recognition workshop</a>. The presentation can be found <a href="https://drive.google.com/open?id=1NFhfkqKjo_bXM-yuI3KbZt_iHRmiUyTG">here</a>.</p> <h2>Brief summary</h2> <ul> <li>We handle the task as instance search due to the low intra-class variability and the large number of classes.</li> <li>A simple kNN classifier is used which employs (1) global CNN-based descriptors and (2) geometric matching based on local features</li> <li><a href="http://cmp.felk.cvut.cz/~radenfil/publications/Radenovic-TPAMI18.pdf">GeM</a> descriptor is learned using the training set of the challenge</li> <li><a href="http://www.robots.ox.ac.uk:5000/~vgg/publications/2007/Philbin07/philbin07.pdf">Geometric verification</a> is performed using <a href="https://arxiv.org/abs/1612.06321">DELF</a> features and <a href="http://hal.inria.fr/docs/00/86/46/84/PDF/iccv13_tolias.pdf">ASMK</a> for indexing. </li> </ul> <h2>The team</h2> <p><a href="http://cmp.felk.cvut.cz/~radenfil/">Filip Radenovic</a>, <a href="http://cmp.felk.cvut.cz/~iscenahm/">Ahmet Iscen</a>, <a href="http://cmp.felk.cvut.cz/~toliageo/">Giorgos Tolias</a>, <a href="http://cmp.felk.cvut.cz/~chum/">Ondrej Chum</a></p>
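<p>As a rough illustration of the "simple kNN classifier" over global descriptors mentioned above, the sketch below scores each landmark by accumulating the cosine similarity of a query's descriptor to its top-k train neighbours. All variable names are hypothetical, and the real VRG pipeline additionally re-ranks with DELF/ASMK geometric verification.</p> <p><code>
# kNN classification over L2-normalised global descriptors (e.g. GeM vectors).
# train_vecs: (N, D) array, train_labels: length-N list of landmark ids.
import numpy as np

def knn_classify(query_vec, train_vecs, train_labels, k=5):
    sims = train_vecs @ query_vec            # cosine similarity, since vectors are L2-normalised
    topk = np.argsort(-sims)[:k]             # indices of the k most similar train images
    scores = {}
    for idx in topk:
        label = train_labels[idx]
        scores[label] = scores.get(label, 0.0) + float(sims[idx])
    best = max(scores, key=scores.get)       # landmark with the highest accumulated similarity
    return best, scores[best]
</code></p>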
Google Landmark Retrieval 2019
15th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our scores are as follows:</p> <table> <thead> <tr> <th>Pipeline</th> <th>Private</th> <th>Public</th> </tr> </thead> <tbody> <tr> <td>PCA&amp;Whitening + AQE + NN search</td> <td>0.20294</td> <td>0.17844</td> </tr> <tr> <td>PCA&amp;Whitening + DB-augment + AQE + NN search</td> <td>0.21213</td> <td>0.19039</td> </tr> <tr> <td>PCA&amp;Whitening + DB-augment + AQE + NN search + Diffusion</td> <td>0.21809</td> <td>0.19687</td> </tr> </tbody> </table> <h1>15th Place Solution</h1> <p>Congratulations to the winners! And I would like to express my gratitude to all the people involved in this competition.</p> <p>This is our 15th place solution. I am busy working on the iMaterialist Competition, so I'll explain our solution only with a brief diagram. "Reference" is the collection of links that we referred to when implementing.</p> <h1>Training</h1> <p>We trained models using the 1st stage train dataset.</p> <p>Reference: - <a href="https://github.com/filipradenovic/cnnimageretrieval-pytorch">https://github.com/filipradenovic/cnnimageretrieval-pytorch</a> - <a href="https://github.com/SeuTao/Humpback-Whale-Identification-Challenge-2019_2nd_palce_solution">https://github.com/SeuTao/Humpback-Whale-Identification-Challenge-2019_2nd_palce_solution</a></p> <p><img src="https://gist.githubusercontent.com/chmod644/20feec5b9cc5bcc66491c698fec5e860/raw/891ad2e9c48daebe61a41d142b71082cf9269ff6/ffecd280-8798-11e9-9520-d94c7c62797d.png" alt="network_and_loss"></p> <h1>Submit</h1> <p>Reference: - <a href="https://www.kaggle.com/c/landmark-retrieval-challenge/discussion/57855">https://www.kaggle.com/c/landmark-retrieval-challenge/discussion/57855</a> - <a href="https://pdfs.semanticscholar.org/79bf/2aedcd89c116225f8e4b72fa66f7df1f7d97.pdf">https://pdfs.semanticscholar.org/79bf/2aedcd89c116225f8e4b72fa66f7df1f7d97.pdf</a> - <a href="https://github.com/facebookresearch/faiss">https://github.com/facebookresearch/faiss</a> - <a href="https://github.com/ducha-aiki/manifold-diffusion">https://github.com/ducha-aiki/manifold-diffusion</a></p> <p><img src="https://gist.githubusercontent.com/chmod644/20feec5b9cc5bcc66491c698fec5e860/raw/e05e65c53f6506f236253d59c1bba7d97c743cac/1f2f3400-8788-11e9-96d7-337836617bd9.png" alt="inference"></p>
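<p>For reference, the AQE (alpha query expansion) step in the pipeline above can be sketched as follows: each query descriptor is replaced by a similarity-weighted average of itself and its top-k database neighbours. Parameter values and names are illustrative, not the team's exact settings.</p> <p><code>
# Alpha query expansion on L2-normalised descriptors.
import numpy as np

def alpha_query_expansion(query_vecs, db_vecs, k=10, alpha=3.0):
    sims = query_vecs @ db_vecs.T                     # cosine similarities, queries x database
    topk = np.argsort(-sims, axis=1)[:, :k]           # top-k database neighbours per query
    expanded = np.copy(query_vecs)
    for i in range(query_vecs.shape[0]):
        neigh = db_vecs[topk[i]]                      # (k, dim) neighbour descriptors
        w = np.clip(sims[i, topk[i]], 0, None) ** alpha   # similarity-based weights
        expanded[i] += (w[:, None] * neigh).sum(axis=0)
    return expanded / np.linalg.norm(expanded, axis=1, keepdims=True)
</code></p>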
Google Landmark Retrieval 2019
1st/3rd place solution by Team smlyaka
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you and congratulations to you too!</p> <p>(Updated on June 10: Priv LB was updated. Finally, our team won 1st on the retrieval challenge!) (Updated on June 12: Our solution code and paper are available. We plan to refactor them after the CVPR conference. If you are interested in the code, I strongly recommend reading it a few weeks later. Also, our model parameters and cleaned dataset will be available. Reproducing our work is very welcome! (code) <a href="https://github.com/lyakaap/Landmark2019-1st-and-3rd-Place-Solution">https://github.com/lyakaap/Landmark2019-1st-and-3rd-Place-Solution</a>, (paper) <a href="https://arxiv.org/abs/1906.04087">https://arxiv.org/abs/1906.04087</a>, (poster) <a href="https://www.dropbox.com/s/y3c3ovdiizz59j4/cvpr19_smlyaka_slides.pdf?dl=0">https://www.dropbox.com/s/y3c3ovdiizz59j4/cvpr19_smlyaka_slides.pdf?dl=0</a>)</p> <hr> <p>Congratulations to all participants and winners! And special thanks to Filip Radenovic for his great paper and open-source code. We learned a lot from them. We both worked very hard and enjoyed this competition very much.</p> <p>Here is a brief overview of our solution. (We are now working hard on preparing the detailed description for the paper submission deadline.)</p> <ol> <li>Automated data cleaning using train labels, global descriptors, and spatial verification (SV). To prepare a cleaned train set, we selected only the images whose top-k neighbors in descriptor space share the same landmark_id and are verified by DELF+SV. This dataset cleaning is essential since the original train set is too noisy to train networks properly. Training with the cleaned dataset significantly boosted our score compared to training only with the train data from the previous year's competition.</li> <li>We use FishNet-150, ResNet-101, and SEResNeXt-101 as backbones trained with cosine-based softmax losses, ArcFace and CosFace. In addition, various tricks are used, such as preserving the aspect ratio of the input image, cosine annealing, GeM pooling, and finetuning at full resolution on the last epoch with frozen Batch Normalization.</li> <li>For the recognition task, accumulating top-k (k=3) similarity in descriptor space and the inlier count from spatial verification helps a lot.</li> <li>For the retrieval task, reranking using (3) significantly improved our score in stage2.</li> </ol> <p>DBA, alpha-QE, Iterative DELF-QE (proposed by the Layer6 AI team last year), Diffusion [Yang+'19] or Graph traversal (EGT, rEGT) [Chang+'19] work in stage1, but they don't work well in stage2.</p> <p>Our experimental results for retrieval/recognition are attached. In Table 1, all results are based on Euclidean search only. 
No use of PCA/whitening, DBA, QE, Diffusion or Graph Traversal.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/546392/13423/table_1.png" alt="Table1"> <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/546392/13424/table_2.png" alt="Table2"> <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/546392/13425/table_3.png" alt="Table3"></p> <h2>Post-processing for the recognition task</h2> <p>Supplement to Table 3. We treated frequently occurring landmarks as distractors and updated their confidence scores in a post-processing step. Specifically, the corresponding landmark's confidence score is calculated by multiplying its frequency by -1. We treated landmarks that appeared more than 30 times as distractors.</p> <h2>Things that didn't work</h2> <ul> <li><a href="https://arxiv.org/abs/1905.00292">AdaCos: Adaptively Scaling Cosine Logits for Effectively Learning Deep Face Representations</a></li> <li><a href="https://arxiv.org/abs/1903.10663">Combination of Multiple Global Descriptors for Image Retrieval</a></li> <li>Label smoothing</li> <li>SPoC, MAC, R-MAC, Compact bilinear pooling</li> <li>Data Augmentation from the AutoAugment paper (ImageNet setting)</li> <li>Warmup learning rate scheduling</li> <li>Under Sampling</li> <li>Removing distractors by Places365 indoor/outdoor classification</li> <li>Meta classifier</li> <li>Gaussian SARE</li> <li>... and many other methods!</li> </ul> <p>Best,</p> <p>Team smlyaka.</p> <p>.... And I want to thank my teammate, lyakaap! Many congrats, lyakaap, on becoming Grandmaster ;-)</p> <p>Thanks!</p> <p>Thanks!!!</p> <p>Thanks!! I do not know what happened, but I am very happy to get 1st!</p> <p>Yes! It will be open soon.</p> <p>Thanks! And congrats on your 3rd place!</p> <p>Big thanks and respect to the research community, including you!</p>
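<p>The recognition post-processing described above (treating landmarks predicted more than 30 times as distractors and setting their confidence to -1 times the frequency) can be sketched as below. The <code>preds</code> structure is an assumed test_id-to-prediction mapping, not the team's actual data format.</p> <p><code>
# Down-weight frequently predicted landmarks, treating them as distractors.
from collections import Counter

def suppress_frequent_landmarks(preds, max_count=30):
    """preds: dict mapping test_id to (landmark_id, confidence)."""
    freq = Counter(landmark for landmark, _ in preds.values())
    out = {}
    for test_id, (landmark, conf) in preds.items():
        if freq[landmark] &gt; max_count:       # appears too often: likely a distractor
            conf = -1.0 * freq[landmark]      # push it below all normal predictions
        out[test_id] = (landmark, conf)
    return out
</code></p>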
Google Landmark Retrieval 2019
26th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Everyone,</p> <p>Now that the final validation has ended, I am sharing my solution.<br> I trained two models. One uses raw images as input. The other uses DELF as input. I concatenated the two vectors from the two models for indexing. I also used query expansion and geometric verification.</p> <p>Code: <a href="https://github.com/toshi-k/kaggle-google-landmark-retrieval-2019">https://github.com/toshi-k/kaggle-google-landmark-retrieval-2019</a></p> <p><img src="https://raw.githubusercontent.com/toshi-k/kaggle-google-landmark-retrieval-2019/master/img/solution.png" alt="conceptual diagram"></p>
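<p>The geometric verification mentioned above is typically done by matching local descriptors (e.g. DELF) between a query and a candidate and counting RANSAC inliers. The OpenCV-based sketch below is only an illustration of that idea; thresholds and variable names are assumptions, not the author's implementation.</p> <p><code>
# Count RANSAC inliers between two sets of local features for re-ranking.
import numpy as np
import cv2

def inlier_count(kp_q, desc_q, kp_c, desc_c, ransac_thresh=3.0):
    """kp_*: (N, 2) keypoint locations, desc_*: (N, D) float descriptors."""
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(desc_q.astype(np.float32), desc_c.astype(np.float32))
    if len(matches) &lt; 4:                     # homography estimation needs 4+ correspondences
        return 0
    src = np.float32([kp_q[m.queryIdx] for m in matches])
    dst = np.float32([kp_c[m.trainIdx] for m in matches])
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    return 0 if mask is None else int(mask.sum())

# Candidates from the global-descriptor search can then be re-ranked by inlier count.
</code></p>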
Google Landmark Retrieval 2019
2nd Place and 2nd Place Solution to Kaggle Landmark Recognition and Retrieval Competition 2019
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for the great challenge, our solution has been released to arxiv. Further, we will open our source code. Thanks.</p> <p>arxiv link: <a href="https://arxiv.org/pdf/1906.03990.pdf">https://arxiv.org/pdf/1906.03990.pdf</a> source code: <a href="https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/Research/landmark">https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/Research/landmark</a></p> <p>Thanks. I also learn a lot from this competition. </p>
Google Landmark Retrieval 2019
8th Place Solution: Clova Vision, NAVER/LINE Corp.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Approach by Clova Vision, NAVER/LINE Corp.</h1> <p>Thanks to the organizers for the awesome challenges (both recognition and retrieval), and congratulations to all participants! <br> We briefly explain our approach here, and hope future participants can benefit from this :)</p> <hr> <h2>Quick summary of pipeline</h2> <ul> <li>[step 0.] Dataset Clustering </li> <li>[step 1.] Learning Representation</li> <li>[step 2.] Feature Ensemble</li> <li>[step 3.] DBA/QE + PCA/Whitening</li> <li>[step 4.] DELF / Diffusion </li> </ul> <hr> <h3>(1) Dataset Clustering</h3> <p>At the very beginning of the challenge, the stage1 train/test/index datasets were all released, and labels were provided for the train set only. We wanted to bias the model toward the test/index data distribution since it might help to achieve a high score in the challenge. To do so, we did clustering on the stage1 test/index datasets and filtered out noisy clusters using several rules. Virtual class ids were given to each cluster and added to the train v1 dataset to use it for training. In this way, we got approximately a 0.01 mAP increase on stage1.</p> <p>At stage2, we found the newly released train/test/index dataset is very noisy compared to stage1. We also made clusters on the train/test/index datasets to remove "garbage images" based on the constructed clusters. </p> <p>Finally, we trained our models using the datasets below: </p> <ul> <li>TR1: train v1</li> <li>TR2: train v1 + test/index cluster v1</li> <li>TR3: train v1 + test/index cluster v2 (noise cluster)</li> <li>TR4: train v1 + test/index cluster v1 + train v2 (garbage removed) + test/index cluster v2 (garbage removed), </li> </ul> <p>where v1 is from stage1 and v2 is from stage2. </p> <h3>(2) Learning Representation</h3> <p>We tried to train various models by combining different backbones, losses, aggregation methods and train sets:</p> <ul> <li>training data: TR1, TR2, TR3, TR4</li> <li>backbones: resnet50, resnet101, seresnet50, seresnet101, seresnext50, seresnext101, efficientnet</li> <li>losses: xent+triplet, npair+angular</li> <li>aggregation methods: GeM, SPoC, MAC, </li> </ul> <p>which results in 4 x 7 x 2 x 3 = 168 combinations (or more…) <br> In general, GeM worked well. We didn't have much time to fully fine-tune the efficientnet (tensorflow released version) on the google-landmark-challenge 2019 datasets, so we stopped its training early. </p> <h3>(3) Feature Ensemble</h3> <p>Based on the trained models, we extracted the features and found the best combinations for the ensemble. To test results on a small dataset, we sampled a validation set (VAL_V2), which is a subset of the stage2 datasets. We ran an auto-search process that automatically concatenates 4 different off-the-shelf features chosen randomly every time and evaluates the performance of the concatenated feature on VAL_V2. We found the mAP result on VAL_V2 is aligned with the result on real submissions, and this trick saved us submission trials and time. 
</p> <p>Finally, we found the best combinations: </p> <ul> <li>seresnet50 / SPoC / TR4 / npair+angular</li> <li>seresnext50 / SPoC / TR4 / npair+angular</li> <li>resnet101 / GeM / TR1 / xent+triplet</li> <li>seresnext50 / GeM / TR3 / xent+triplet</li> </ul> <p>The features were L2-normalized before the concatenation.</p> <h3>(4) DBA/QE + PCA/Whitening</h3> <p>DBA/QE is a well-known technique for boosting performance. We found the 10 nearest neighbors for each data point for this. Based on our model, we got a 9~10% mAP increase at stage1. PCA/Whitening improved mAP a bit more when the output dimension was set to 1024. </p> <h3>(5) DELF / Diffusion</h3> <p>We applied DELF in two ways: </p> <ul> <li>reranking top-100</li> <li>Diffusion with SV</li> </ul> <p>We found that simply reranking the top-100 retrieved results was not really helpful. We tested only one case, but it showed a +0.00018 performance increase after DELF reranking (it might depend on models though). <br> Instead, we used the DELF matching score for graph construction for diffusion, and we got a better performance increase in this case. The gain was about 0.002 in mAP. </p> <p>For the last step, we applied diffusion on the features after DBA/QE + PCA/Whitening. It always improved the performance, but we performed a limited number of experiments because the diffusion process was very slow.</p> <hr> <h2>Our Thoughts on Frequently Asked Questions</h2> <p><strong>Q1. Our Best Single Model?</strong> <br> The stage2 mAP of our best single model is approximately in the range [0.1718 ~ 0.1995] in public and [0.1878 ~ 0.2211] in private with a 1024-dimensional feature. The highest-scoring model in public did not necessarily achieve a high score in private. </p> <p><strong>Q2. Local Descriptor vs Global Descriptor?</strong> <br> It's not a good idea to use DELF directly to retrieve images from 1M index images due to excessive computation. Given the top-K ranked candidates retrieved by global descriptors, we can rerank them using local descriptors with geometric verification.<br> In our case, the gain was very slight though. </p> <p><strong>Q3. Things that didn't work?</strong> </p> <ul> <li>Ensemble with features extracted from multi-resolution input images. (didn't work)</li> <li>Explore-Exploit Graph Traversal (EGT) (slow on our implementation) </li> <li>Detect-to-Retrieve (D2R) (cropped regions from the D2R landmark detector were used for feature extraction, and the result was worse.)</li> </ul> <p>** For more detailed experimental results, please refer to the paper we uploaded to arXiv. (<a href="https://arxiv.org/abs/1907.11854" target="_blank">https://arxiv.org/abs/1907.11854</a>)</p> <p>Sure 😄 </p> <p>We found that sometimes totally unrelated images obtain more than 10 match scores. To suppress this, we rerank a candidate only if its match score is larger than a certain threshold (we used threshold=50). The gain was slight, but it worked. Otherwise, DELF reranking rather worsened the performance.</p>
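<p>As a side note, the DBA step described above (averaging each database descriptor with its 10 nearest neighbours) can be sketched in a few lines; this brute-force version is illustrative only and would need an ANN library such as Faiss at the 1M-image scale.</p> <p><code>
# Database-side augmentation (DBA) on L2-normalised descriptors.
import numpy as np

def database_augmentation(db_vecs, k=10):
    sims = db_vecs @ db_vecs.T                       # cosine similarity between database items
    nn = np.argsort(-sims, axis=1)[:, :k + 1]        # top-(k+1): each vector plus k neighbours
    augmented = db_vecs[nn].mean(axis=1)             # average each vector with its neighbours
    return augmented / np.linalg.norm(augmented, axis=1, keepdims=True)
</code></p>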
Google Landmark Retrieval 2019
9th Place Solution: VRG Prague
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>If there will be interest, I can release our models as a part of our <a href="https://github.com/filipradenovic/cnnimageretrieval-pytorch">CNN Image Retrieval in PyTorch</a> toolbox, with the script to evaluate them on R-Oxford and R-Paris.</p> <h1>Approach by Visual Recognition Group (VRG) Prague</h1> <p>Thanks to the organizers for the great challenge once again and congratulations for the winning teams!</p> <p>Similarly to last year, our method is based on a combination of CNN-based global descriptors trained on the training set of the first version <a href="https://www.kaggle.com/google/google-landmarks-dataset">Google Landmarks Dataset</a> used in the challenges last year. Unfortunately, I did not have enough time to dedicate to the challenge, so I did not manage to set up a successful training on the training set of this years <a href="https://github.com/cvdfoundation/google-landmark">Google Landmarks Dataset v2</a>.</p> <h3>Note</h3> <p>Performance is presented in the format private / public score.</p> <h3>Representation learning</h3> <p>Our network consists of the convolutional layers of ResNet50, ResNet101, and ResNet152 pre-trained on ImageNet, followed by generalized-mean pooling (<a href="https://arxiv.org/abs/1711.02512">GeM</a>), l2 normalization, a fully-connected (FC) layer, and a final l2 normalization. We train a network with contrastive or triplet loss (when training for a longer time, triplet loss resulted in a superior performance) on pairs selected from the <a href="https://www.kaggle.com/google/google-landmarks-dataset">Google Landmarks Dataset</a> training set with images resized to have the largest dimensions at most equal to 1024 (900 for ResNet152 due to memory restrictions). We randomly sample at most 50 positive pairs per landmark, while the hard negatives are re-mined for each epoch of the training. The FC layer is initialized by the parameters of <a href="https://arxiv.org/abs/1604.02426">supervised whitening</a> learned on such pairs. </p> <p>Training was done using our publicly available <a href="https://github.com/filipradenovic/cnnimageretrieval-pytorch">CNN Image Retrieval in PyTorch</a> toolbox. </p> <p>Validation was done on <a href="https://github.com/filipradenovic/revisitop">Revisited Oxford and Paris datasets</a>.</p> <p>During testing, we perform descriptor extraction at 3 scales (scaling factors of <code>1, 1/sqrt(2), sqrt(2)</code>), where the original scale corresponds to largest image size equal to 800 (the maximum image size of v2 dataset), sum-aggregate the descriptors, and l2 normalize. The <a href="https://arxiv.org/pdf/1811.11147.pdf">PCA whitening with shrinkage</a> is learned/applied on the multi-scale descriptors. 
The dimensionality of the final descriptor is 2048.</p> <h3>Simple NN-search</h3> <p>When using our trained networks with cosine similarity and nearest neighbor search, the performance is: <code>
ResNet101-GeM: 0.200 / 0.178
ResNet152-GeM: 0.196 / 0.167
ResNet50-GeM: 0.184 / 0.163
</code></p> <p>Additionally, we evaluated <a href="https://arxiv.org/pdf/1610.07940.pdf">ResNet101 with RMAC pre-trained</a>: <code>
ResNet101-RMAC: 0.181 / 0.150
</code></p> <p>Concatenating these four representations and jointly PCA-reducing (with shrinkage) to 4096 dimensions results in: <code>
concat[4096]: 0.213 / 0.186
</code></p> <h3>Iterative DBA and QE</h3> <p>We perform iterative DBA with an increasing number of images used in each iteration. More precisely, we use three iterations, starting with one nearest-neighbor image being used to average with the respective DB image, then repeating the nearest-neighbor search and adding one additional nearest neighbor in each step. Instead of a plain average, we use a weighted average, where the weight is given as the squared cosine similarity of the respective DB image and its neighbor. We denote this method as <code>2dba1:1:3</code>.</p> <p>We perform iterative QE in a similar way as DBA, but now in 10 iterations, denoting it as <code>2qe1:1:10</code>.</p> <p>Performance of our concatenated 4096-dimensional representation is now: <code>
concat[4096] -&gt; 2dba1:1:3 -&gt; 2qe1:1:10: 0.254 / 0.229
</code></p> <h3>Graph-based QE and Diffusion</h3> <p>We tested two approaches for the final rank computation, graph traversal and diffusion.</p> <p>Our graph traversal algorithm explores the knn graph with a greedy algorithm where the next image to be retrieved is the one with the highest similarity to any already retrieved image or to the query. This approach is a special case of <a href="http://www.cs.toronto.edu/~mvolkovs/cvpr2019EGT.pdf">Explore-Exploit Graph Traversal for Image Retrieval</a> where only one image is always exploited. This yielded a score of <code>0.257 / 0.235</code>.</p> <p>For diffusion, we used the conjugate gradient method from the <a href="https://github.com/ducha-aiki/manifold-diffusion">Python implementation</a> of <a href="https://arxiv.org/pdf/1611.05113.pdf">Efficient Diffusion on Region Manifolds</a>, which was patched to support sparse matrices on the input. This constitutes our final submission of <code>0.263 / 0.243</code> on the private / public score.</p> <h3>[14. 06. 2019] UPDATE</h3> <p>I released our models as a part of our <a href="https://github.com/filipradenovic/cnnimageretrieval-pytorch#networks-with-whitening-learned-end-to-end">CNN Image Retrieval in PyTorch</a> toolbox, with the script to evaluate them on R-Oxford and R-Paris.</p> <p>Multi-scale performance in the following table:</p> <p>| Model | ROxf (M) | RPar (M) | ROxf (H) | RPar (H) | |:------|:------:|:------:|:------:|:------:| | <a href="http://cmp.felk.cvut.cz/cnnimageretrieval/data/networks/gl18/gl18-tl-resnet50-gem-w-83fdc30.pth">gl18-tl-resnet50-gem-w</a> | 63.6 | 78.0 | 40.9 | 57.5 | | <a href="http://cmp.felk.cvut.cz/cnnimageretrieval/data/networks/gl18/gl18-tl-resnet101-gem-w-a4d43db.pth">gl18-tl-resnet101-gem-w</a> | 67.3 | 80.6 | 44.3 | 61.5 | | <a href="http://cmp.felk.cvut.cz/cnnimageretrieval/data/networks/gl18/gl18-tl-resnet152-gem-w-21278d5.pth">gl18-tl-resnet152-gem-w</a> | 68.7 | 79.7 | 44.2 | 60.3 |</p> <p>Thank you! 
</p> <p>I did not try mini-batch mining, in fact, I did not try much more than what there already is in our code online, because I was heavily time-restricted in the past few months. That being said, mini-batch would hardly work, as we use high-resolution images for training and we accumulate gradients per single image per tuple, compute backward, repeat that multiple times (number of tuples in batch) and do one update of weights. So, mini-batch mining would have forward computations that are wasted, and then I am not sure where is the benefit compared to epoch mining (maybe "fresher" descriptors?). One more thing is the small batch size that doesn't promote the mini-batch mining strategy, ie, we use 5 tuples in a batch, tuple being a query, positive and multiple negatives (depends on memory and network size). To be very precise, we use the following parameters: <code> ResNet-50: image size 1024, number of negatives 5, number of tuples in batch 5 ResNet-101: image size 1024, number of negatives 4, number of tuples in batch 5 ResNet-152: image size 900, number of negatives 3, number of tuples in batch 5 </code> Going bellow 3 negatives had bad effects, using image size down to 800 should be fine, actually even preferable now that I see that new <code>test</code> and <code>index</code> have a maximum image size of 800.</p> <p>I released our models as a part of our <a href="https://github.com/filipradenovic/cnnimageretrieval-pytorch#networks-with-whitening-learned-end-to-end">CNN Image Retrieval in PyTorch</a> toolbox, with the script to evaluate them on R-Oxford and R-Paris.</p> <p>Multi-scale performance in the following table:</p> <p>| Model | ROxf (M) | RPar (M) | ROxf (H) | RPar (H) | |:------|:------:|:------:|:------:|:------:| | <a href="http://cmp.felk.cvut.cz/cnnimageretrieval/data/networks/gl18/gl18-tl-resnet50-gem-w-83fdc30.pth">gl18-tl-resnet50-gem-w</a> | 63.6 | 78.0 | 40.9 | 57.5 | | <a href="http://cmp.felk.cvut.cz/cnnimageretrieval/data/networks/gl18/gl18-tl-resnet101-gem-w-a4d43db.pth">gl18-tl-resnet101-gem-w</a> | 67.3 | 80.6 | 44.3 | 61.5 | | <a href="http://cmp.felk.cvut.cz/cnnimageretrieval/data/networks/gl18/gl18-tl-resnet152-gem-w-21278d5.pth">gl18-tl-resnet152-gem-w</a> | 68.7 | 79.7 | 44.2 | 60.3 |</p>
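<p>The multi-scale extraction described above (scales 1, 1/sqrt(2) and sqrt(2), sum-aggregation, then L2 normalisation) can be sketched in PyTorch as follows. <code>model</code> is assumed to map an image batch to global descriptors; this is an illustration, not the toolbox code.</p> <p><code>
# Multi-scale descriptor extraction with sum-aggregation and L2 normalisation.
import torch
import torch.nn.functional as F

def multiscale_descriptor(model, image, scales=(1.0, 2 ** -0.5, 2 ** 0.5)):
    """image: (1, 3, H, W) tensor; returns a single L2-normalised descriptor."""
    descs = []
    with torch.no_grad():
        for s in scales:
            scaled = image if s == 1.0 else F.interpolate(
                image, scale_factor=s, mode='bilinear', align_corners=False)
            descs.append(model(scaled).squeeze(0))   # (D,) descriptor at this scale
    desc = torch.stack(descs).sum(dim=0)             # sum-aggregate over scales
    return F.normalize(desc, dim=0)                  # final L2 normalisation
</code></p>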
Google Landmark Retrieval 2019
Our solution: 0.10439 private 0.08832 public (code available)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our solution consists of two steps: a single model (our best single model achieves 0.08813 private / 0.06335 public), and an ensemble of multiple models, which achieves 0.10439 private / 0.08832 public.</p> <h2><a href="https://github.com/antorsae/landmark-retrieval-2019/blob/master/submission-trained.ipynb">Single model CNN extraction + NN search</a></h2> <p>This notebook extracts the last convolutional layer of a given architecture, applies GeM pooling, and performs L2 normalization.</p> <p>It uses augmentation (images are LR flipped), so both the index and query features are duplicated.</p> <p>Once features are extracted, it performs regular nearest neighbour search using PCA, whitening and L2 normalization, and the results are ready for a submission.</p> <p>It also saves the results of the nearest neighbour search for ensembling.</p> <h2><a href="https://github.com/antorsae/landmark-retrieval-2019/blob/master/ensemble.ipynb">Ensembling</a></h2> <p>This notebook takes the results of the nearest neighbour search for each query (and the LR-flipped queries), aggregates distances from different runs (different architectures), and builds a submission accordingly.</p> <p>For flipped LR images, we pick the minimum distance.</p> <h2>Some results</h2> <p><img src="https://user-images.githubusercontent.com/1516746/58861894-bc6a6980-86af-11e9-8261-21b4ebc26bb3.png" alt="image"> <img src="https://user-images.githubusercontent.com/1516746/58861581-f7b86880-86ae-11e9-8dd8-de50cf35af97.png" alt="image"> <img src="https://user-images.githubusercontent.com/1516746/58861667-34845f80-86af-11e9-9493-c012847fe0d0.png" alt="image"></p> <p>All code here: <a href="https://github.com/antorsae/landmark-retrieval-2019">https://github.com/antorsae/landmark-retrieval-2019</a></p>
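<p>For readers unfamiliar with GeM, the pooling step used above can be sketched as a small PyTorch module: a generalized mean over the spatial dimensions of the last convolutional feature map, followed by L2 normalisation. The exponent p=3 is a common default, not necessarily the value used in the notebook.</p> <p><code>
# Generalized-mean (GeM) pooling over a convolutional feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.ones(1) * p)   # p can also be frozen
        self.eps = eps

    def forward(self, x):                          # x: (B, C, H, W) feature map
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.adaptive_avg_pool2d(x, 1).pow(1.0 / self.p)
        return F.normalize(x.flatten(1), dim=1)    # (B, C) L2-normalised descriptors
</code></p>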
Google Landmark Retrieval 2019
Share your approaches?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to break the silence here and start to share what we have done; hopefully others will share as well, especially the top teams we can learn from :)</p> <p>In fact, we had nothing fancy in the end (all the fancy things didn't work for us this time, unfortunately :( We followed a path similar to last year's winner's approach, and the final model is an ensemble of three (Inception v3 + Atten&amp;GeM pooling, as well as David's Xception and SEResNet101 with VLAD pooling, each 512-dimensional). </p> <p>We used DBA for each model to enhance the quality of the image vectors, and finally went through QE (1 round) using the concatenated vectors from the three models, which boosted performance quite a bit. </p> <p>We tried different poolings and different ensemble approaches, but they didn't give much difference or improvement; the best ones seemed to be GeM (with some CNN attention) and VLAD pooling, with one round of DBA and one round of QE, that's it. When we trained models, we started with classification training, followed by fine-tuning in a Siamese fashion, which improved things a bit more. </p> <p>However, I guess the biggest (possible) mistake was that we trained using the old train set only and ignored the new train set, mainly because the stage 1 LB showed so much correlation with the old train set. We wrongly assumed that stage 2 would be similar, and the final stage 2 turned out to come from an image set whose distribution was very different from the old train set (much noisier) and much closer to the new one. </p> <p>In the end, we found a very good approach to remove most of the distractor images from test + index, but surprisingly for us, simply removing those distractors didn't help at all for our current pipeline (DBA + QE). It could be because non-distractors match well with non-distractors and aren't affected much by the distractors we are trying to remove. </p> <p>I found this excellent QE code very useful - <a href="https://github.com/fyang93/diffusion">https://github.com/fyang93/diffusion</a> - except that they shouldn't use ANN at all, even for a large dataset, as it unnecessarily increases computation. </p>
Google Landmark Retrieval 2020
1st place solution summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! I'm new to this field so I was confused about those concepts as well. You can think test images as query, and index images as database. So, for this competition, if we submit our model(trained using train sets) that extract embeddings from images, scoring system uses it to 1) Extract embeddings for the private test and index sets 2) Create a kNN(k=100) lookup for each test sample, using the Euclidean distance between test and index embeddings 3) Score the quality of the lookups using the competition metric.<br> In summary, this retrieval competition evaluates our model's performance by measuring 'quality of the lookups(from index sets) for each test image'.<br> You can refer this paper for more details. <a href="url" target="_blank">https://arxiv.org/pdf/2004.01804.pdf</a></p> <p>Thank you!</p> <p>Thank you! I used private GCP bucket to store data</p> <p>Thank you! Congratulations to you!<br> Yes I trained using both GLD-v2 clean and GLD-v2 total dataset. <br> GLD-v2 total dataset has 203094 classes. You can find link in the external data thread.</p> <p>Thank you!</p> <p>Thank you! I was suprised during the process, too!</p> <p>Thank you! No, I only relied on 512 global features for each model.</p> <p>Thank you! It's hard to say strictly because of too many expriment failures and so on, but if things were going smoothly It will take about 3 weeks to get my score.</p> <p>Thank you! It's too hard to write a paper..(my first time) yes, colab tpus are quite fast!</p> <p>Thanks! All works are done in tensorflow 2</p> <p>Thank you!</p> <p>Thank you!</p> <p>Thank you! I used colab TPUs for this work.</p> <p>[Updated]<br> It took 149hours for 1st step, 150hours for 2nd step, 80 hours for 3rd, 64 hours for 4th. </p> <p>[Update] solution arxiv link : <a href="https://arxiv.org/abs/2009.05132" target="_blank">https://arxiv.org/abs/2009.05132</a><br> </p> <h1> </h1> <p>Great thanks to google and kaggle team for hosting this competition, and congrats to all participants who finished successfully. 
I really learned a lot during the competition through reading articles, analysing codes, and doing experiments.</p> <p>I'd like to share my solution, and detailed solution will be uploaded to arxiv in a few days.</p> <p>Model structure is as below.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F701191%2Fb1ac1bca6f58600f19ccbb31a2496682%2Fmodels_jpg.jpg?generation=1597910875460797&amp;alt=media" alt=""></p> <h1>Basic Configuration</h1> <p>Validation set : 1 sample per class which has &gt;=4 samples in GLD v2 clean dataset(72322/81313 classes)<br> Cosine softmax : s=determined by adacos, m=0<br> Weighted cross entropy : proportional to 1/log(class cnt)<br> Augmentation : left-right flip<br> Optimizer : SGD(1e-3, momentum=0.9, decay=1e-5)<br> Embedding Dimension : 512 for every model<br> Hardware : Colab TPUs</p> <h1>Training Strategy</h1> <p>1.Use GLD v2 clean dataset to train model to classify 81313 classes<br> efn7 512x512 priv.LB:0.30264, pub.LB:0.33907<br> 2.Take efficientnet backbone from step 1, use GLD v2 total dataset to train model to classify 203094 classes<br> efn7 512x512 priv.LB:0.33749, pub.LB:0.36576</p> <p>3.Take whole model from step 2, give increasingly bigger images to the model<br> efn7 640x640 priv.LB:0.35389, pub.LB:0.39121<br> efn7 736x736 priv.LB:0.36364, pub.LB:0.40174<br> 4.Take whole model from step 3, set twice loss weight for GLD v2 clean samples<br> efn7 640x640 priv.LB:0.35932, pub.LB:0.39881<br> efn7 736x736 priv.LB:0.36569, pub.LB:0.40215</p> <h1>Ensemble</h1> <p>1.736x736 efn7+efn6+efn5+efn5 weighted concat<br> (all train step3, weight : efn7=1.0, efn6=0.8, efn5=0.5)<br> priv.LB:0.38366, pub.LB: 0.41986<br> 2.Same Config, with train step 4 for efn7<br> priv.LB:0.38677, pub.LB: 0.42328</p> <h1> </h1> <p>If you have any questions, feel free to ask.<br> Thank you. </p> <p>Thank you!!</p> <p>Thank you! Congratulations to you! </p> <p>Thanks! It seems output dimension is wrong. <br> Maybe you shoud use output[0] instead of output to correct it.<br> In my case, I concatenated in this way.<br> outputs = model((image[tf.newaxis], image[tf.newaxis], image[tf.newaxis], image[tf.newaxis]))<br> output1 = tf.math.l2_normalize(outputs[0][0])</p> <p>output2 = 0.8*tf.math.l2_normalize(outputs[1][0])</p> <p>output3 = 0.5*tf.math.l2_normalize(outputs[2][0])</p> <p>output4 = 0.5*tf.math.l2_normalize(outputs[3][0])</p> <p>features = tf.concat([output1,output2,output3,output4],axis=-1)</p> <p>Thank you!<br> [Updated]<br> It took 35epochs for 1st step, 13epochs for 2nd step, 5epochs for 3rd step, and 4 epochs for 4th step.</p> <p>Thanks! I used colab tpus. I forgot to mention optimizer! SGD(lr=1e-3, momentum=0.9, decay=1e-5) and no scheduling was used. updated</p> <p>I experienced the crash, too. After experiments, batchsize 4 or 8 for efn7, and 8 or 16 for efn 6,5 were used depending on image sizes.(per TPU)</p> <p>Thank you!</p> <p>Thank you! :)</p> <p>Thanks to you!</p> <p>Thank you!</p> <p>Thank you!</p> <p>Thank you! I did not use dropout nor batchnormalization. I started with 256x256 at first, and dropped those models after seeing 512x512 scores much better. Similary, smaller EfficientNets had lower performance so dropped them. As you can see in my final ensemble, I used 2 efn5s, 1 efn6, 1 efn7. </p> <p>Thank you!</p> <p>Thank you!</p> <p>I used 128fold for GLDv2 total, and 16fold for GLDv2 clean. I don't know what size is optimal, too :)</p> <p>Thank you! 
About TPU pipelines, you can refer to Chris Deotte's notebooks in the melanoma competition. <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords</a></p> <p>Hi, Dense(512) is what I used.</p> <p>Thank you!</p> <p>Thank you!</p> <p>Thank you!</p> <p>Thank you, congratulations! Yes, only left-right flip was used. I thought the train dataset was so large that it's hard to overfit. Another intention was not to disturb the image distribution, since I don't use TTA in submissions. </p> <p>I set the margin to zero because the train dataset is quite noisy, so I thought trying to cluster same-class samples more tightly could make training harder. It's not experimentally proven.</p> <p>Thank you!<br> 1) I used fixed AdaCos; the formula is right.<br> 2) Because the dataset is noisy, I thought trying to cluster same-class samples more tightly could make training harder.</p> <p>Thank you! :)</p> <p>Yes, I did not preserve any aspect ratio. Also, cropping was not used at all.</p> <p>Thanks for your review! Yes, it's going to be a fierce competition. Maybe we can learn more from that :)<br> Indeed we were colleagues! It's nice to see you 🙂</p> <p>Thank you!</p> <p>Thank you! :)</p> <p>I tried label smoothing but it did not work well.</p>
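<p>The head described above (cosine softmax with the fixed AdaCos scale and margin m=0, plus class weights proportional to 1/log(class count)) might look roughly like the PyTorch sketch below. The author worked in TensorFlow 2, so this is only an illustration of the idea; the +1 inside the log and the random example counts are assumptions to keep the sketch well-defined.</p> <p><code>
# Cosine-softmax head with fixed AdaCos scale s = sqrt(2) * log(C - 1), margin m = 0.
import math
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineSoftmax(nn.Module):
    def __init__(self, embedding_dim=512, num_classes=81313):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embedding_dim))
        nn.init.xavier_uniform_(self.weight)
        self.scale = math.sqrt(2.0) * math.log(num_classes - 1)   # fixed AdaCos scale

    def forward(self, embeddings):
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        return self.scale * cos                    # logits; no margin term since m = 0

# Class weights proportional to 1/log(count); counts here are hypothetical placeholders.
class_counts = np.random.randint(2, 500, size=81313)
weights = 1.0 / np.log(class_counts + 1.0)         # +1 avoids division by zero for tiny classes
criterion = nn.CrossEntropyLoss(weight=torch.tensor(weights, dtype=torch.float32))
</code></p>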
Google Landmark Retrieval 2020
2nd place solution summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>[update]</strong><br> Added Google Drive download link: <br> <a href="https://drive.google.com/file/d/1XnzxMOHhzua9tjrAjo-X55ieKVJzRJw_/view?usp=sharing" target="_blank">https://drive.google.com/file/d/1XnzxMOHhzua9tjrAjo-X55ieKVJzRJw_/view?usp=sharing</a><br> <strong>[update]</strong><br> Submission to arXiv is on hold. We provide detailed information in a PDF file; download link: <a href="https://vis-bj.bj.bcebos.com/landmark/landmark2020_retrieval.pdf" target="_blank">https://vis-bj.bj.bcebos.com/landmark/landmark2020_retrieval.pdf</a></p> <p><strong>Method:</strong><br> Our retrieval method for this competition is depicted in Figure 1. We mainly train two models for the final submission, and each model includes a backbone model for feature extraction and a head model for classification. ResNeSt269 and Res2Net200-vd are selected as the backbone models due to their good performance on ImageNet. The head model includes a pooling layer and two fully connected (fc) layers. The first fc layer is often called the embedding or whitening layer, and its output size is 512, while the output size of the second fc layer corresponds to the number of classes in the training dataset. Instead of using softmax loss for training, we train these models with arcmargin loss. Arcmargin loss was first employed in face recognition; we found it works well in retrieval tasks and produces distinguishing and compact landmark descriptors.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3556209%2F3128c6a35d4df708a26530ff360c3f98%2Ffigure1.png?generation=1598414665139238&amp;alt=media" alt=""></p> <p>The training process mainly consists of three steps. Firstly, we train the two models at resolution 224x224 on the GLDv1 dataset, which has 1215498 images of 14950 classes in total, and the GLDv2-clean dataset, which has 1580470 images of 81313 classes in total. Secondly, these two models are further trained on the GLDv2-clean dataset at resolution 448x448; the parameters of the arcmargin loss may change during this process. We believe that using a large input size is beneficial for extracting features of tiny landmarks. However, we had to adopt the “from small to large” training strategy mainly due to the large cost and lack of GPUs. In the final step, we experimented with some tricks to increase performance. We tried a lot of methods, such as triplet loss finetuning, circle loss finetuning, etc., but only the “GemPool” and “Fix” strategies were helpful.</p> <p><strong>Training and test details</strong><br> At the data level, we first used GLDv1 and GLDv2-clean data at small resolution to train with a large learning rate, and then used GLDv2-clean data at large resolution to train with a small learning rate. The specific details are listed in Table 1. Table 2 shows the results of training with the above strategies. 
Table 3 lists the mAP@100 score of model ensemble.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3556209%2F69543b8fa754361b44694b4375eb8a2b%2Ft1.png?generation=1598414755535679&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3556209%2Fe70fe15075878b440733828569423c57%2Ft2.png?generation=1598414770976775&amp;alt=media" alt=""></p> <p>🙈 </p> <p>thanks 😜</p> <p>thank you! We resize image and center crop it to different size as described in the article。<a href="https://vis-bj.bj.bcebos.com/landmark/landmark2020_retrieval.pdf" target="_blank">https://vis-bj.bj.bcebos.com/landmark/landmark2020_retrieval.pdf</a></p> <p>Thanks ! We are not from the same laboratory; profile pic is our idol, of course, she may also be someone else’s idol 😉</p> <p>thanks~~~~😬</p> <p>Yes, we start from pre-trained models(ImageNet). </p>
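<p>The arcmargin loss referred to above is the ArcFace-style additive angular margin head; a compact PyTorch sketch is shown below. The scale and margin values are illustrative defaults, not the values from the team's PaddlePaddle code.</p> <p><code>
# Arc-margin (ArcFace-style) classification head: cosine logits with an
# additive angular margin on the target class, scaled by s, fed to cross entropy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginHead(nn.Module):
    def __init__(self, embedding_dim=512, num_classes=81313, s=30.0, m=0.30):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embedding_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, embeddings, labels):
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        cos = cos.clamp(-1.0 + 1e-7, 1.0 - 1e-7)
        theta = torch.acos(cos)                                  # angles to every class center
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cos)
        return self.s * logits                                   # pass to nn.CrossEntropyLoss
</code></p>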
Google Landmark Retrieval 2020
4th place solution and experience sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><ol> <li>Random crop with scale (0.75-1.25) and ratio (0.8-1.2)</li> <li>Add augmentation. We choose two transforms from a number of augmentations (such as shear, rotate, posterize, solarize, equalize, color, contrast, invert, sharpness, etc.) and randomly choose the magnitude within the given limit. More details <a href="https://arxiv.org/abs/1805.09501" target="_blank">here</a>.</li> <li>Resize to 448 and normalize.</li> </ol> <p>We tried to remove the tail of the dataset (removing classes with fewer than 4 pics), but it didn't bring any better performance. I think it doesn't matter much even if this problem is ignored (?)</p> <p>The embedding of each model is 512. We concatenated the two models' embeddings as the final embedding with a size of 1024. We didn't do whitening.</p> <p>Thanks. We didn't use GCP.</p> <p>Hi all, here's our brief solution writeup:</p> <h4>What we tried and worked</h4> <ul> <li>Augmentation: such as random crop and rotate. Using AutoAugmentation really helps.</li> <li>Backbones: ResNeSt200 and ResNet152, due to the resource limit.</li> <li>Pretraining: ImageNet and softmax pretraining help convergence.</li> <li>Loss function: angular-based loss functions such as ArcFace.</li> <li>Label smoothing</li> <li>Cosine learning rate with warmup</li> <li>Larger input size. We tried 224, 336, 448 and 560. We chose 448 as the final input size because of its better cost-performance. A smaller input size causes a drop in score.</li> </ul> <h4>What we tried but didn't work</h4> <ul> <li>EfficientNet B7: we trained B7 in PyTorch and transferred it to TF SavedModel format, but it failed with <strong>Notebook Timeout</strong>.</li> <li>Some other large backbones but with lower scores: SEResNeXt, APolyNet, FishNet, HRNet.</li> <li>Some other hyperparameters in the loss function, such as a larger or smaller margin.</li> <li>AdaBN</li> <li>DCN</li> </ul> <h4>What we haven't tried</h4> <ul> <li>Larger backbones, such as ResNeSt269. </li> <li>It seems that B7 works <a href="https://www.kaggle.com/c/landmark-retrieval-2020/discussion/175306" target="_blank">here</a>. Maybe the way we transferred our models from PyTorch to TensorFlow caused the high time cost in the submission.</li> <li>Multi-scale input like the baseline model does.</li> <li>EMA</li> <li>KD</li> </ul> <p>Trained models will be uploaded in a few days. Thanks.</p> <p>We used 1080Ti GPUs with 11GB memory.</p> <p>Thanks, we would like to share whatever we can. If you have any questions, please ask here and we will answer as soon as possible.</p> <p>That's right. So we use syncbn to enlarge our batch size. We train on 32 GPUs and the batch size is 16 for each GPU, so the total batch size is 512.</p> <p>In fact, we found that at the end of training, using a smaller batch size such as 64 or 128 may converge better. This depends on your augmentation policy: hard augmentation causes severe shaking and leads to bad performance. </p> <p>We will share our details with ArcFace soon. By the way, we use the validation dataset from GLRv1 to check our performance. </p>
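<p>The "cosine learning rate with warmup" mentioned in the list above can be sketched as a small schedule function: linear warmup followed by cosine decay. The step counts and learning rates below are illustrative, not the team's exact settings.</p> <p><code>
# Cosine learning-rate schedule with linear warmup.
import math

def cosine_lr_with_warmup(step, total_steps, base_lr=0.1, warmup_steps=500, min_lr=1e-5):
    if step &lt; warmup_steps:                        # linear warmup from 0 to base_lr
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# Usage: before each optimizer step, set every param_group's 'lr' to
# cosine_lr_with_warmup(step, total_steps).
</code></p>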
Google Landmark Retrieval 2020
5th place solution write-up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank my teammate <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> for his work, discussions and ideas throughout this challenge, our first team-up turned out to be quite successful ✊. Congratulations to all the winning teams and solo winners ! Finally, many thanks to the organizers and Kaggle for hosting this interesting competition.<br> Our final submission consisted of 4 models from two different architectures (gempool cnn and delg). </p> <p><strong>1. Data pre-processing</strong></p> <ul> <li>Training only on clean train set (81313 classes), validation by computing global average precision metric on a subset of 120000 images (having overlapping classes with train set) randomly sampled from the index set.</li> <li>Data augmentations: resize image longer side to [544, 672] then random crop 512x512, RandAugment + Cutout.</li> </ul> <p><strong>2. Modeling</strong></p> <ul> <li>Two architectures: CNN (with SEResNeXt50, SEResNeXt101 and ResNeXt101-32x4d as backbones) with CosFace head, and DELG (re-implemented in PyTorch, with SEResNet101 as backbone).</li> <li>Generalized mean pooling with frozen p set to 3 was used; bottleneck structure (GEMPool(2048) -&gt; Linear(512) -&gt; BatchNorm1d -&gt; CosFace(81313)) to reduce computation. </li> <li>Models were trained either in 10 or 20 epochs with AdamW optimizer and warm-up cosine annealing scheduler.</li> <li>Focal loss and label smoothing were better than cross entropy loss</li> </ul> <p><strong>3. Inference</strong></p> <ul> <li>Features (512-dim) extracted at scale 1 for each model.</li> <li>Concatenate 4 models’ features into a 2048-dim vector.</li> <li>Kernel runtime: 8 hour 20 minutes</li> </ul> <p><strong>4. Public/Private performance</strong></p> <table> <thead> <tr> <th>Methods</th> <th>Epochs</th> <th>Public</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>resnext101 gem</td> <td>20</td> <td>0.34596</td> <td>0.31024</td> </tr> <tr> <td>seresnext50 gem</td> <td>20</td> <td>0.3349</td> <td>0.29811</td> </tr> <tr> <td>seresnext101 gem</td> <td>20</td> <td>0.34749</td> <td>0.31282</td> </tr> <tr> <td>seresnet101 delg</td> <td>10</td> <td>0.336</td> <td>0.29882</td> </tr> <tr> <td>ensemble</td> <td></td> <td>0.36644</td> <td>0.32878</td> </tr> </tbody> </table> <p><strong>5. Ablations on DELG vs GEM</strong><br> We used SEResNeXt50 for all experiments in this section. </p> <table> <thead> <tr> <th>Methods</th> <th>Epochs</th> <th>Public</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>gem</td> <td>10</td> <td>0.32163</td> <td>0.28434</td> </tr> <tr> <td>gem + self attention block after res5</td> <td>15</td> <td>0.32006</td> <td>0.28187</td> </tr> <tr> <td>gem + online hard neg mining</td> <td>15</td> <td>0.32357</td> <td>0.28687</td> </tr> <tr> <td>gem + focal loss</td> <td>20</td> <td>0.32634</td> <td>0.29054</td> </tr> <tr> <td>delg + focal loss</td> <td>20</td> <td>0.32928</td> <td>0.29263</td> </tr> <tr> <td>gem + focal loss + cutout</td> <td>20</td> <td>0.3349</td> <td>0.29811</td> </tr> </tbody> </table> <p><strong>6. 
Things that didn't work for us</strong></p> <ul> <li>Pre-training on v1 dataset hurt.</li> <li>Training on concatenated v1 and v2 data.</li> <li>Earlier, I trained an EfficientNet B3 and found out that given the same training configs/ epochs, it performed worse than my baseline ResNet50 (which scored 0.299 on public LB). EfficientNets seem to only work well when you train them long enough with hard augmentations. After reading the 1st place solution, I know which experiment I'm gonna run next for the recognition challenge 😁</li> </ul> <p>thanks bro <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a> 👍</p> <p>thank you !</p> <p>RandAugment with apply_prob=1, num_ops=2, and magnitude randomly chosen from [1, 14]. I only used rotate, translate, shear and flip and removed all the color-based augmentations.<br> Cutout with apply_prob = 0.5, other params. were default from other implementations (albumentations, torch etc)</p>
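<p>Since the write-up notes that focal loss (with label smoothing) beat plain cross entropy, here is a minimal multi-class focal loss sketch in PyTorch; gamma=2.0 is a common default and an assumption, not necessarily the team's value.</p> <p><code>
# Multi-class focal loss: down-weights well-classified samples relative to cross entropy.
import torch
import torch.nn.functional as F

def focal_loss(logits, labels, gamma=2.0):
    log_probs = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_probs, labels, reduction='none')   # per-sample cross entropy
    pt = torch.exp(-ce)                                     # probability of the true class
    return ((1.0 - pt) ** gamma * ce).mean()                # focal modulation, then average
</code></p>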
Google Landmark Retrieval 2020
8th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, thanks to Google and Kaggle for hosting this competition and congratulations to the winners :)</p> <p>Since our solution is quite similar to the ones already posted and to DELG itself, I'll keep it simple and brief.</p> <h3>Dataset</h3> <ul> <li>GLDv2 clean</li> <li>80% training, 20% val split by image</li> </ul> <h3>Loss</h3> <ul> <li>ArcFace Layer<ul> <li>Margin: 0.3</li> <li>Scale: 46</li></ul></li> </ul> <h3>Models</h3> <ul> <li>ResNet101</li> <li>EfficientNetB5</li> <li>GeM pooling<ul> <li>p=3 frozen for R101</li> <li>p trained for B5</li></ul></li> <li>2048d descriptors by applying FC + BN after pool</li> </ul> <h3>Training</h3> <ul> <li>Trained until convergence at 512x512</li> <li>Fine-tuned for a few epochs at 640x640</li> <li>Around 35 epochs for R101 and 20 for B5</li> </ul> <h3>Inference</h3> <ul> <li>Multi-scale TTA<ul> <li>R101: Resize to (640, 768) squared images </li> <li>B5: Resize to 640 squared images + resize to 1024 preserving AR</li> <li>2048d descriptors per model by averaging multi-scale predictions followed by l2-normalization</li></ul></li> <li>Ensembling by concatenating the models' predictions into a 4096d descriptor followed by l2-normalization (a sketch is included at the end of this write-up)</li> </ul> <p>We used Kaggle's TPU. We used a CosineAnnealing schedule (approximately, because we did it by hand). No augmentations, just a preprocessing step where we would sample an AR and crop it from the original image (similar to DELF's original one). We didn't try single resolution, but it should be just a little worse; we got (0.30952/0.34823) for B5.</p> <p>1e-3 as base, 8e-4 for the minimum. But we scaled it linearly with batch size since TPUs can fit huge batches.</p> <p>Thank you Peter :D</p> <p>We only used Kaggle's TPUs. For R101 it was about 1 hour per epoch at 512x512, whereas for B5 it was about 1h20min.</p> <p>Thanks. I used TF</p>
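<p>The multi-scale TTA and concatenation ensembling above can be sketched as follows (NumPy; the helper names and the callable interface for the models are assumptions, not the team's actual code):</p>
<pre><code>import numpy as np

def l2_normalize(x, eps=1e-12):
    return x / max(np.linalg.norm(x), eps)

def multi_scale_descriptor(model_fn, image, scales):
    """Average one model's L2-normalized descriptors over several input resolutions."""
    descs = [l2_normalize(model_fn(image, size)) for size in scales]
    return l2_normalize(np.mean(descs, axis=0))             # e.g. 2048-dim per model

def ensemble_descriptor(image, models_and_scales):
    """models_and_scales: list of (model_fn, scales); model_fn(image, size) returns a vector."""
    parts = [multi_scale_descriptor(fn, image, scales) for fn, scales in models_and_scales]
    return l2_normalize(np.concatenate(parts))              # e.g. 4096-dim for two models

# usage sketch: ensemble_descriptor(img, [(r101_fn, [640, 768]), (b5_fn, [640, 1024])])
</code></pre>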
Google Landmark Retrieval 2020
9th place solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Amp (Automatic Mixed Precision) was effective in that it allowed for faster training and more efficient experimentation in a limited amount of time.<br> Although unconfirmed, we believe that a model without amp may have better accuracy than a model with amp.<br> The final model we submitted was trained without amp.</p> <p>Thanks.<br> Changing the GeM p in testing doesn't seem to work in all cases, but I changed the GeM p to account for the GLR2019 validation score as well.<br> As long as I checked some results, it seems to be effective including Private LB, as shown in the example below.</p> <table> <thead> <tr> <th>model</th> <th>input_size</th> <th>GLR2019 mAP@100</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>ResNet101 (GeMp=3)</td> <td>576</td> <td>0.3201</td> <td>0.344</td> <td>0.298</td> </tr> <tr> <td>ResNet101 (GeMp=4)</td> <td>576</td> <td>0.3209</td> <td>0.347</td> <td>0.300</td> </tr> </tbody> </table> <p>We thank all organizers for this very exciting competition.<br> Congratulations to all who finished the competition and to the winners.</p> <p>We trained models using only GLDv2clean dataset on PyTorch and converted them to TensorFlow’s saved_model.<br> Then, we validated them using GLR2019 public and private datasets.</p> <h2>Final submission</h2> <p>ResNeSt50 + ResNet101 + ResNet152 + SEResNeXt101</p> <table> <thead> <tr> <th>model</th> <th>output dim</th> <th>input size</th> <th>GLR2019 mAP@100</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>ResNeSt50</td> <td>512</td> <td>416</td> <td>0.3114</td> <td>0.335</td> <td>0.292</td> </tr> <tr> <td>ResNet101</td> <td>512</td> <td>608</td> <td>0.3243</td> <td>0.346</td> <td>0.303</td> </tr> <tr> <td>ResNet152</td> <td>512</td> <td>480</td> <td>0.3180</td> <td>0.336</td> <td>0.288</td> </tr> <tr> <td>SEResNeXt101</td> <td>512</td> <td>480</td> <td>0.3209</td> <td>0.337</td> <td>0.296</td> </tr> <tr> <td>Ensemble of 4 models</td> <td>2048 (concat)</td> <td>-</td> <td>0.3396</td> <td>0.361</td> <td>0.317</td> </tr> </tbody> </table> <h2>Model details</h2> <ul> <li>Backbones: Ensemble of ResNeSt50, ResNet101, ResNet152 and SEResNeXt101</li> <li>Pooling: GeM (p=3)</li> <li>Head: FC-&gt;BN-&gt;L2 (the same as the last year’s first place team)</li> <li>Loss: CosFace with Label Smoothing (ArcFace was also good, but CosFace was better)</li> <li>Data Augmentation: RandomResizedCrop, Rotation, RandomGrayScale, ColorJitter, GaussianNoise, Normalize, and GridMask</li> <li>LR: Cosine Annealing LR with warmup, training for 30 epochs</li> <li>Input image size in training: 352</li> </ul> <h2>What we tried and worked</h2> <ul> <li>Automatic mixed precision training</li> <li>Replace GeM p=3 with p=4 in testing</li> <li>Increase input image size last few epochs of training with freezed BN</li> <li>Large and multiple input image sizes in testing: 416, 480 and 608</li> <li>MVArcFace</li> </ul> <h2>What did not work</h2> <ul> <li>PCA whitening</li> <li>Maintaining the aspect ratio of input images in testing (perhaps because our models were trained on square images)</li> <li>Combination of arcface loss and 
pairwise (e.g., triplet) loss<br> We tried ArcFace loss together with a pairwise loss (specifically multi-similarity loss). However, the single ArcFace loss was better than multiple losses.</li> <li>EfficientNet</li> <li>Circle loss</li> <li>Removing noisy classes<br> We tried to remove the 3 noisiest classes, which have high variance based on the ArcFace class weights, but it did not work.</li> </ul> <h2>What we have not tried</h2> <ul> <li>Training with GLDv1 dataset</li> <li>Training without changing the aspect ratio of images </li> </ul> <p>I also tried learning the p parameter of GeM, but it didn't work.</p>
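<p>The "replace GeM p=3 with p=4 in testing" trick only touches the pooling exponent, so it can be applied purely at inference time without retraining. A minimal PyTorch sketch, assuming a standard GeM implementation:</p>
<pre><code>import torch
import torch.nn.functional as F

def gem(feature_map, p=3.0, eps=1e-6):
    """Generalized mean pooling over the spatial dims of a (B, C, H, W) tensor."""
    x = feature_map.clamp(min=eps).pow(p)
    return F.adaptive_avg_pool2d(x, 1).pow(1.0 / p).flatten(1)

feature_map = torch.randn(2, 2048, 16, 16)    # dummy backbone output
train_desc = gem(feature_map, p=3.0)          # pooling used during training
test_desc = gem(feature_map, p=4.0)           # only the exponent changes at test time
</code></pre>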
Google Landmark Retrieval 2020
Finetuning the baseline - silver medal solution [0.285 public LB]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Since the majority of teams just resubmitted a public model, it turned out that in this competition it was enough just to beat the baseline in order to receive a silver medal, which is what I did.</p> <p>My idea for doing that was finetuning a baseline model. It turned out it was much harder than it sounds, due to extensive computational resources required for the task! Here are some pointers on how I was able to train on the huge dataset provided by Google, while using limited computational resources and budget.</p> <p>In order to train a model, I decided to use TPU from Colab Pro (9.99$ / month) since it gives performance comparable to multiple GPUs (so allows for fast iteration), and I was unable to get GPU / TPU quota from Google Cloud.</p> <p>In order to train on the TPUs, the dataset has to be uploaded to Google Storage, preferably in TFRecord format. I used Google Compute Engine to download and prepare the dataset using scripts from <a href="https://github.com/tensorflow/models/tree/master/research/delf/delf/python/training" target="_blank">https://github.com/tensorflow/models/tree/master/research/delf/delf/python/training</a> I think it used around 6$ of Google's 300$ free trial.</p> <p>To sum up - you could beat the baseline for as little as 9.99$ (or even 0$ if you use Kaggle's TPU, but 30h/week was too stringent for me, although it would be more than enough to train the model), if you used TPUs for training.</p> <p>Some tricks I used in order to be able to train on TPUs:</p> <ul> <li>Reimplement model in Keras to get access to high-level APIs for training on TPUs. This required model rewrite in Keras and using this trick: <a href="https://www.kaggle.com/qiubit/fetching-weights-from-baseline-model" target="_blank">https://www.kaggle.com/qiubit/fetching-weights-from-baseline-model</a></li> <li>Only use a subset of 14000 classes (7000 largest and the rest uniformly sampled) out of ~81k available in gldv2_clean dataset, in order to cut the parameters of the ArcFace layer from ~165M to around ~29M. A necessity, since memory on TPU available in Colab is really limited (just 8GB).</li> <li>Only retrain a couple of final layers. I was unable to fit into memory of TPU when re-training entire model. Un-freezing consecutive layers gave subsequent public leaderboard improvements.</li> <li>Uploading TFRecords to multi-region US Google Storage bucket to avoid bills for Network egress from Google Cloud (all TPUs on Colab are currently located in US).</li> </ul> <p>Specific methods used during finetuning:</p> <ul> <li>Data augumentation.</li> <li>Train on 800x800 input (TPU input has to have fixed size for entirety of training). It was inspired by last year's solution of team smlyaka, which suggested larger image size for final epochs. 
The input size of 800x800 was chosen, because I discovered that this is max dimension of any image in gldv2_clean dataset.</li> <li>Cyclical learning rate, Learning rate finder: <a href="https://www.pyimagesearch.com/2019/07/29/cyclical-learning-rates-with-keras-and-deep-learning/" target="_blank">https://www.pyimagesearch.com/2019/07/29/cyclical-learning-rates-with-keras-and-deep-learning/</a> <a href="https://www.pyimagesearch.com/2019/08/05/keras-learning-rate-finder/" target="_blank">https://www.pyimagesearch.com/2019/08/05/keras-learning-rate-finder/</a></li> <li>Ensemble with 0.277 baseline was used to increase score a little bit.</li> </ul>
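<p>A hedged Keras sketch of the "only retrain a couple of final layers" idea is below. ResNet50 and the plain softmax head are stand-ins for the re-implemented baseline and the ArcFace layer, and all names and hyperparameters are assumptions rather than the author's actual code:</p>
<pre><code>import tensorflow as tf

def build_finetune_model(num_classes=14000, num_trainable_layers=10, image_size=800):
    # ResNet50 stands in for the re-implemented baseline backbone
    backbone = tf.keras.applications.ResNet50(include_top=False, pooling="avg",
                                              input_shape=(image_size, image_size, 3))
    for layer in backbone.layers[:-num_trainable_layers]:
        layer.trainable = False               # frozen: the full model does not fit in TPU memory
    # plain softmax head over the reduced 14k-class subset (placeholder for the ArcFace layer)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(backbone.output)
    model = tf.keras.Model(backbone.input, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy")
    return model

# On a Colab/Kaggle TPU this would be built and compiled inside the scope()
# of a tf.distribute.TPUStrategy.
</code></pre>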
Google Landmark Retrieval 2020
[Update] 3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The reason we do this is not to completely destroy the discriminative part of the image.<br> For landmark images, the most discriminative part is usually near the center of the image. Therefore, we only cutmix at the corners of the image to ensure that both image A and image B can be recognized.</p> <p>[Update 2020/10/09] our code: <a href="https://github.com/Raykoooo/3rd_place_to_Kaggle_Google_Landmark_Retrieval_2020" target="_blank">https://github.com/Raykoooo/3rd_place_to_Kaggle_Google_Landmark_Retrieval_2020</a></p> <p>[Update] arxiv: <a href="https://arxiv.org/abs/2008.10480" target="_blank">https://arxiv.org/abs/2008.10480</a></p> <p>Our solution:<br> <a href="https://drive.google.com/file/d/1QOeQqVckIlvDAWfAh0E_lrMEqmQR_Bzr/view" target="_blank">https://drive.google.com/file/d/1QOeQqVckIlvDAWfAh0E_lrMEqmQR_Bzr/view</a></p> <p>We have submitted a paper describing our method to arxiv. But it cannot be accessed now. Once the arxiv link of the paper can be accessed, we will immediately post it here. </p>
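<p>A minimal sketch of the corner-only CutMix idea described above, assuming HWC NumPy images of equal size; the exact patch-size range and the label-weighting scheme are assumptions, not the team's implementation:</p>
<pre><code>import numpy as np

def corner_cutmix(img_a, img_b, label_a, label_b, min_frac=0.2, max_frac=0.4, rng=np.random):
    """img_a, img_b: HWC arrays of the same size. Paste a patch of B into one corner of A."""
    h, w = img_a.shape[:2]
    ph = int(h * rng.uniform(min_frac, max_frac))
    pw = int(w * rng.uniform(min_frac, max_frac))
    corner = rng.randint(4)                   # 0 top-left, 1 top-right, 2 bottom-left, 3 bottom-right
    y0 = 0 if corner in (0, 1) else h - ph
    x0 = 0 if corner in (0, 2) else w - pw
    mixed = img_a.copy()
    mixed[y0:y0 + ph, x0:x0 + pw] = img_b[y0:y0 + ph, x0:x0 + pw]
    lam = 1.0 - (ph * pw) / float(h * w)      # weight of label_a, proportional to its visible area
    return mixed, (label_a, lam), (label_b, 1.0 - lam)
</code></pre>
<p>Because only a corner is replaced, the (usually central) discriminative region of image A stays visible, which is the motivation given above.</p>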
Google Landmark Retrieval 2021
11th Place Solution: Supervised Contrastive Pretraining + Query and Database expansion
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2021 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Google and Kaggle for organizing this interesting competition! <br> Congratulations to the winners!</p> <p>It was the first large-scale competition for both teams <a href="https://www.kaggle.com/ofitserovlad" target="_blank">@ofitserovlad</a>, <a href="https://www.kaggle.com/evgenysidorov" target="_blank">@evgenysidorov</a>, <a href="https://www.kaggle.com/joven1997" target="_blank">@joven1997</a> and we spent a lot of time on the experiments and came up with a strong pipeline pretty close to the end of the competition. Even though we finished just short of the gold medal we are still more than satisfied with the results and enjoyed the competition immensely.</p> <h1>Solution Overview</h1> <p><img src="https://user-images.githubusercontent.com/14181915/135837030-4bdf97b7-aa89-42db-9b2c-f97c837cdb58.png" alt=""></p> <h1>1. Supervised Contrastive Pretraining</h1> <h3>Models</h3> <p>Efficientnetv2_m</p> <h3>Validation</h3> <p>mAP calculated with Index (100k sampled) and Test (1k) data from google’s repository. CV and LB had good correlation with LB being almost always 0.015-0.02 lower than CV (ex. CV 0.380 -&gt; LB 0.360)</p> <h3>Training Pipeline</h3> <p><img src="https://user-images.githubusercontent.com/14181915/135837248-2b9063ec-aaed-40a1-987b-2bc3f5181538.png" alt=""></p> <p>Team Fat Cat finalized a pretty strong 3 stage training process a bit too late (1.5 week before the end) into the competition:<br> Artificially Sampled Supervised Contrastive Pretraining with class balanced temperature on 448 resolution and clean dataset <br> Finetuning with AdaCos with class balanced Focal Loss with 512 resolution on clean dataset <br> Same as stage 2 but with 604 resolution and full dataset </p> <h3>Artificially Sampled Supervised Contrastive Pretraining with class balanced temperature</h3> <p>Pretraining part was inspired by this <a href="https://arxiv.org/abs/2004.11362" target="_blank">paper</a>. At this stage the model should learn easy image features and be more generalizable before finetuning stage. The premise of the paper is pretty similar to SimCLR, which takes 2 augmented (color jitter, random flip) random cuts of the image and makes them close in the cosine space. The extension of Supervised Contrastive paper is to make cuts of the image of the same class in the batch also close together. This paper also normalizes the image embeddings before feeding them into the nonlinear projection head. Outputs of the projection head are then used for supervised contrastive loss. </p> <p>The main problem with implementation of the original paper was that they used enormous batch size (8192) and only 1000 classes and by doing so are somewhat guaranteed to have multiple instances of the same class in the batch. When I tried that kind of approach with a batch of 256 cv was at ~0.180, clearly indicating that there was very little probability of the same class occurring in the batch. </p> <p>1.5 weeks before the end I realized that I can artificially sample cuts of the different images of the same class into 1 batch. 
I shuffled my dataset in such a way that each 4 consecutive images were augmented cuts of the same class. This kind of pretraining resulted in cv mAP of .273 (and was much faster than cutting 2 parts of the same image), after 10 epochs of pretraining. Graphs showed that given more time it could reach .300. For pretraining I used SGD with cosine warmup and annealing. </p> <p>Additionally, individual temperature for each class was used. It was derived using class-balanced strategy and omitting final normalization (i.e sum of the weights equals to the number of classes). </p> <h3>Finetuning</h3> <p>For finetuning Adaptive Cosine loss was used. It is an extension of AcrFace and was also used by the previous year’s winner. Supervised Contrastive pretraining was a crucial step for faster convergence during finetuning. Without pretraining after 1 epoch training on clean data and 512 resolution CV mAP was at .210 and with pretraining it started at 0.31 and reached 0.35 after 10 epochs with SGD and cosine scheduling. However, with better hyperparameters it potentially could have climbed even higher. Then, the resolution was increased to 604 and finetuned on the full dataset. </p> <p>At the very end of the competition we added additional weight to landmarks from Asia, Africa, and Oceania to try to match the distribution of the test data which was described by Google's paper. It helped to bridge the gap between LB and CV. </p> <h1>2. Encoder + GEM + ArcHead</h1> <h3>Networks:</h3> <p>EfficientnetB6, 704x704<br> EfficientnetB7, 664x664<br> Seresnext101_32x4d, 512x512</p> <h3>Training procedure:</h3> <p>ArcFace loss based on Focal Loss with various modifications was used to train the models. <br> We quickly realized that big models with big resolution give improvements up to 0.03-0.05.<br> Training big models was optimized by using FP16 precision, RandomCrop augmentations to reduce training image size (for example, from 704 to 664) and doing all the augmentations on GPU with Kornia.<br> One model training can be divided into two steps:<br> Training on GLDv2 cleaned<br> Fine tuning on GLDv2 full dataset with smooth labeling and doubled weights for cleaned samples to reduce effect of the noisiness </p> <h3>3. Post Processing</h3> <p>Two stage post processing procedure was suggested by Fat Cat. It consisted of query expansion and database expansion:</p> <p>Query expansion - for each sample from test set we find top2 samples from index set by cosine similarity and generate new embedding for test sample by normalizing sum of weighted by cosine similarity score top2 index samples and test sample<br> Database expansion - for each test and index sample we find top16 samples from GLDv2 train set and based on their classes define the class of the test/index sample. Using this classes we re-rank top100 list of index samples for each test sample that we get after qe expansion. We put on top of the new list samples from the top100 list that have the same class as the test sample in the same order. Then we put samples from the index set that have the same class as the test sample but weren't in the top100 list before sorted by cosine similarity score, to the end of the new list we put all samples from the top100 list that have different class than the test sample. Finally, top100 samples from the new list are taken.<br> This post processing strategy improved our LB score by around 0.07.</p> <h3>Ensemble</h3> <p>For all the post processing procedures we used faiss to speed it up using GPU instead of CPU. 
For testing the ensemble we used Biba &amp; Boba’s Seresnext101_32x4d model (0.418 LB) and Fat Cat’s model (0.425 LB). Our team explored two different ways of merging models.</p> <ul> <li>“Majority voting” - for each model in the ensemble the top100 samples were found independently, and then all the image ids were sorted based on the sum of the positions these ids have in each model’s top100 list. Finally, the top 100 indices from the sorted list were taken. (0.433 on LB)</li> <li>“Concat” - concatenating the normalized models’ embeddings. Before the concat, each embedding was multiplied by the weight given to its model. When all embeddings are concatenated we normalize the new big one. To the new big embedding we applied the two-stage post-processing described above. (0.455 on LB)</li> </ul> <p>The final ensemble was made with:<br> Two EfficientnetV2 M models trained with supervised contrastive pretraining + AdaCos<br> Seresnext101_32x4d and EfficientnetB6 “Encoder + GEM + ArcHead” models</p> <h3>Final Thoughts</h3> <p>It was both teams' first large-scale kaggle competition, so a lot of different mistakes were made along the way. After all the experiments, here are the ingredients of what we believe could be a very strong training pipeline:</p> <ul> <li>Better cropping <a href="https://arxiv.org/abs/1906.06423" target="_blank">paper</a></li> <li>Artificially sampled Supervised contrastive pretraining with 8 cuts of different images (here we used 4) with 448 validation resolution and 20 epochs on clean data (see the sketch at the end of this write-up)</li> <li>Finetuning with ArcFace at 512 validation resolution and class balanced focal loss for 20 epochs with SGD without schedulers on clean data</li> <li>Same as the previous step but with 736 resolution and the total data</li> <li>Ensemble by concatenation </li> <li>Post processing: qe + db expansion </li> </ul>
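<p>As referenced above, here is a minimal sketch of the "artificially sampled" ordering used for the supervised contrastive pretraining: the dataset is re-ordered so every 4 consecutive samples share a class, so each batch contains positives. Function names and the with-replacement fallback are assumptions:</p>
<pre><code>import random
from collections import defaultdict

def grouped_order(labels, group_size=4, seed=0):
    """Return an index order where every `group_size` consecutive samples share a class."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    classes = list(by_class)
    rng.shuffle(classes)
    order = []
    for lab in classes:
        idxs = by_class[lab]
        rng.shuffle(idxs)
        # repeat images if a class has fewer than group_size examples
        order.extend(idxs[i % len(idxs)] for i in range(group_size))
    return order

# usage: feed grouped_order(train_labels) to a non-shuffling sampler;
# a batch of 256 then contains 64 classes with 4 images each.
</code></pre>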
Google Landmark Retrieval 2021
13th Place - Soft Landmark ID Assignment
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2021 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to google and kaggle for hosting this competition and clairvoyant.ai for supporting this work. It was a lot of fun and a full of learning experience.</p> <p>Certainly, due to limited hardware, extremely large dataset, and time constraints, my models are not that powerful like top teams, but I would love to discuss post-processing, which uses soft landmark id assignment for index and test images, and gave <strong>+0.075</strong> boost on private LB.</p> <p>But before, to give a glimpse of models.</p> <h2>Models</h2> <p>I was using CurricularFace loss along with Adam Optimizer and Gradient Accumulation to train my models. I was using an ensemble of 5 models. All models were trained on the full GLDV2 dataset.</p> <p>Their scores without any post-processing and using simple KNN on model embeddings are - </p> <table> <thead> <tr> <th>Model</th> <th>Image Size</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>Effnet B4</td> <td>512</td> <td>0.322</td> <td>0.342</td> </tr> <tr> <td>Effnet B6</td> <td>384</td> <td>0.331</td> <td>0.349</td> </tr> <tr> <td>Effnet v2 - s</td> <td>512</td> <td>0.325</td> <td>0.351</td> </tr> <tr> <td>Effnet v2 - m</td> <td>384</td> <td>0.357</td> <td>0.373</td> </tr> <tr> <td>Effnet v2 - xl</td> <td>256</td> <td>0.318</td> <td>0.319</td> </tr> <tr> <td>Concat all embbedings</td> <td></td> <td>0.379</td> <td>0.389</td> </tr> <tr> <td>Concat all + Post Processing</td> <td></td> <td>0.448</td> <td>0.464</td> </tr> </tbody> </table> <h2>Post-Processing</h2> <h3>Soft Landmark ID assignment</h3> <p><strong>Step 1:</strong> Landmark assignment - Test Images</p> <p>For the landmark id assignment, I calculated the cosine similarity between the test image embeddings and the embeddings of images in the GLDV2 dataset. Next, I picked the top 'n' most similar GLDV2 images for every image in test data and assigned labels and label similarity scores to that image. 
An example label and label score calculation for an image A in the test data is shown below.</p> <p>For a test image A, pick the top 'n' (in my case, it was 5) most similar images from GLDV2 -</p> <table> <thead> <tr> <th></th> <th>GLDV2 Img 1</th> <th>GLDV2 Img 2</th> <th>GLDV2 Img 3</th> <th>GLDV2 Img 4</th> <th>GLDV2 Img 5</th> </tr> </thead> <tbody> <tr> <td>Landmark ID</td> <td>121</td> <td>10</td> <td>121</td> <td>10</td> <td>199</td> </tr> <tr> <td>Similarity Score</td> <td>0.9</td> <td>0.7</td> <td>0.5</td> <td>0.47</td> <td>0.45</td> </tr> </tbody> </table> <p>Then the image's landmark ID scores can be calculated as</p> <table> <thead> <tr> <th>Landmark ID</th> <th>Images</th> <th>Landmark ID score</th> </tr> </thead> <tbody> <tr> <td>121</td> <td>GLDV2 Img 1, GLDV2 Img 3</td> <td>0.9+0.5 = 1.4</td> </tr> <tr> <td>10</td> <td>GLDV2 Img 2, GLDV2 Img 4</td> <td>0.7+0.47 = 1.17</td> </tr> <tr> <td>199</td> <td>GLDV2 Img 5</td> <td>0.45</td> </tr> </tbody> </table> <p>Consider the landmark ID score as zero for landmark IDs that are not in the top 5.</p> <p><strong>Step 2:</strong> Landmark assignment - Index Images</p> <p>A similar procedure was followed for index images, and landmark IDs and landmark ID scores were assigned to them as well.</p> <p><strong>Step 3:</strong> Test-Index similarity score calculation</p> <p>Finally, the similarity between image A from the test set and image B from the index set is calculated as</p> <p>Similarity (A, B) = Landmark_ID_1_score(A) x Landmark_ID_1_score(B) +<br> Landmark_ID_2_score(A) x Landmark_ID_2_score(B) +<br> Landmark_ID_3_score(A) x Landmark_ID_3_score(B) + …</p> <p>Then, for every image in the test data, arrange all similar images from the index data in descending order based on their similarity score.</p> <p>As the next step, at the end of this list, I concatenated the KNN neighbours not found in the above search.</p> <p>This post-processing gave about a +0.069 boost on the public LB and +0.075 on the private LB, with 0.448 and 0.464 as the final Public and Private scores.</p> <h2>Additional</h2> <p>Additionally, I am <a href="https://www.kaggle.com/prateekagnihotri/curricularface-and-gradient-accumulation" target="_blank">attaching</a> my gradient accumulation code for TPUs and a TensorFlow implementation of CurricularFace. I hope you will find them useful as well.</p> <p>Thanks, Enric!<br> Img sizes mentioned above are the train img sizes. During inference, like in my <a href="https://www.kaggle.com/prateekagnihotri/change-image-size-to-get-0-281-lb" target="_blank">solution</a> last time, I did inference at train-time img size x sqrt(2).</p> <p>I am happy that you find it insightful. But it is more painful not to win gold in any😉. Anyway, you have to do everything you can and have to be fortunate. I am doing my part. I hope fortune will smile.</p>
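<p>A small sketch of the soft landmark-ID assignment and the test-index similarity described above (names are illustrative, not the author's code):</p>
<pre><code>from collections import defaultdict

def landmark_scores(neighbor_ids, neighbor_sims, landmark_of, top_n=5):
    """Accumulate cosine similarity per landmark id over the top-n GLDv2 neighbours."""
    scores = defaultdict(float)
    for img_id, sim in list(zip(neighbor_ids, neighbor_sims))[:top_n]:
        scores[landmark_of[img_id]] += sim
    return scores                             # e.g. {121: 1.4, 10: 1.17, 199: 0.45}

def soft_similarity(test_scores, index_scores):
    """Sparse dot product over the landmark ids shared by a test and an index image."""
    common = set(test_scores).intersection(index_scores)
    return sum(test_scores[lid] * index_scores[lid] for lid in common)
</code></pre>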
Google Landmark Retrieval 2021
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2021 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, let me thank kaggle staff and google team for organizing the 2021 landmark competitions. I am still overwhelmed and shocked by my result, which resulted in becoming #1 on the overall kaggle leaderboard, an ultimate goal for many kagglers. My solutions are similar in terms of models and training routine for the retrieval and the recognition task, but slightly different in details, especially in postprocessing, i.e. in re-ranking procedure. I try to give attribution to people, whose smart ideas I levered from last year's solutions. For retrieval track I re-used several ideas from 2nd place solution of google landmark recognition 2020 by <a href="https://www.kaggle.com/bestfitting" target="_blank">@bestfitting</a> and 3rd place solution of team <em>All Data Are Ext</em> (@boliu0 <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> <a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a> <a href="https://www.kaggle.com/alexanderliao" target="_blank">@alexanderliao</a> ) as well as post-processing ideas from winning team of google landmark retrieval 2019 (<em>smlyaka</em> <a href="https://www.kaggle.com/confirm" target="_blank">@confirm</a> <a href="https://www.kaggle.com/lyakaap" target="_blank">@lyakaap</a>) Please comment if I forget someone.</p> <h2>TLDR</h2> <p>My solution is an ensemble of DOLG models [1] and hybrid swin transformers, which are both especially strong architectures for integrating local and global descriptors of a landmark into a single feature vector in a single-stage manner. I trained the models using a 3-step approach where I enlarged the training dataset and image size in each step. For postprocessing I used an improved version of the idea from team smlyaka from 2019</p> <h2>Preamble</h2> <p>Before going into details I want to address a few circumstances leading me to the solution. Firstly, compared to last year, this year's competition had a significantly more tense timeline, which forced me to skip following some ideas I had and concentrate on the most promising. That especially led to the choice to not consider spatial verification with local descriptors at all, as it has (at least in my opinion) a bad cost-benefit trade-off, especially in large scale image recognition, with limited kernel runtime. Secondly, since top teams of last year's iterations published their solutions in detail, accessible to everybody, the bar to outperform here was quite high. To be consistent in naming I use the following for subsets of gldv2</p> <p><strong>gldv2</strong> available under [2] (5 mio images, 200k classes)<br> <strong>gldv2c</strong> (cleaned subset of gldv2 / this years training data) (1.6 million training images, 81k classes)<br> <strong>gldv2x</strong>, not-cleaned subset of gldv2 restricted to only have the same 81k classes as gldv2c (3.2 mio images)</p> <p>Furthermore I use GLRec for past google landmark recognition competitions and GLRet for retrieval. 
</p> <h2>Cross-validation</h2> <p>For cross validation I used the 2019 retrieval solution which can be found under [2],[4], but filtered out “ignored” images to save time in the validation step I performed at the end of every training epoch. With this I achieved a very good correlation between Public LB and my local validation.</p> <h2>Training routine</h2> <p>My training routine follows train dataset choice from 2nd place GLRec 2020 but with some improvements using ideas from <em>All Data Are Ext</em><br> As a first step I train models on small image size (e.g. 224x224) on the clean gldv2c for around 10 epochs. Then I use medium large image size (e.g. 512x512) and train for a long time on the more noisy gldv2x (30-40 epochs). Finally I finetune on large image size (e.g. 768x768) also on gldv2x for a few epochs. I used Adam optimizer and cosine annealing schedule in every step. I used same augmentation as <em>All Data Are Ext</em>, which proved to be very efficient:</p> <pre><code>cfg.train_aug = A.Compose([ A.HorizontalFlip(p=0.5), A.ImageCompression(quality_lower=99, quality_upper=100), A.ShiftScaleRotate(shift_limit=0.2, scale_limit=0.2, rotate_limit=10, border_mode=0, p=0.7), A.Resize(image_size, image_size), A.Cutout(max_h_size=int(image_size * 0.4), max_w_size=int(image_size * 0.4), num_holes=1, p=0.5), ]) cfg.val_aug = A.Compose([ A.Resize(image_size, image_size), ]) </code></pre> <h2>Model Architectures</h2> <p>For me this is the most fun part in every competition, to design and explore model architectures. I like the idea of models solving multiple parts of the problem at hand in end2end fashion. In landmark recognition it is crucial to leverage both, global and local image information. While historically this was solved in a two-stage fashion by training a global descriptor for preliminary recognition on the one hand and extracting local descriptors for post reranking using spatial verification on the other hand. Recently attempts have been made to train local and global descriptors simultaneously using a single model (DELF/ DELG). But that still uses a 2nd stage spatial verification. <br> I implemented two architectures that combine local and global descriptor into a single image embedding. Both use an effientnet encoder and an sub-arcface head with dynamic margins which was shown by team All Data Are Ext in 2020 to outperform classic arcface.</p> <h3>DOLG</h3> <p>The very recently released DOLG paper, goes on step further claiming to train local and global descriptors also simultaneously but additionally fusing them within the model into a single descriptor. Unfortunately/ fortunately it was so new that no code was released so far and I needed to implement deep orthogonal fusion by myself. Luckily the paper gives enough details and I was able to implement it quickly. The following shows the final architecture:</p> <p><img src="https://i.imgur.com/9FiSEet.png" alt=""></p> <h3>Hybrid-Swin-Transformer</h3> <p>Another architecture I explored was a Hybrid-Transformer, half a CNN encoder with a transformer ending. The idea behind is that transformers are like graph neural nets on image patches connecting distant local features with each other. A property CNNs are lacking, but which are very important for landmark recognition/ retrieval. However, pretrained image transformers are mostly done on 224 or 384 image size, and I wanted to go bigger. Also the encoding of each patch is of a higher quality when using a sophisticated CNN for the patch embedding. 
Luckily the awesome timm repository gives you the tool set to glue together a CNN and an image transformer, and I ended up with an architecture as shown below. In particular I used the Swin transformer [3], as I think the moving-window approach captures more diverse local descriptors, and EfficientNet for encoding.</p> <p><img src="https://i.imgur.com/2iXNuBA.png" alt=""></p> <p>However, for training several problems arise: Firstly, the pretrained weights of the image transformer and the CNN are trained separately, hence not fitting very well together, resulting in NaNs while training. Secondly, since image transformers require a fixed input size, it's a bit more difficult to follow a training routine with increasing image size in each step. I found that the following procedure fixes the issues and gives a good final model; especially step 3 is important to make the hybrid swin transformer work:</p> <ol> <li>train only the image transformer on 224x224 size</li> <li>Exchange the original patch embedding module with blocks 0, 1, 2 from the CNN encoder</li> <li>freeze the image transformer and sub-arcface head and train for 1 epoch on 448x448</li> <li>unfreeze the image transformer and head and train for 30-40 epochs</li> <li>add block 3 from the CNN encoder and finetune a few epochs on 896x896</li> </ol> <p>For two of my models I used the “stride trick”, i.e. putting a stride of (1,1) in the first conv layer to effectively double the used resolution instead of doubling the actual image size, which I think is better for small original images.<br> I also retrained a few models from the 2nd and 3rd place solutions of the 2020 recognition competition following their provided git repositories and instructions, and evaluated possible ensembles on my local cross validation. Models from the All Data Are Ext team were only weakly correlated to mine and added a nice diversification benefit.</p> <p>In my final ensemble I had the following models:</p> <ul> <li>DOLG-EfficientNet b5 (768x768)</li> <li>DOLG-EfficientNet b6 (768x768)</li> <li>DOLG-EfficientNet b7 (448x448) stride 1</li> <li>Hybrid SwinBase224-EfficientNet b5 (896x896)</li> <li>Hybrid SwinBase224-EfficientNet b3 (896x896)</li> <li>Hybrid SwinBase384-EfficientNet b6 (384x384) stride 1</li> <li>All Data Ext GLRec2020 EfficientNet b6 (512x512) </li> <li>Bestfitting GLRec2020 EfficientNet b5 (768x768)</li> </ul> <h2>Post-processing</h2> <p>For post-processing I developed an improved version of the <em>smlyaka</em> 2019 approach. They assigned landmark labels to each query and index image using cosine similarity to train images and then “hard” up-ranked index images with labels matching the query labels to the first positions. I found that I can improve this approach by additionally considering cosine similarity as a confidence score and performing a “soft” up-rank for index images where the assigned label matches the query label and a “soft” down-rank for index images where the labels do not match. In the end the score of an index image is set to cossim(query,index) + 1 * cossim(index,train) if the labels match and <br> cossim(query,index) - 0.1 * cossim(index,train) if the labels do not match (a small sketch of this rule is included after the references below). </p> <p>For further details please refer to the arxiv paper. Thank you for reading. 
</p> <p>[1] <a href="https://arxiv.org/abs/2108.02927" target="_blank">https://arxiv.org/abs/2108.02927</a> (DOLG)<br> [2] <a href="https://github.com/cvdfoundation/google-landmark" target="_blank">https://github.com/cvdfoundation/google-landmark</a><br> [3] <a href="https://arxiv.org/abs/2103.14030" target="_blank">https://arxiv.org/abs/2103.14030</a> (Swin)<br> [4] <a href="https://s3.amazonaws.com/google-landmark/ground_truth/retrieval_solution_v2.1.csv" target="_blank">https://s3.amazonaws.com/google-landmark/ground_truth/retrieval_solution_v2.1.csv</a></p> <p>Code: <a href="https://github.com/ChristofHenkel/kaggle-landmark-2021-1st-place" target="_blank">https://github.com/ChristofHenkel/kaggle-landmark-2021-1st-place</a> (in progress) <br> Paper: <a href="http://arxiv.org/abs/2110.03786" target="_blank">http://arxiv.org/abs/2110.03786</a></p>
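<p>A small sketch of the “soft” up/down-ranking rule quoted in the post-processing section above; the variable names are mine, while the scoring formula is the one stated in the write-up:</p>
<pre><code>def rerank_score(sim_query_index, sim_index_train, labels_match, boost=1.0, penalty=0.1):
    """Soft up-rank when the assigned labels match, soft down-rank otherwise."""
    if labels_match:
        return sim_query_index + boost * sim_index_train
    return sim_query_index - penalty * sim_index_train

def rerank(candidates, query_label):
    """candidates: list of (index_id, sim_query_index, sim_index_train, index_label)."""
    scored = [(rerank_score(s_qi, s_it, lab == query_label), idx)
              for idx, s_qi, s_it, lab in candidates]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [idx for _, idx in scored]
</code></pre>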
Google Landmark Retrieval 2021
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2021 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Code: <a href="https://github.com/WesleyZhang1991/Google_Landmark_Retrieval_2021_2nd_Place_Solution" target="_blank">https://github.com/WesleyZhang1991/Google_Landmark_Retrieval_2021_2nd_Place_Solution</a><br> Paper: <br> <a href="https://arxiv.org/abs/2110.04294" target="_blank">https://arxiv.org/abs/2110.04294</a> OR<br> <a href="https://github.com/WesleyZhang1991/Google_Landmark_Retrieval_2021_2nd_Place_Solution/blob/master/ILR2021_2nd_solution.pdf" target="_blank">https://github.com/WesleyZhang1991/Google_Landmark_Retrieval_2021_2nd_Place_Solution/blob/master/ILR2021_2nd_solution.pdf</a>. <br> Submission Notebook: <a href="https://www.kaggle.com/zhangwesley/ilr2021-retrieval-2nd-solution" target="_blank">https://www.kaggle.com/zhangwesley/ilr2021-retrieval-2nd-solution</a></p> <h1>Introduction</h1> <p>Image retrieval is a very important computer vision task which aims at finding images similar to the query image. It is different from instance-level retrieval: general image retrieval aims to retrieve objects with the same appearance as the query, even if they are not the same instance, which makes the task easier than instance-level retrieval. <br> On the other hand, it is different from fine-grained image retrieval. Fine-grained image retrieval pays more attention to local regions to discover fine details due to its small intra-category variance, as in person re-identification.<br> Landmark retrieval is an instance-level retrieval task, which aims to search for the same landmark in a large candidate set. In this paper, we introduce the techniques we used in the fourth landmark retrieval competition, Google Landmark Retrieval 2021, held on Kaggle. Some of them are inspired by state-of-the-art algorithms from person re-identification.</p> <p>Besides, we also use many techniques that are commonly used in previous competitions, including model structures, training strategies and loss functions. These techniques have been well explored and introduced in previous competitions, so here we only introduce our own methods, i.e. the new contributions listed below.</p> <ul> <li>We involve bag-of-tricks from person re-identification and conduct careful experiments on these tricks.</li> <li>We propose a continent-aware sampler to balance the distribution of training images based on their continent tags.</li> <li>We design a Landmark-Country aware reranking algorithm and integrate it with the K-reciprocal reranking method.</li> </ul> <h1>Method and experiments</h1> <h2>Train and validation set</h2> <p>The official GLDv2 dataset provides a clean and a full version. As pointed out by previous works, many noisy images with the same IDs as the clean set have been filtered out during the cleaning stage. Expanding these noisy data into the clean set results in ‘c2x‘. Although dissimilar, these noisy images may contain very valuable information, e.g., the indoor or outdoor views of a building. What’s more, we also include the index set from GLDv2, which shares many common ids with trainfull. 
We list the dataset below for a better understanding.</p> <table> <thead> <tr> <th>Trainset</th> <th># Samples</th> <th># Labels</th> </tr> </thead> <tbody> <tr> <td>Clean</td> <td>1,580,470</td> <td>81,313</td> </tr> <tr> <td>C2x</td> <td>3,223,078</td> <td>81,313</td> </tr> <tr> <td>Trainfull</td> <td>4,132,914</td> <td>203,094</td> </tr> <tr> <td>All</td> <td>4,825,830</td> <td>203,094</td> </tr> </tbody> </table> <p>We use the 1129 GLDv2 test set together with the 76,176 index set from the competition. We put all GT images of each query into the index set and expand it to 78,959 im- ages. In this case, any query could find all its GTs in the index set.</p> <h2>Baseline network</h2> <p>We select several large CNN networks including SE-ResNet-101, ResNeXt-101, ResNeSt101 and ResNeSt269 as backbones. IBN extension is used for SE-ResNet and ResNeXt-101. The input size is selected as 384 for pretraining and 512 for the last fine-tuning. The last stride of the CNN network is set to 1.<br> We use generalized mean-pooling (GeM) for pooling method with p=3.0. Arcface loss with scale=30 and margin=0.3 is used. We use weight decay=0.0005. Training details with gradually enlarging input size and data scale can be found in implementation details.</p> <p>Some tricks from person re-identification has been explored. Since the dataset is very large, we use R50 backbone with an in- put size of 256 × 256. We list the validation accuracy as well as public/private scores. Random Erasing randomly erases out image patches and has shown great success in many fields. Label smoothing by using soft targets that are a weighted average of the hard targets can often be useful in many computer vision tasks. From the table below, ransom erasing proves to be effective while label smoothing fails.</p> <table> <thead> <tr> <th>Setting</th> <th>Validation</th> <th>Public</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>baseline</td> <td>32.60</td> <td>28.44</td> <td>30.59</td> </tr> <tr> <td>+RE</td> <td>32.78</td> <td>29.29</td> <td>30.60</td> </tr> <tr> <td>+label smooth</td> <td>32.55</td> <td>28.21</td> <td>29.79</td> </tr> </tbody> </table> <h2>Sampling Strategy</h2> <p>On one hand, in person re-identification or face recognition, id-uniform is widely used as a data sampling strategy. For a batch, we randomly select P Ids and then K images for each Id. Thus we have P*K images as a batch. Each id is treated fair for this setting. On the other hand, softmax sampling has been widely used in previous competitions. The softmax sampling just shuffle all dataset once at the beginning of the epoch and then samples iterative through the data. Head data which appears more will be put more attention with this setting.<br> We have tried these sampling strategies and find neither of them are good enough for our task.</p> <p>As the provided landmark dataset pays more concentration on Asia landmarks, we manage to design a sampling strategy based on their continent labels. <br> First, we use the country-and-continent-codes-list to find how many countries each continent has. Then we search and list all the landmarks in each country. Based on this processing, we can get the country tag and continent tag for every landmark.</p> <p>We setup a continent sampling prob by {'Asia': 0.5, 'Europe': 0.2, 'Africa': 0.15, 'North America': 0.1, 'South America': 0.02, 'Antarctica': 0.01, 'Oceania': 0.01, 'OTHER': 0.01}. For an epoch of images, we sample continent images by the corresponding ratios. 
Also, we follow a previous paper which sets a probability of 0.66 for clean data and 0.33 for noisy data.</p> <p>We list the results for the different sampling strategies below. The widely used id-uniform strategy fails for landmark retrieval. We think the reason is the large amount of tail data. These tail data may be noisy and thus degrade the model training. The softmax strategy works better than id-uniform. The continent-aware strategy achieves the best performance.</p> <table> <thead> <tr> <th>Setting</th> <th>Validation</th> <th>Public</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>Id-uniform</td> <td>31.05</td> <td>24.78</td> <td>27.28</td> </tr> <tr> <td>Softmax</td> <td>32.60</td> <td>28.44</td> <td>30.59</td> </tr> <tr> <td>Continent-aware</td> <td>33.07</td> <td>31.37</td> <td>32.44</td> </tr> </tbody> </table> <h2>Reranking</h2> <p>Reranking is essential to the final performance. Besides K-reciprocal reranking, a Landmark-Country aware reranking algorithm is specially designed for our task.</p> <p>We observe that there are many images which contain the same landmark but cannot be easily retrieved by visual features due to great variations caused by views and illumination. Considering this, a Landmark-Country aware reranking is proposed that makes full use of the training set. Specifically, as each image in the training set has a landmark tag and a country tag, we first assign query and index images in the testing set a training tag, and then match the query against the index images by the assigned tags. </p> <p>For query image tagging, we give a list of potential landmark tags and country tags to every query image according to its top K similar images in the training set, and each landmark tag and country tag is scored by the accumulation of similarity over the top k. With the Landmark-Country aware reranking, images that have the same landmark tag or country tag as the potential tags of the query are advanced in the retrieval sorting. An illustration can be found below and more details and equations are listed in the paper.</p> <p><img src="https://raw.githubusercontent.com/WesleyZhang1991/Google_Landmark_Retrieval_2021_2nd_Place_Solution/master/figs/rerank_vis.jpg" alt="rerank"></p> <h2>Implementation details</h2> <p>Motivated by previous solutions, we first train the models on the ‘clean‘ subset with an input size of 384 (for ResNeSt269, the size is 448). The initial learning rate is 0.01 and we train for 10 epochs with the first epoch as warmup. Then we keep the same category number but use the larger ‘c2x‘ data. The initial learning rate is 0.001 and we train for 6 epochs. Then we expand the dataset to ‘trainfull‘ and train for another 2 epochs with an initial learning rate of 0.0001 and an input size of 512. At last, we use the ‘all‘ data from GLDv2 and train for another 2 epochs with an initial learning rate of 0.0001 and an input size of 512.</p> <h1>Conclusion</h1> <p>In this paper, we import techniques and bag-of-tricks from person re-identification. For the specific task of landmark retrieval, we propose a continent-aware sampling strategy and a Landmark-Country aware post-processing, which have proven to be very effective on the private leaderboard. The relationship between landmark retrieval and landmark recognition will be studied in the future.</p>
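<p>A minimal sketch of the continent-aware sampling described above, using the listed continent probabilities; the landmark-to-continent mapping is assumed to be available and the helper names are mine, not the team's code:</p>
<pre><code>import numpy as np

CONTINENT_PROB = {'Asia': 0.5, 'Europe': 0.2, 'Africa': 0.15, 'North America': 0.1,
                  'South America': 0.02, 'Antarctica': 0.01, 'Oceania': 0.01, 'OTHER': 0.01}

def sample_epoch(image_ids, continents, epoch_size, seed=0):
    """image_ids: array of image ids; continents: same-length array of continent tags."""
    rng = np.random.default_rng(seed)
    image_ids, continents = np.asarray(image_ids), np.asarray(continents)
    chosen = []
    for cont, prob in CONTINENT_PROB.items():
        pool = image_ids[continents == cont]
        if len(pool) == 0:
            continue
        n = int(round(prob * epoch_size))
        # sample with replacement only if the pool is smaller than the requested amount
        chosen.append(rng.choice(pool, size=n, replace=bool(max(0, n - len(pool)))))
    return rng.permutation(np.concatenate(chosen))
</code></pre>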
Google Landmark Retrieval 2021
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2021 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to my longtime teammate <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> and new teammate <a href="https://www.kaggle.com/hongweizhang" target="_blank">@hongweizhang</a> . It was again great teamwork. We got 3rd place in retrieval and 4th place in recognition competition. I'm posting the solutions in both forums for easy future reference, even though there is large overlap.</p> <h2>Approach</h2> <p>Qishen and I were part of the 3rd place team (together with <a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a> <a href="https://www.kaggle.com/alexanderliao" target="_blank">@alexanderliao</a> ) in recognition 2020. Last year we developed a strong model, i.e., sub-center ArcFace with dynamic margins. </p> <p>This year two competitions run simultaneously with tighter deadlines, so we largely re-used last year's pipeline, with newer architectures and better post-processing. We trained models the same way for both retrieval and recognition competitions, but selected different model combinations and different post-processing for final submissions.</p> <h2>Training recipe</h2> <p>We mostly follow our last year's solution (see <a href="https://www.kaggle.com/c/landmark-recognition-2020/discussion/187757" target="_blank">here</a> for details). The recipes are</p> <ul> <li>sub-center ArcFace with dynamic margins</li> <li>progressive training with increasing image sizes</li> <li>the indispensable <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">timm</a> library</li> <li>cosine learning schedule with Adam/AdamW optimizer</li> <li>multi-GPU training with DistributedDataParallel</li> </ul> <h2>Model choices</h2> <p>There are quite a few new SOTA image models published in the past year, notably EfficientNet v2, NFNet, ViT, Swin transformers. We trained all these new models using last year's recipe. For transformers, we used 384 image size. For CNN models, we used progressively larger image sizes (256 -&gt; 512 -&gt; 640/768). </p> <p>We found that the CNNs and transformers perform equally well on local GAP scores (recognition metric), but transformers have significantly higher local mAP (retrieval metric) even with smaller 384 image size. We think this is because transformers are patch-based and can capture more local features and their interactions, which are more important for retrieval tasks.</p> <p>Besides the new models, we also reused some of our last year's models by finetuning, mostly EfficientNets.</p> <h2>The ensemble</h2> <p>Our final ensemble consists of 7 models: 3 transformers, 2 new CNNs and 2 old CNNs. Their specifications and local scores are below. 
Old CNNs' CV scores are not shown because they are leaky (last year's fold splits were different).</p> <p>Adding more CNNs to the ensemble hurts retrieval score but helps recognition score -- our recognition's best ensemble has 4 more CNNs, because transformers are more important for retrieval, and adding more CNNs would reduce transformers' weights.</p> <table> <thead> <tr> <th>Model</th> <th>Image size</th> <th>Total epochs</th> <th>Finetune 2020</th> <th>cv GAP (recognition)</th> <th>cv mAP@100 (retrieval)</th> </tr> </thead> <tbody> <tr> <td>Swin base</td> <td>384</td> <td>60</td> <td></td> <td>0.7049</td> <td>0.4442</td> </tr> <tr> <td>Swin large</td> <td>384</td> <td>60</td> <td></td> <td>0.6775</td> <td>0.5161</td> </tr> <tr> <td>ViT large</td> <td>384</td> <td>50</td> <td></td> <td>0.6589</td> <td>0.5633</td> </tr> <tr> <td>ECA NFNet L2</td> <td>512</td> <td>30</td> <td></td> <td>0.7021</td> <td>0.3565</td> </tr> <tr> <td>EfficientNet v2l</td> <td>640</td> <td>40</td> <td></td> <td>0.7129</td> <td>0.4158</td> </tr> <tr> <td>EfficientNet B6</td> <td>512</td> <td>10</td> <td>✓</td> <td></td> <td></td> </tr> <tr> <td>EfficientNet B7</td> <td>672</td> <td>20</td> <td>✓</td> <td></td> <td></td> </tr> </tbody> </table> <h2>Post-processing a.k.a. reranking</h2> <p>The reranking is inspired by and improved upon 2019 Retrieval's <a href="https://arxiv.org/abs/1906.04087" target="_blank">winning solution</a>. Their method was to rerank all the "positive" index images before all the "negative" ones. The downside of this approach is that, the "positive" and "negative" are predicted by the model, which may be incorrect.</p> <p>We use a softer approach: when reranking, give a boost to the similarity score for "positives" and give a penalty to the similarity score for "negative". The boost and penalty depends on how confident we are that an index image is a positive or negative.</p>
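<p>For readers unfamiliar with the "dynamic margins" part of the recipe, here is a hedged sketch of one common parameterization (rare classes get larger ArcFace margins, frequent classes smaller ones); the exact formula, exponent and margin bounds below are assumptions, not necessarily the team's values:</p>
<pre><code>import numpy as np

def dynamic_margins(class_counts, m_min=0.05, m_max=0.45, power=0.25):
    """class_counts: images per class (sorted by class id). Returns one ArcFace margin per class."""
    counts = np.asarray(class_counts, dtype=np.float64)
    raw = counts ** (-power)                                 # larger for rare classes
    raw = (raw - raw.min()) / (raw.max() - raw.min() + 1e-12)
    return m_min + raw * (m_max - m_min)

# usage: margins = dynamic_margins(train_df["landmark_id"].value_counts().sort_index().values)
#        then apply margins[label] as the per-sample margin inside the (sub-center) ArcFace loss
</code></pre>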
Google Landmark Retrieval 2021
5th Place Solution(Team: underfitting)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2021 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, many thanks to the host for holding the Google Landmark Competition.<br> These competitions are very interesting and impressive for us.<br> Also thanks to my teammate (<a href="https://www.kaggle.com/ks2019" target="_blank">@ks2019</a>), who contributed a lot of work to our team.</p> <p>Our 5th place solution summary is below.</p> <h1>Brief Summary</h1> <ul> <li>Backbone + ArcFace, GLDv2 training (4.1M)</li> <li>Post-processing (WDA, bridged confidence, etc.)</li> </ul> <h1>Solution</h1> <h2>Models</h2> <p>The model architecture is backbone + ArcFace module + average pooling.<br> The backbone list is:</p> <ul> <li>EfficientNet v2s 800</li> <li>EfficientNet v2m 732</li> <li>EfficientNet v2m 640</li> <li>EfficientNet v2l 720</li> <li>EfficientNet v2xl 640 (trained at 512)</li> </ul> <p>Training data is GLDv2 (4.1M). In order to train at large image sizes with a high batch size, we used gradient accumulation and mixed-precision training (for v2s and above).</p> <p>We train with Adam for 15 epochs with cosine annealing and extract 512-dim embedding vectors from each model. Afterwards we create concatenated vectors (2560-dim).</p> <p>Our first result is produced with a GPU KNN (cuML) method.<br> KNN on GPU is much faster than on CPU; it performs great.</p> <p>Also, we use pre-computed embeddings: they are computed locally and all uploaded (they are used in the post-processing).</p> <h2>Post-Process</h2> <p>We apply some post-processing to our first result:<br> WDA, bridged confidence, power KNN, etc. Details are here:<br> <a href="https://www.kaggle.com/c/landmark-retrieval-2021/discussion/275955" target="_blank">https://www.kaggle.com/c/landmark-retrieval-2021/discussion/275955</a></p> <h2>Hardware</h2> <p>TPU v3 8-core (GCP)</p> <h2>Our code</h2> <p><a href="https://www.kaggle.com/ks2019/landmark-retrieval-5th-place-inference-notebook" target="_blank">https://www.kaggle.com/ks2019/landmark-retrieval-5th-place-inference-notebook</a></p>
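<p>A minimal sketch of the GPU k-NN retrieval step with cuML on the concatenated 2560-dim embeddings; cuML's NearestNeighbors mirrors the scikit-learn API, and the helper names here are assumptions, not the team's notebook code:</p>
<pre><code>import numpy as np
from cuml.neighbors import NearestNeighbors   # GPU k-NN, scikit-learn-like API

def l2n(x):
    return x / np.maximum(np.linalg.norm(x, axis=1, keepdims=True), 1e-12)

def retrieve_topk(query_emb, index_emb, k=100):
    # with L2-normalized vectors, euclidean ranking is equivalent to cosine ranking
    knn = NearestNeighbors(n_neighbors=k)
    knn.fit(l2n(index_emb).astype(np.float32))
    distances, indices = knn.kneighbors(l2n(query_emb).astype(np.float32))
    return distances, indices                  # shape (n_queries, k) each
</code></pre>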
Google Landmark Retrieval 2021
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2021 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, many thanks to Kaggle and the hosts for hosting such an interesting competition, and congratulations to all the winners. A special thanks to my teammates <a href="https://www.kaggle.com/takuok" target="_blank">@takuok</a> and <a href="https://www.kaggle.com/tascj0" target="_blank">@tascj0</a>.</p> <h1>Data definition</h1> <ul> <li>train_clean: there are 1.6 million training images and 81k classes.</li> <li>train3.2m: there are 3.2 million images belonging to the 81k classes in train_clean.</li> <li>train4.1m: all train data in GLDv2, that is 4.1 million images.</li> <li>train4.9m: all train data in GLDv2 plus the index images of the 2019 competition.</li> </ul> <h1>Tips in Retrieval</h1> <p>We used the reranking techniques proposed by team smlyaka, the 2019 GLR champions. <br> <strong>Github</strong>: <a href="https://github.com/lyakaap/Landmark2019-1st-and-3rd-Place-Solution" target="_blank">https://github.com/lyakaap/Landmark2019-1st-and-3rd-Place-Solution</a><br> <strong>Paper</strong>: <a href="https://arxiv.org/abs/2003.11211" target="_blank">https://arxiv.org/abs/2003.11211</a><br> There are two steps, a sort-step and an insert-step. First, a landmark_id and a score are determined for the index set and the query set using train4.9m. Then, in the sort-step, the retrieved index results are sorted based on their label similarity to the query images. In the insert-step, we added samples that are not retrieved by their image similarity but have the same landmark_id as the query image.<br> This improved the base score from public/private 0.31324/0.32614 to 0.39905/0.43219.</p> <h2>Model</h2> <p>The models we used are almost the same as the Recognition ones.<br> We concatenate the embedding vectors of the models shown below, then use them to calculate similarities.</p> <h3>takuoko part</h3> <p>I used almost the same architecture as last year's 1st place team: <code>ViT backbone -&gt; Dense(1024) -&gt; ArcFace</code> with class weights. I chose a ViT backbone because it works well on <a href="https://paperswithcode.com/task/fine-grained-image-classification" target="_blank">FGVC tasks</a>. I had thought about using FGVC tricks to get local and global features, but I didn't have enough time.<br> I also suffered from the phenomenon of the loss becoming NaN when training with fp16. 
I got around this with gradient clipping and by computing the attention layers in fp32.</p> <p>Base settings</p> <ul> <li>image size: 384</li> <li>optimizer: AdamW</li> <li>scheduler: cosine with warmup</li> <li>batch size: 16*2GPU</li> <li>use fp16</li> </ul> <p><strong>BeiT-large: 2epochs@(train_clean) -&gt; 8epochs@(train4.1m)</strong><br> <strong>Swin-base: 30epochs@(train_clean) -&gt; 6epochs@(train4.1m)</strong><br> Sadly, the 2nd part of my Swin didn’t work well.</p> <h3>tascj part</h3> <p>I used augmentations, a loss (sub-center ArcFace) and training strategies following last year’s top solutions.</p> <p><strong>b4: 10epochs@(train_clean, 512) -&gt; 6epochs@(train4.9m, 512)</strong><br> <strong>b6: 10epochs@(train_clean, 256) -&gt; 6epochs@(train4.9m, 512) -&gt; 2epochs@(train4.9m, 640)</strong><br> <strong>xcit_small_24_p16_384: 6epochs@(train_clean, 384) -&gt; 6epochs@(train4.9m, 384)</strong></p> <h3>inoichan part</h3> <p>The architecture is sub-center ArcFace with dynamic margins, which was used by last year’s 3rd-place team: Backbone -&gt; Dense(512) -&gt; sub-center ArcFace. The training strategy is as follows:</p> <h5>efficientnet v2m</h5> <p><strong>1st</strong><br> data: train_clean<br> image size: 256 crop from 300<br> epoch: 15<br> batch size: 64<br> scheduler: warmup</p> <p><strong>2nd</strong><br> data: train4.1m<br> image size: 512 crop from 600<br> epoch: 15<br> batch size: 8 x 16 steps w/ freeze batch norm<br> scheduler: cosine annealing</p> <p><strong>3rd</strong><br> data: train4.1m<br> image size: 640 crop from 720<br> epoch: 3<br> batch size: 4 x 64 steps w/ freeze batch norm<br> scheduler: cosine annealing</p> <h5>efficientnet b5</h5> <p><strong>1st</strong><br> data: train4.1m<br> image size: 256<br> epoch: 10<br> batch size: 32 x 8 steps w/ freeze batch norm<br> scheduler: cosine annealing</p> <p><strong>2nd</strong><br> data: train3.2m<br> image size: 512<br> epoch: 7<br> batch size: 16 x 16 steps w/ freeze batch norm<br> scheduler: cosine annealing</p> <p><strong>3rd</strong><br> data: train4.1m<br> image size: 512<br> epoch: 9 (freeze backbone in the first two epochs, then unfreeze)<br> batch size: 16 x 16 steps w/ freeze batch norm<br> scheduler: cosine annealing</p> <h1>Acknowledgements</h1> <p>takuoko is a member of the Z by HP &amp; NVIDIA Data Science Global Ambassadors.<br> Special thanks to Z by HP &amp; NVIDIA for sponsoring me with a Z8G4 workstation with dual RTX6000 GPUs and a ZBook with an RTX5000 GPU.<br> This competition has a big dataset, so I used PyTorch's DDP parallel training on my dual RTX6000 GPUs and it helped a lot.</p>
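<p>For readers new to the ArcFace heads mentioned in all three parts above, here is a minimal PyTorch sketch of a plain ArcFace margin layer (a single centre per class and a fixed margin). The sub-center and dynamic-margin variants the team actually used add extra class centres and per-class margins on top of this idea; the hyperparameters <code>s</code> and <code>m</code> below are illustrative assumptions, not the team's settings.</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F


class ArcMarginProduct(nn.Module):
    """Minimal ArcFace head: cosine classifier with an additive angular margin."""

    def __init__(self, in_features, out_features, s=30.0, m=0.30):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalised embeddings and class centres.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        target_logit = torch.cos(theta + self.m)   # margin added only for the true class
        one_hot = F.one_hot(labels, cosine.size(1)).float()
        logits = self.s * (one_hot * target_logit + (1.0 - one_hot) * cosine)
        return logits  # feed directly into nn.CrossEntropyLoss
</code></pre> <p>At inference time only the L2-normalised embedding produced before this layer is kept and used for retrieval.</p>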
Google Landmark Retrieval Challenge
14th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, Everyone!</p> <p>We’d like to share our solution. </p> <p>Our approach consists of 4 main steps: global CNN descriptor, nearest neighbor search, re-ranking and query expansion. </p> <ol> <li>As the global descriptor, we fine-tuned an ImageNet-pretrained <a href="https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py#L13">PyTorch ResNet50</a> model on the dataset from the <a href="https://www.kaggle.com/c/landmark-recognition-challenge">Landmark Recognition Challenge</a>, with a <a href="https://github.com/filipradenovic/cnnimageretrieval-pytorch">GeM pooling layer</a> from <a href="https://arxiv.org/abs/1711.02512">“Fine-tuning CNN Image Retrieval with No Human Annotation”</a> by Radenovic et al. and a <a href="https://github.com/DagnyT/hardnet/blob/master/code/Losses.py#L87">hard-in-batch triplet margin loss</a>.</li> <li>Fast nearest neighbor search is done with the <a href="https://github.com/facebookresearch/faiss">Faiss</a> library.</li> <li>We re-ranked the top-100 images by performing classic image matching: detect keypoints -&gt; extract local patches -&gt; describe -&gt; match -&gt; <a href="http://cmp.felk.cvut.cz/software/LO-RANSAC/Lebeda-2012-Fixing_LORANSAC-BMVC.pdf">RANSAC</a> geometric verification.</li> <li>The final stage is query expansion by diffusion (Iscen et al., <a href="https://arxiv.org/pdf/1611.05113.pdf">“Efficient Diffusion on Region Manifolds: Recovering Small Objects with Compact CNN Representations”</a>), implemented in Python here: <a href="https://github.com/ducha-aiki/manifold-diffusion">https://github.com/ducha-aiki/manifold-diffusion</a>. </li> </ol> <p>For more details and links, please take a look at <a>https://medium.com/@ducha.aiki/14th-place-solution-for-kaggle-google-landmark-retrieval-challenge-6d43e38eb513</a></p>
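<p>As a concrete illustration of step 2 above, a minimal Faiss exact-search sketch might look like the following; the descriptor dimensionality, the random stand-in data and the flat (brute-force) index type are assumptions for the example, not the team's exact settings.</p> <pre><code>import numpy as np
import faiss

d = 2048                                     # descriptor dimensionality (assumption)
rng = np.random.default_rng(0)
index_desc = rng.standard_normal((10_000, d)).astype(np.float32)  # stand-in database
query_desc = rng.standard_normal((5, d)).astype(np.float32)       # stand-in queries

index = faiss.IndexFlatL2(d)                 # exact (brute-force) L2 search
index.add(index_desc)                        # register all database descriptors
dists, ids = index.search(query_desc, 100)   # top-100 neighbours for each query
</code></pre>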
Google Landmark Retrieval Challenge
17th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry, I got 0.383/0.370 without query expansion.</p> <p>Hi Everyone,</p> <p>Now that final validation has ended, I'd like to share my solution.<br> My approach has three stages. First, I train an embedding model with triplet loss. Next, index images are registered for KNN (indexing). Finally, test images are used as query images. In addition, I use global verification and query expansion, the same as other teams!</p> <p>Code: <a href="https://github.com/toshi-k/kaggle-google-landmark-retrieval-challenge">https://github.com/toshi-k/kaggle-google-landmark-retrieval-challenge</a></p> <p><img src="https://raw.githubusercontent.com/toshi-k/kaggle-google-landmark-retrieval-challenge/master/img/solution.png" alt="conceptual diagram"></p> <p>These additions improved my basic model (triplet embedding + KNN) by about 0.11. My basic model achieved 0.318/0.319 (public/private), and my final score is 0.426/0.434.</p>
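<p>For readers unfamiliar with the triplet-loss stage described above, here is a minimal PyTorch sketch of one training step on (anchor, positive, negative) triplets. The backbone choice, margin and sampling are illustrative assumptions; the author's actual setup is in the linked repository.</p> <pre><code>import torch
import torch.nn as nn
import torchvision.models as models

# Embedding network: a ResNet backbone with the classifier head removed (assumption).
backbone = models.resnet50(weights=None)
backbone.fc = nn.Identity()

criterion = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)


def training_step(anchor, positive, negative):
    """One optimisation step on a batch of (anchor, positive, negative) image triplets."""
    emb_a = nn.functional.normalize(backbone(anchor), dim=1)
    emb_p = nn.functional.normalize(backbone(positive), dim=1)
    emb_n = nn.functional.normalize(backbone(negative), dim=1)
    loss = criterion(emb_a, emb_p, emb_n)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
</code></pre> <p>After training, the index images are embedded once and registered for KNN, and each test image is embedded and used as a query, as in the diagram above.</p>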
Google Landmark Retrieval Challenge
1st Place Solution Summary: CVSSP & Visual Atoms (0.627)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes and yes.</p> <p>Nope, I am not at university yet.</p> <p>For grandmaster, you need one of those gold medals to be a solo, so not quite yet ;) I'm working on it</p> <p>Thank you for the kind words though!</p> <p>Good question.</p> <p>Most of the work was done on two machines: one with 4 Pascal GPUs and another dual xeon machine (52 cores/768GB RAM, although we didn't need more than 200). At times we had access to more machines temporarily (mostly CPU).</p> <p>Hi everyone,</p> <p>Congratulations to everyone else who participated for making it such a fierce competition :)</p> <p>Here’s a detailed summary of our team’s solution to the competition for those of you who are curious. Our solution consisted of two main components: First, creating a high performance global descriptor that can represent the images in the dataset as singular vectors, and then building an efficient scheme for matching these vectors to find the most likely matches to submit to the leaderboard. <br> Below is a high level flowchart overview, with LB scores labelled at each applicable step, followed by a detailed description of each component of our solution.</p> <p><img src="https://i.imgur.com/GksVSIB.png" alt="Complete solution flow diagram"></p> <p>NB: Throughout this post we refer to scores as percentages for clarity, i.e. 62.5% = 0.625. All scores are on the public LB.</p> <hr> <h1>In-depth</h1> <h2>Global Descriptors</h2> <p>The main part of our solution involves several global descriptors - vectors which describe the entire contents of the image. We start off with two pre-trained CNN models (<a href="https://github.com/facebookresearch/ResNeXt">ResNet and ResNeXt</a>), and use four state-of-the-art aggregation methods to generate global descriptors from these models: Here are brief details of each method and its “raw” performance (i.e. without query expansion, database expansion, etc):</p> <ul> <li><strong>Region-Entropy based Multi-layer Abstraction Pooling (REMAP) [42.8% mAP]:</strong> Our latest design for a global descriptor, which aggregates a hierarchy of deep features from different CNN layers, trained to represent multiple and complementary levels of visual abstraction. We plan to present the details of our REMAP architecture at the forthcoming CVPR workshop.</li> <li><strong>Maximum Activations of Convolutions (<a href="https://arxiv.org/abs/1511.05879">MAC</a>) [32.9% mAP]:</strong> MAC descriptors encode the maximum local response of each of the final layer convolutional filters. 
In the MAC architecture, the last convolutional layer of ResNeXt is followed by a max-pooling layer, L2-normalization layer and PCA+Whitening layer.</li> <li><strong>Sum-pooling of Convolutions (<a href="https://arxiv.org/abs/1510.07493">SPoC</a>) [31.7% mAP]:</strong> In the SPoC pipeline, the last convolutional layer of ResNeXt is followed by Sum pooling layer, L2-normalization layer and a PCA+Whitening layer.</li> <li><strong>Regional Maximum Activations of Convolutions (<a href="https://arxiv.org/abs/1610.07940">RMAC</a>) [34.7% mAP]:</strong> In RMAC, the last convolutional features of ResNeXt are max-pooled across several multi-scale overlapping regions. The region based descriptors are L2-normalized, PCA+Whitened and L2-normalized again. Finally, the descriptors are sum-aggregated into a single descriptor.</li> </ul> <p>The underlying CNN networks (ResNet and ResNeXt) are pretrained on ImageNet, and fine-tuned on a subset of the <a href="http://sites.skoltech.ru/compvision/projects/neuralcodes/">Landmarks dataset used in the work of Babenko et al.</a>, that contains approximately 120k images of 650 famous landmarks sites.</p> <p>The images in this dataset were originally collected through textual queries in an image search engine without thorough verification, and therefore they contain a significant number of unrelated images which we filter out and remove. This procedure is semi-automatic, and relies on dense SIFT features detected with a Hessian-affine detector and aggregated with the <a href="http://epubs.surrey.ac.uk/812468/">RVD-W descriptor</a>. This cleaning process leaves about 25,000 images still belonging to one of the 650 landmarks, which is what we then used to fine-tune our models for this competition.</p> <p>We didn’t use the data from the landmark classification sister competition for training, as we wanted to see how well our solution would generalise to new dataset (as opposed to creating a solution which was specifically tailored to this dataset).</p> <h2>Combined Descriptor</h2> <p>Our final global descriptor was built by concatenating six fine-tuned global descriptors trained as explained above (LB score shown in brackets):</p> <ul> <li>ResNeXt+REMAP (<strong>42.8%</strong>)</li> <li>ResNeXt+RMAC (34.7%)</li> <li>ResNeXt+MAC (32.9%)</li> <li>ResNeXt+SPoC (31.7%)</li> <li>ResNet+REMAP (35.8%)</li> <li>ResNet+MAC (30.4%)</li> </ul> <p>We assign each descriptor a weight by scaling them to a fixed L2 norm, and concatenate as follows:</p> <pre><code>XG = [2× ResNeXt+REMAP; 1.5× ResNeXt+RMAC; 1.5× ResNeXt+MAC; 1.5× ResNeXt+SPoC; ResNet+MAC; ResNet+REMAP] </code></pre> <p>The weights are selected ad-hoc to reflect the relative performance of each method. After this, we perform PCA to reduce the dimensionality of the descriptor to 4K (not just to save computation, but also to discard noisy dimensions) and apply whitening so that all dimensions have equal variance. While PCA and whitening only gave a small improvement on its own, it improved the result of our query expansion step by several %.</p> <h2>Nearest Neighbour Search</h2> <p>After descriptor creation, each image is represented by a 4096-dimensional descriptor. Next, we use an exhaustive k-nearest neighbour search to find the top 2500 neighbours and L2 distances for each image – nothing fancy here. 
Submitting the top 100 neighbours for each test image at this stage nets a score of <strong>47.2%</strong>.</p> <p>This step is implemented using optimised NumPy code and takes 2 hours to find the top 2.5K neighbours for each of the 1.2M images (we needed the nearest neighbours for the index set too for the next steps).</p> <h2>Database augmentation</h2> <p>The next step is to perform <a href="https://arxiv.org/abs/1610.07940">database-side augmention</a> (DBA), which replaces every image descriptor in the database (including queries) with a weighted combination of itself and its top 10 neighbours. The objective is to improve the quality of the image representations by leveraging the features of their neighbors. More precisely, we perform weighted sum-aggregation of the descriptors, with weights computed using:</p> <pre><code>weights = logspace(0, -1.5, 10) </code></pre> <p>Interestingly, while on other datasets we find that using more than a couple neighbours for augmenting the <em>query</em> is detrimental to the score, we found 10 neighbours to be the optimum for both query and database images for this dataset.</p> <p>It should be noted that DBA was the last thing we added to the pipeline, and while it gave a huge improvement to our score on its own, when combined with the query expansion step, the improvement is only a modest 1-2%. We believe this is because the DB expansion acts very similar to the first level of our query expansion method.</p> <h2>Query Expansion</h2> <p>Query expansion is a fundamental technique in image retrieval problems which often nets a significant improvement in performance. It works off the principle that if images A &lt;-&gt; B, and B &lt;-&gt; C, then A &lt;-&gt; C (even if descriptors A and C are not explicitly matched). A simple example where the benefit of this is illustrated is in cases where there are three partially overlapping images:</p> <p><img src="https://i.imgur.com/wMCa17s.jpg" alt="Illustration showing query expansion"></p> <p>In this case, a query expansion scheme can help us match images A and C as being of the same scene, even though the descriptors themselves (especially global descriptors) are very unlikely to match. This also benefits us in other cases where, for example, we have several images with gradually different perspectives or lighting conditions, and intermediate images can help us to connect them.</p> <p>In this competition, we designed a novel, fast technique for query expansion which can be run recursively to capture long-distance connections between images with many intermediate steps, which lends itself well to this problem as some landmarks have very many images - only one image has to be matched to a query and all the images of the same landmark are then also appended to the top 100 list.</p> <p>We’ve opted not to discuss the exact method here and instead explain it in-depth in a paper in the coming weeks. But the basic principle is the same as other well-established query expansion techniques. Our query expansion gave an approx <strong>11%</strong> improvement when run for one iteration, and <strong>14%</strong> improvement when run recursively (with 30min runtime) - this is reduced when combined with DBA.</p> <h2>Simplified Model</h2> <p>In a production ML environment, it’s rare to use a monster ensemble of various models when the laws of diminishing returns apply - Usually most of the performance can be achieved with a small subset of the complexity. 
We expect a slimmed-down version of our solution with just ResNeXt-REMAP and query expansion to score 56-57% with &lt;12 hours total runtime from downloading the data to submission (on 4 GPUs) - largely aided by the fact that it did not have to be trained for this competition.</p> <p><img src="https://i.imgur.com/zWcCl2a.png" alt="Simplified model flow diagram"></p> <h2>Things that didn’t work</h2> <ul> <li><strong>Local descriptors:</strong> This was perhaps the most surprising thing for us in the competition. We tried several schemes based on various local descriptors, with or without geometric verification, for example using it for reranking our results (decrease in performance), or using it to scan through the top few thousand global neighbours to find very confident local matches that the CNN-based globals missed (helped, but the improvement was sub-0.1%). We’re very curious if/how other teams were able to harness the power of local descriptors in this competition. Perhaps CNN-based global descriptors are so good now that the era of locals is over?</li> <li><strong>Coping with rotated images:</strong> A glance through the dataset showed us that a significant number of images were rotated. We tried accounting for this in multiple ways, for example by also comparing rotated and non-rotated descriptors during kNN and taking the closest match for each pair of images, however we could not increase our score. This is probably because as this is a dataset with very many distractors, the increase in false positives outweighs the gain in true positives.</li> <li><strong>Ensembles:</strong> We tried various methods to combine very different submissions with methods such as rank averaging and interleaving predictions, however the gains we saw from this were very small. It seemed to work better to combine models earlier on within the solution than at the end.</li> </ul> <h2>Acknowledgments</h2> <p>Components of our solution have been supported by work done under the industry project "iTravel - A Virtual Journey Assistant", InnovateUK grant #102811 (funder: UK Research and Innovation). <br> As this is an industry-led project, we are unfortunately unable to release the code for our solution under the Apache 2.0 License (which includes royalty-free commercial use).</p> <hr> <p>Finally, I would like to say thank you to Kaggle and the organizers for organising this very interesting competition - it is unusual and refreshing to see both retrieval and semi-supervised learning competitions on Kaggle. I would also like to thank my excellent teammates Sameed and Mirek for their hard work and fun moments and everyone at both <a href="https://www.surrey.ac.uk/centre-vision-speech-signal-processing">CVSSP</a> and <a href="https://www.surrey.ac.uk/centre-vision-speech-signal-processing">Visual Atoms</a> for tracking our progress and supporting throughout. The whole competition has been a very enjoyable experience for us.</p> <p>We will do our best to answer any questions you might have about our solution, and we look forward to seeing the solutions of the other teams :)</p> <p>- Mikel (with the team) <br> CVSSP &amp; Visual Atoms</p> <p>Simple but effective!</p> <p>This is an interesting point you bring up, and not something I thought of. However, I'm not sure how much finding these negative relationships would benefit here.</p> <p>The competition is interesting in the sense that we have to provide top 100 matches in the training set for each query. 
In most queries, global descriptors may only find one or two positive matches (this is a case of high false negative rate and low false positive rate).</p> <p>Looking at the distribution of distances in this dataset, it seems that the queries are very far removed from the database images (images are very tough to match), while the database images are easily matched to each other. In this case, the job of the global descriptor is to find one or two matches, allowing us to "embed" the query somewhere in the database space, and then query expansion finds the 100 closest images in the database space that have the highest probability of matching the detected matches. </p> <p>However, this is not a binary matching/non-matching problem; the trick is that each database image has to be assigned a probability of matching the query. It can be viewed as a graph problem with 1M nodes and 2.5B edges (the top 2500 neighbours are known for each node) where you have to determine the probability of two nodes actually being connected (which is difficult given you have L2 distances and not probabilities - and trying to map them just doesn't work, we tried).</p> <p>It boils down to the fact that we don't have an issue with over-predicting, but with under-predicting (in my view). So while I think your approach could definitely help add performance, it might not be huge, and it would need to be carefully tuned not to increase the false negative rate too much. An interesting experiment either way.</p> <p>The model can be thought of in some ways as unsupervised - we have no training data and no validation data. We used the leaderboard and also looked at various statistics of the behaviour of kNN, QE etc. to try and judge which changes to the model were good or not.</p> <p>We could have created a dataset out of the data from the classification competition and used that for training/validation, but we decided not to and to instead try and create a generic solution. </p> <p>I think that our stability on the private LB may be largely due to the fact that we tried to create a robust model by not tuning it specifically for this data, so changes in the data distribution had less of an effect on us than it perhaps did on others.</p> <blockquote> <p>However when I tried doing an initial (naive n^2) similarity comparison I had a computation time estimate ranging from 4 months on a parallelized 8 core CPU to about 1 month on 2 1080ti GPU's. Even when using a smarter KD tree based nearest neighbor search my time to run a full submission is about 5 days. Not to mention the time taken for one off things like initial feature generation.</p> </blockquote> <p>This is what I also had initially. I spent some time optimising this n^2 similarity comparison and it now takes about 1 hour for the whole test set on 8 cores, or 12 hours for the whole index+test set (1.2M^2 comparisons). This is entirely doable without a kd-tree or similar, although our approach only works with L2 distance. I'll post the code we used for this when I can.</p>
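<p>Since the optimised code was not posted in this thread, here is a hedged sketch of the kind of chunked brute-force L2 search described above: it uses ||q - d||^2 = ||q||^2 - 2 q.d + ||d||^2 so that each block of queries costs essentially one matrix multiplication. The function and variable names are mine, not the team's.</p> <pre><code>import numpy as np


def knn_l2_chunked(queries, database, k=100, chunk=1024):
    """Brute-force top-k Euclidean neighbours, computed one block of queries at a time."""
    db_sq = (database ** 2).sum(axis=1)                    # ||d||^2 for every database row
    all_idx = []
    for start in range(0, len(queries), chunk):
        q = queries[start:start + chunk]
        # Squared L2 distances of this query block to the whole database.
        d2 = (q ** 2).sum(axis=1, keepdims=True) - 2.0 * q @ database.T + db_sq
        idx = np.argpartition(d2, k, axis=1)[:, :k]        # k smallest, not yet sorted
        order = np.argsort(np.take_along_axis(d2, idx, axis=1), axis=1)
        all_idx.append(np.take_along_axis(idx, order, axis=1))
    return np.concatenate(all_idx)
</code></pre> <p>Run over several processes (one block range per core), this kind of loop is consistent with the timings quoted above, but it is only a sketch of the idea, not the team's actual implementation.</p>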
Google Landmark Retrieval Challenge
6th Place Solution: VRG Prague
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hm, not sure about this model, because I don't see in our submissions that we tried single-scale with this model. Some of the experiments on earlier models show that multi-scale gives around +0.015.</p> <ol> <li><p>L2 after GeM was introduced when we didn't have FC layer in the pipeline, ie it was the final layer and it was important to be there. Then, we added (concatenated) FC+L2, and true, we could have removed L2 before FC. However, if I remember correctly, L2 before FC did help a bit during training, performance wise, but I am not 100% sure about this.</p></li> <li><p>In my experience, training with a size close to the one used at testing was important for the best performance. I didn't do heavy testing of different pooling procedures at different image sizes.</p></li> </ol> <p>Hope this helps!</p> <h1>Approach by Visual Recognition Group (VRG) Prague</h1> <p>Thanks to the organizers for the interesting challenge and kudos for the top ranked teams!</p> <h3>Super quick summary</h3> <p>Our method is based on CNN-based global descriptors trained on the training set of the recognition challenge, while retrieval is performed by diffusion-based similarity search. </p> <p>** Performance is presented in the format public / private score.</p> <p><img src="http://cmp.felk.cvut.cz/cnnimageretrieval/img/google_landmark_retrieval_pipeline.png" alt="pipeline"></p> <h3>GeM descriptor training</h3> <p>Our network consists of the convolutional layers of ResNet101 pre-trained on ImageNet, followed by generalized-mean pooling (<a href="https://arxiv.org/abs/1711.02512">GeM</a>), l2 normalization, a fully-connected (FC) layer, and a final l2 normalization. We train a two-branch network with contrastive loss on pairs selected from the <a href="https://www.kaggle.com/c/landmark-recognition-challenge">Google Landmark Recognition Challenge</a> training set with image resized to have the largest dimensions at most equal to 1024. We randomly sample at most 50 positive pairs per landmark, while the hard negatives are re-mined for each epoch of the training. The FC layer is initialized by the parameters of <a href="https://arxiv.org/abs/1604.02426">supervised whitening</a> learned on such pairs. </p> <p>Training was done using our publicly available <a href="https://github.com/filipradenovic/cnnimageretrieval-pytorch">CNN Image Retrieval in PyTorch</a> toolbox. </p> <p>We additionally learn descriptor whitening without supervision on the index set. In particular, we use PCA whitening with shrinkage (work submitted to IJCV, extension of our <a href="https://arxiv.org/pdf/1707.07825.pdf">BMVC 2017</a>).</p> <h3>Descriptor extraction</h3> <p>During testing, we perform descriptor extraction at 3 scales (scaling factors of <code>1, 1/sqrt(2), sqrt(2)</code>), sum-aggregate the descriptors, and l2 normalize. The PCA whitening with shrinkage is learned/applied on the multi-scale descriptors. The dimensionality of the final descriptor is 2048.</p> <h3>Search</h3> <p>We use <a href="https://arxiv.org/pdf/1611.05113.pdf">diffusion</a>, a graph-based query expansion technique, to perform retrieval. 
We initially construct an affinity matrix with reciprocal k-nearest neighbors (k=50), and then initiate diffusion by the 5 Euclidean nearest neighbors of the query descriptor. </p> <p>We use our own <a href="https://github.com/ahmetius/diffusion-retrieval">MATLAB implementation</a>, while a third-party <a href="https://github.com/ducha-aiki/manifold-diffusion">Python implementation</a> is also available.</p> <p>** Concatenating our descriptor with <a href="https://arxiv.org/pdf/1610.07940.pdf">R-MAC trained</a> on an external dataset provides additional improvements. This composes our final submission of 59.1 / 57.6 on public / private score.</p> <h3>Timing</h3> <p><strong>Descriptor extraction:</strong> 170 ms per image (GPU).</p> <p><strong>Query time:</strong> 12 ms for Euclidean search (CPU) + 150 ms for diffusion per query (GPU).</p> <p>** Machine used for the search: Intel Xeon CPU E5-2630 v2 @ 2.60GHz with 256 GB of memory (12 threads), with one GPU Tesla P100 with 16 GB of memory.</p> <h3>Team members</h3> <p><a href="http://cmp.felk.cvut.cz/~radenfil/">Filip Radenovic</a>, <a href="http://cmp.felk.cvut.cz/~iscenahm/">Ahmet Iscen</a>, <a href="http://cmp.felk.cvut.cz/~toliageo/">Giorgos Tolias</a>, <a href="http://cmp.felk.cvut.cz/~chum/">Ondrej Chum</a></p> <p>In our GitHub implementation p is learned, it is initialized with p=3 but it is learned together with other parameters. The only difference is that for p we use weight decay equal to 0 (this is done in example train script). </p>
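<p>For reference, here is a minimal PyTorch sketch of the GeM pooling layer with a learnable exponent initialised to p=3; the linked cnnimageretrieval-pytorch toolbox contains the authors' full implementation, and this only shows the core idea.</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F


class GeM(nn.Module):
    """Generalized-mean pooling with a learnable exponent p."""

    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.ones(1) * p)   # learned jointly with the backbone
        self.eps = eps

    def forward(self, x):                          # x: (B, C, H, W) conv feature map
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.avg_pool2d(x, (x.size(-2), x.size(-1)))
        return x.pow(1.0 / self.p).flatten(1)      # (B, C) global descriptor
</code></pre> <p>Setting the weight decay of <code>p</code> to zero, as mentioned above, can be done by placing it in its own optimizer parameter group.</p>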
Google QUEST Q&A Labeling
11th Solution: A threshold oriented Competition is not good
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>0.385, yes, the score drops from 0.386 to 0.385. If I had given up 25 days earlier, I could have got a better rank. :p</p> <p>Thanks. Maybe I should update my profile to make me more real😏 </p> <p>Your team didn't use thresholds to post-process the result? That's pretty amazing! Waiting to see the solution.</p> <p>Based on the 5-fold CV predicted values, I then used <code>for i in range(100)</code> to find the percentage that maximizes the rho score.</p> <p>Score updated.</p> <p>I first tried to map every value to discrete levels like 0.3, 0.5, 0.7. But it didn't work. Then I saw <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/123490">Khoi Nguyen complained about the column <code>question_type_spelling</code></a> (Thanks, Khoi Nguyen.), and I started to zero out the column <code>question_type_spelling</code>. It worked! So what about other columns? I did a lot of experiments to probe the thresholds.</p> <p>The model that I created 23 days earlier is doing better than my final submitted one. After that, I tried a lot, but what really worked was tuning the threshold, i.e. what percentage of the smallest values I should zero out. </p> <p>I 'wasted' a lot of my precious time tuning the thresholds, but it will help me not at all when I join the next competition.</p> <pre><code>def zero_out(df, column, percent):
    """Zero out all the values that are close to 0."""
    index = df[column].nsmallest(int(percent * len(df))).index
    df.loc[index, column] = 0
    print(column, " remain ", len(df) - len(index))

zero_tupe = [
    ("question_type_spelling", 0.99),
    ("question_not_really_a_question", 0.94),
    ("question_type_consequence", 0.92),
    ("question_type_definition", 0.91),
    ("question_type_entity", 0.88),
    ("question_type_compare", 0.96),
    ("question_type_choice", 0.44),
    ("question_conversational", 0.78),
    ("question_multi_intent", 0.55),
]

for column, percent in zero_tupe:
    zero_out(df_sub, column, percent)
</code></pre> <p>The score comparison is as follows (PB, LB):</p> <ul> <li>Without zero-out: 0.386 / 0.409</li> <li>With zero-out (23 days ago): 0.424 / 0.450</li> <li>After 23 days of 'hard work': 0.424 / 0.465</li> </ul> <p>What I did later, after tuning the thresholds, actually harmed the PB score. </p>
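<p>A hedged sketch of the <code>for i in range(100)</code> search mentioned above, i.e. choosing the zero-out fraction that maximises Spearman's rho on the out-of-fold predictions; the function and variable names are illustrative, not the author's.</p> <pre><code>import numpy as np
from scipy.stats import spearmanr


def best_zero_out_fraction(oof_pred, target):
    """Pick the fraction of smallest OOF predictions to zero out that maximises rho."""
    best_frac = 0.0
    best_rho = spearmanr(target, oof_pred).correlation
    for i in range(1, 100):
        frac = i / 100.0
        p = oof_pred.copy()
        cutoff = np.quantile(p, frac)
        p[p <= cutoff] = 0.0                     # zero out the smallest `frac` of values
        rho = spearmanr(target, p).correlation
        if rho > best_rho:
            best_frac, best_rho = frac, rho
    return best_frac, best_rho
</code></pre>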
Google QUEST Q&A Labeling
12th place post-processing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Scaling to the [0, 1] range, then rounding to match the training target distribution (it's discrete).</p> <p><a href="/adityaecdrid">@adityaecdrid</a> Kind of, but I don't know if it can be called an LCM since the LCM is defined for integers. My mathematical English is quite lacking anyway.</p> <p>Find the smallest integer such that, when it is multiplied with all the unique values in a column, you get integers back.</p> <p>I used both this and the regular threshold trick; both boosted the public score (I guess that's why I was in the 0.48 zone). Since I got a shakedown, I'm trying to investigate whether this or the thresholds was the culprit; however, my late submissions crashed for no reason. I'm guessing that the thresholds were the ones that lowered my score, since this trick requires no tuning, as you said.</p> <pre><code>import numpy as np

raters = np.array([18, 18, 6, 6, 6, 6, 18, 18, 6, 6, 6, 6, 6, 6, 6,
                   6, 6, 6, 6, 3, 18, 18, 18, 18, 18, 90, 6, 6, 6, 18])

mins = np.min(all_preds, axis=0)
maxs = np.max(all_preds, axis=0)
all_preds = (all_preds - mins) / (maxs - mins)

all_preds = np.round(raters * all_preds).astype(float) / raters
</code></pre> <p>If you have time please try this on your submission, I'm really interested to know how much it can boost your own score.</p> <p>So it was nowhere near optimal!</p> <p>I checked your team's solution and our modeling part is very similar, with double-headed transformers. The thing that demotivated me from posting more of my solution, though, is that I don't think my models really do better than anyone else's in the top 30 or even 50; it all came down to who had a more robust post-processing method. That, and I desperately need to take a nap now.</p>
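<p>The "smallest integer that maps every unique value back to an integer" idea described above can be sketched per column like this; the function name, search limit and tolerance are my own assumptions.</p> <pre><code>import numpy as np


def smallest_denominator(values, max_mult=1000, atol=1e-6):
    """Smallest integer m such that m * v is (nearly) an integer for every unique
    target value v in a column, e.g. {3/9, 4/9, ..., 1} -> 9."""
    uniques = np.unique(values)
    for m in range(1, max_mult + 1):
        scaled = m * uniques
        if np.allclose(scaled, np.round(scaled), atol=atol):
            return m
    return None
</code></pre> <p>Running this over each training column is one way to arrive at a per-column multiplier array like the <code>raters</code> array above.</p>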
Google QUEST Q&A Labeling
15th solution with codes
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! Learnt a lot from your team's solution; the pseudo labeling is very impressive!!!</p> <p>Thanks!!</p> <p><strong>Congrats to all winners, and thanks to my teammates</strong>! It's been a long journey to finish this competition and get a silver medal; it's the first time I have worked on NLP projects.</p> <h2>Our solution can be summarized as:</h2> <ol> <li>Post-processing: our idea is to group values into the discrete values existing in the training set. We chose the columns to apply post-processing to by comparing OOF scores. The post-processing can be found in <a href="https://www.kaggle.com/jionie/models-with-optimization-v5/">my inference kernel</a></li> <li>Ensemble: our final submission consists of 8 base models, as you can see in the inference kernel. The best model is a two-part RoBERTa-base model, which got 0.412 OOF without post-processing and 0.439 LB with post-processing.</li> <li>Modified pooler layer and multiple hidden_states: we used multiple hidden_states as the input of the pooler layer, inspired by <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/123770">this great post</a>; we followed the source code in transformers for the pooler class. This can also be found in my inference kernel. We chose hidden_state_layers based on validation results.</li> <li>Differential learning rate: we used a differential learning rate inspired by <a href="https://www.kaggle.com/melissarajaram/roberta-fastai-huggingface-transformers">this great kernel</a>. With a differential learning rate, our models got higher OOF scores and converged faster.</li> <li>Two-part model: inspired by discussions, we trained models in two styles. One uses title + question_body + answer as input for both question labels and answer labels; the other uses title + question_body for question labels and title + answer for answer labels. The second style gives higher OOF and LB scores; I think it's because there is more information for each problem. (Using max_len 768 for XLNet getting a higher OOF follows the same idea.)</li> <li><p>Data augmentation: we used nlpaug for some basic data augmentation, and we also used Google Translate to back-translate some content from other languages. I decided to do this at a very early stage, so I'm sorry I can't tell how much it boosted the score. It won't be much, I think.</p> <h2>Things that didn't work for us:</h2></li> <li><p>T5</p></li> <li>Extra tokens</li> <li>Large models: large models gave similar OOF scores compared with base models and are hard to run inference with, always slow and running out of resources. <br> <h2>Our best single model:</h2></li> </ol> <p><a href="https://www.kaggle.com/leonshangguan/roberta-single-models-with-optimization">https://www.kaggle.com/leonshangguan/roberta-single-models-with-optimization</a></p> <h2>Our codes:</h2> <p><a href="https://github.com/jionie/Google-Quest-Answer">https://github.com/jionie/Google-Quest-Answer</a></p> <p>I used models after my "swa" stage, lol. Thanks for your swa link!
It looks promising; I found that many people in CV competitions used SWA, so I tried to apply it here.</p> <p>Thanks!!</p> <p>I didn't know how to use the official SWA code, so I wrote a simple version: for every epoch, weights = ((epoch - 1) * (averaged weights from the last epoch) + weights in this epoch) / epoch. Since I don't know whether it's correct or not, I didn't write it in the solution. I used a very small lr for that stage; it boosted the OOF a little, by about 0.0005 to 0.002 in each fold.</p>
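<p>A minimal PyTorch sketch of the running weight average described above (my own helper, not the author's code; note that recent PyTorch versions also ship <code>torch.optim.swa_utils.AveragedModel</code> for the same purpose):</p> <pre><code>import torch


@torch.no_grad()
def update_average(avg_model, model, n_averaged):
    """Running average of weights: w_avg &lt;- (n * w_avg + w_new) / (n + 1)."""
    for p_avg, p_new in zip(avg_model.parameters(), model.parameters()):
        if n_averaged == 0:
            p_avg.copy_(p_new)                     # first snapshot: just copy
        else:
            p_avg.mul_(n_averaged).add_(p_new).div_(n_averaged + 1)
</code></pre> <p>Called once per epoch with <code>n_averaged</code> equal to the number of snapshots already averaged, this reproduces the ((epoch - 1) * previous + current) / epoch formula above.</p>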
Google QUEST Q&A Labeling
162nd solution using only BERT
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you, and congratulations to you too.</p> <p>Please try it now, but we never used this approach. </p> <p>Congratulations to the winners. This was our first NLP competition; we tried the following things:</p> <ol> <li>At first we built a simple model using <a href="https://www.kaggle.com/mobassir/jigsaw-google-q-a-eda">https://www.kaggle.com/mobassir/jigsaw-google-q-a-eda</a></li> <li>We tried to tune the hyperparameters, but that didn't have much effect.</li> <li>Since we were concatenating Title, Body and Answer using 512 chars, which can't capture the full context, we tried to build models using GloVe and fastText word embeddings, but they didn't give much accuracy.</li> <li>Then we tried to create separate embeddings for Title + Body and for Answer, with additional metadata, which worked.</li> <li>We finally combined the Title + Body and Answer model with additional metadata and a BERT model concatenating title, body and answer, with 10 folds.</li> </ol> <h3><strong>An ensemble of all the outputs gave us 0.4003 on the LB.</strong></h3> <p><strong>What didn't work for us:</strong></p> <ol> <li>We tried to trim Title, Body and Answer. Many have done that to capture the context; I feel we did something wrong here.</li> <li>We tried text summarisation of Body and Answer, but the rho remained the same and we didn't explore this idea much further.</li> <li>We tried to replace the last layer with a regression head and the loss function with MSE; we could have explored this idea a bit more, as many top scorers did some kind of post-processing.</li> </ol> <p><strong>Things we should have done:</strong></p> <ol> <li>Pre-processing of the data, like adding the category column combined with the title (don't know how I missed this one)</li> <li>Post-processing of the predictions to adjust them more towards "human-level scores"</li> <li>Building more models like XLNet and RoBERTa</li> <li>We tried to use LAMB instead of Adam, but due to an error we couldn't progress. (TF 2.x :) )</li> </ol> <p>We missed a medal by 5 ranks; hopefully we can do better next time. And not to forget, thank you @vinaydoshi for continuously encouraging the work on this problem.</p> <p><strong>If anyone is interested, here are our notebooks.</strong></p> <p>10-fold BERT concatenating title, body and answer: <a href="https://www.kaggle.com/buntyshah/google-quest-q-a-labeling-8-fold">https://www.kaggle.com/buntyshah/google-quest-q-a-labeling-8-fold</a> </p> <p>Hugging Face BERT with Title + Body and Answer with additional metadata: <a href="https://www.kaggle.com/vinaydoshi/tfbert-ensemble-preprocess-v1?scriptVersionId=28236333">https://www.kaggle.com/vinaydoshi/tfbert-ensemble-preprocess-v1?scriptVersionId=28236333</a></p> <p>BERT as regression: <a href="https://www.kaggle.com/buntyshah/google-quest-q-a-labeling-bert-regression?scriptVersionId=26800228">https://www.kaggle.com/buntyshah/google-quest-q-a-labeling-bert-regression?scriptVersionId=26800228</a></p>
Google QUEST Q&A Labeling
16th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="/tanlikesmath">@tanlikesmath</a> One of the things I learned from all of my competitions was that getting high places on the LB is never only about the model; you always need to really understand the metric and the data and adapt to them. Sometimes the adaptation is easier, and sometimes it's more tricky, like in this competition. Sometimes somebody shares the adaptation early in the competition, and sometimes you need to look for it. In this competition the biggest trick, how to handle the Spearman peculiarities, was solved by a lot of teams, and it was obvious that you needed to look for it (<a href="https://www.kaggle.com/c/google-quest-challenge/discussion/118724">The post about Spearman</a> is one of the most read, upvoted and discussed). Building a good data science solution is always a combination of a good and solid model and the way the training, inputs, and outputs are handled.</p> <p>And, yes, <a href="/abhishek">@abhishek</a> and I don't see eye to eye on that matter, but he will have to agree that the 15 teams that beat us in this competition did it because they had better models, not better tricks. </p> <p>First I would like to thank my teammates <a href="/abhishek">@abhishek</a>, <a href="/atsunorifujita">@atsunorifujita</a>, <a href="/pvduy23">@pvduy23</a> for very good teamwork.</p> <h2>Solution Highlights</h2> <h2>Preprocessing</h2> <p>We didn't really do any preprocessing of the text. We tried some cleaning, aimed especially at making the parts with code more readable (shorter), but it didn't make any difference.</p> <h3>Loss function</h3> <p>I'm surprised I didn't see more reference to this in the winning posts, but here we did a little trick. Instead of using the target columns as-is, we first ranked and normalized them to [0, 1] and then used them as targets, something like:</p> <pre><code>target = scipy.stats.rankdata(target, 'average')
target = (target - target.min()) / (target.max() - target.min())
</code></pre> <p>(I saw a CV improvement of about 0.05 - 0.01.) As the loss itself I used MSE, and some of my teammates used MSE+BCE.</p> <h2>Models</h2> <p>The models we used were RoBERTa-Large, XLNet, and BERT-base-uncased. We also tried some other models, like BERT-large, RoBERTa-base, XLMRoBERTa and others.
(Not choosing XLMRoBERTa was a mistake; this will be discussed later.) We also tried to pre-train and fine-tune bert-base on a StackExchange corpus, but it didn't seem to improve CV.</p> <p>To handle long text, I used the full <code>question-title</code> + equal parts from <code>question-body</code> and <code>answer</code>, truncated by taking the beginning and the end.</p> <p>We used 5 folds for every model.</p> <h3>Augmentation</h3> <p>I tried several augmentations:</p> <ul> <li>Replacing random tokens with [PAD] or with another token: 20% [PAD] seems to give nice results</li> <li>Translating to German and back: didn't work (Russian also didn't work)</li> <li>Truncating random parts when the text is too long: didn't make any difference</li> </ul> <h2>Post-Processing</h2> <p>Post-processing was very important in this competition due to the special characteristics of the Spearman correlation.</p> <h3>Spearman</h3> <p>We ended up clipping some of the output columns. The columns we clipped were columns where most of the target values were 0 or where most of them were 1. The clipping threshold value was chosen to preserve the same rate of non-zero values in test as there is in train. This method improved CV and LB by ~0.03-0.04.</p> <h3>Question-Type-Spelling</h3> <p>As the number of unique non-zero examples for this class is very low, we understood that no ML model could really do reliable work here. So we tried some heuristics; we ended up setting all questions where the host was 'english.stackexchange.com' or 'ell.stackexchange.com' to 1. We also tried a more ambitious trick, selecting different keywords, but this didn't work on the private LB.</p> <h2>Stacking</h2> <p>We tried different types of stacking and averaging. In the end, it looks like simple averaging was as good as any other method.</p> <h2>Where did we fail?</h2> <ul> <li><p>We didn't use enough models: we used 5 folds for every model, which limited the number of different models we could run in 2h. For this reason we (I!) neglected some good models like XLMRoBERTa, which gave good results but didn't seem better than other models. The best practice is probably more diversified models, dropping some of the folds from each model.</p></li> <li><p>CV: it might be because of the Spearman metric, or because the training set was too small for 30 classes, or because we did something wrong, but the CV wasn't reliable enough for us (compared to itself, the public LB and the private LB), which meant we didn't make the right choices in selecting models and parameters.</p></li> </ul> <h3>My last observation</h3> <p>At some points in this competition it seemed like the competition was not about NLP but about tricks. <strong>This is a mistake. This competition is about NLP</strong>, and eventually the tricks were only a small part of the full picture; the winning teams were the teams with the best models.</p> <p><strong>As always, I want to thank Kaggle and the competition team for a well organized competition.</strong> </p> <p><a href="/mcggood">@mcggood</a> thanks. When I replaced tokens not with [PAD] but with a meaningful token, the augmentation failed. I think it worked because these networks are originally trained to fill in missing tokens.</p>
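<p>A hedged sketch of the clipping idea described in the Spearman section above, choosing the cutoff so that the rate of exact 0s (or 1s) in the predictions mirrors the training target column; the names and the exact matching rule are my assumptions, not the team's code.</p> <pre><code>import numpy as np


def clip_like_train(pred, train_target):
    """Clip a prediction column so its rate of exact 0s / 1s mirrors the train column."""
    out = pred.copy()
    zero_rate = np.mean(train_target == 0)
    one_rate = np.mean(train_target == 1)
    if zero_rate > 0.5:                               # column dominated by zeros
        out[out <= np.quantile(out, zero_rate)] = 0.0
    if one_rate > 0.5:                                # column dominated by ones
        out[out >= np.quantile(out, 1.0 - one_rate)] = 1.0
    return out
</code></pre>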
Google QUEST Q&A Labeling
18th place solution (hypotheses on the datasets)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Google and Kaggle for hosting this exciting competition. Also, congratulations to all teams! </p> <p>I'm really happy to see many great solutions in discussion threads right after the end of the competition. It made me want to share my own solution and insights with others. </p> <h2>Hypotheses</h2> <p>Before describing modeling and post-processing, let me start by explaining about my hypotheses on this competition's datasets. </p> <h3>Meaning of Target Values</h3> <p>We see that each label has a strange set of discrete target values such as {3/9, 4/9, 1/2, 5/9, 6/9, 7/9, 5/6, 8/9, 9/9}). Although it has been already discussed in some threads like <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/120772">https://www.kaggle.com/c/google-quest-challenge/discussion/120772</a>, I have a different hypothesis on how these target values are generated. My hypothesis mainly consists of the following points: 1. Each target value is obtained from three raters. 2. Each rater chooses one of multiple answer values when annotating some label value. Each label has its own set of answer values. For example, question_well_written has {1, 2, 3}, question_type_choice has {0, 1}, and answer_satisfaction has {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}. I assume answer values 1, 2, and 3 for question_well_written corresponds to "No", "Maybe", "Yes", respectively. <br> 3. Each target value corresponds to (the sum of three annotator's answer values) / (the maximum possible sum of answer values). For example, if three annotators give answer values 2, 2, and 3, respectively for question_well_written, its target value becomes (2 + 2 + 3) / (3 + 3 + 3) = 7 / 9. 4. We are able to have target values {3/9, 4/9, 5/9, 6/9, 7/9, 8/9, 9/9} for question_well_written with 1., 2., and 3. However, it doesn't cover some of actual values 1/2 and 5/6. I guess these values are obtained by cases where there are only two raters for some target values. </p> <p>Although 4. makes me less confident on my hypothesis, I still believe having three raters for each target is more realistic in comparison with having more than 10 raters for each target. </p> <h3>Train/Test split</h3> <p>We can obtain the following information: - There are multiple question-answer pairs sharing common questions in the training dataset. - There are no such pairs in the test dataset (both public and private). - The training and test datasets don't share any common question. </p> <p>This list makes me assume the following dataset split: - Training: (questions with frequency = 1) + (questions with frequency &gt; 1). - Test (public): questions with frequency = 1 - Test (private): questions with frequency = 1. - These three datasets don't share common questions. </p> <p>I started using only questions with frequency = 1 for validation based on this assumption. It changed my single model's CV score from 0.420 to 0.382, which is much closer to my score in test datasets. </p> <h2>Modeling</h2> <h3>Loss function</h3> <p>I used binary cross entropy for each target after applying rank-based min-max scaling. 
Min-max scaling is the same as the one mentioned in the 3rd place solution (<a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129927">https://www.kaggle.com/c/google-quest-challenge/discussion/129927</a>). The use of min-max scaling is based on my hypothesis on target values. For example, I assume 3/9 for question_well_written means a very strong "No". </p> <p>In addition, I replaced each target value with its rank before min-max scaling. It boosts models' performance on labels with skewed target value distributions. The following figure is a distribution of answer_well_written's target values. In this case, separating questions with target values 8 / 9 and 1 has more impact on Spearman's rho in comparison with separating questions with target values 0.5 and 8 / 9. The use of rank-based min-max scaling converts a set of target values from {3/9, 4/9, 1/2, 5/9, 6/9, 7/9, 5/6, 8/9, 9/9} to {0, 0.001, 0.004, 0.008, 0.042, 0.137, 0.233, 0.482, 1.000} and encourages models to detect differences between questions that result in larger impacts on Spearman's rho. I got +0.01 in CV from rank-based min-max scaling. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F145393%2Fe3c21971901fdc4cec33b6691f0835b6%2FScreenshot%20from%202020-02-12%2000-50-39.png?generation=1581436276754519&amp;alt=media" alt="answer_well_written"> </p> <h3>Architecture</h3> <p>My solution is based on two architectures. </p> <strong>Model 1</strong> (Public LB = 0.392, CV = 0.383) <p>It consists of a single BERT (or XLNet) encoder. The input comprises question_title (no truncation), question_body (head-tail truncation), and answer (head-tail truncation). The figure below depicts the overview of its structure. Although all labels share the same encoder, each label has its own independent classification head. Features for each label are collected from three sections with three pooling methods: max-pooling, avg-pooling, and attention-pooling. Then, the collected features are passed to a dropout and a single linear layer. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F145393%2Ff38ab7835bc79641a027e635c97b4711%2Fmodel1.jpg?generation=1581517455669978&amp;alt=media" alt="model1"></p> <strong>Model 2</strong> (Public LB = N/A, CV = 0.390) <p>My second architecture consists of two encoders. It's inspired by <a href="https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer">https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer</a>. The figure below depicts the overview of its structure. The first encoder takes question_title (no truncation) and question_body (head-tail truncation) as its input, and the second one takes question_title (no truncation) and answer (head-tail truncation) as its input. Its head classifier structure and feature extraction method are almost the same as Model 1's. 
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F145393%2F39fb09d700dcec1777d1f84248d008dd%2Fmodel2.jpg?generation=1581517473360813&amp;alt=media" alt="model2"></p> <ul> <li>Ensemble: <ul><li>Model 1 (BERT) * 0.2 + Model 1 (XLNet) * 0.2 + Model 2 (BERT) * 0.3 + Model 2 (XLNet) * 0.3 </li> <li>CV: 0.404</li></ul></li> <li>Training settings <ul><li>Model 1 </li> <li>BERT: batch size = 12, optimizer = AdamW, lr = 5e-5</li> <li>XLNet: batch size = 10, optimizer = AdamW, lr = 4.5e-5</li> <li>Model 2</li> <li>BERT: batch size = 12 (6 * 2), optimizer = AdamW, lr = 5e-5</li> <li>XLNet: batch size = 12 (4 * 3), optimizer = AdamW, lr = 4.5e-5</li></ul></li> </ul> <h2>Post Processing</h2> <p>I tried to find a list of segments that should have the same value for each label by using a greedy algorithm based on OOF predictions.</p> <ol> <li>First, the algorithm tries to find a value x1 such that we can maximize Spearman's rho on the OOF predictions by replacing prediction values in [0, x1) with x1 / 2.</li> <li>Then, it tries to find a value x2 (&gt;= x1) such that we can maximize Spearman's rho by replacing prediction values in [x1, x2) with (x1 + x2) / 2.</li> <li>Keep this procedure going while it successfully finds such an xi.</li> </ol> <p>I checked the score improvement on all labels by 3-fold validation, and I decided to use this post-processing on labels where I could see a score improvement larger than 5e-4. This post-processing increased both the CV and LB scores by around 0.03. </p> <h2>External data for predicting question_type_spelling </h2> <p>If we open the URL of some StackExchange page (<a href="https://english.stackexchange.com/questions/522357/when-did-spelling-ic-words-ick-start-stop-being-popular">https://english.stackexchange.com/questions/522357/when-did-spelling-ic-words-ick-start-stop-being-popular</a>), we can see that there are some tags at the bottom of the question body. There are tags that are closely related to spelling (e.g., "orthography" and "suffixes"), and we can retrieve tags for almost all questions in the competition's datasets. </p> <p>I collected the top 200 tags that most frequently appear with "orthography" and chose 70 of them by removing irrelevant ones like "american-english". I made a very simple classifier that just returns (host is "english.stackexchange.com" or "ell.stackexchange.com") &amp; (question has one of the 70 tags). This classifier gives a high Spearman's rho (0.5-0.7) for question_type_spelling in the training dataset and the public test dataset. However, its score looks worse than the scores obtained by a standard use of model prediction and post-processing. I'm so sad...</p>
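<p>A sketch of the greedy segment search described in the three numbered steps above; the candidate grid, the 5e-4 acceptance rule and all names are my own choices for illustration, not necessarily the author's.</p> <pre><code>import numpy as np
from scipy.stats import spearmanr


def greedy_segments(oof_pred, target, grid=None, min_gain=5e-4):
    """Greedily pick cut points x1 &lt;= x2 &lt;= ...; predictions inside each segment are
    collapsed to the segment midpoint whenever that improves OOF Spearman's rho."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)
    pred = oof_pred.copy()
    base = spearmanr(target, pred).correlation
    cuts, left = [], 0.0
    while True:
        best_gain, best_x = 0.0, None
        for x in grid[grid > left]:
            trial = pred.copy()
            trial[(trial >= left) & (trial < x)] = (left + x) / 2.0
            gain = spearmanr(target, trial).correlation - base
            if gain > best_gain:
                best_gain, best_x = gain, x
        if best_x is None or best_gain < min_gain:
            break
        pred[(pred >= left) & (pred < best_x)] = (left + best_x) / 2.0
        base = spearmanr(target, pred).correlation
        cuts.append(best_x)
        left = best_x
    return cuts, pred
</code></pre> <p>At submission time, the learned cut points would then be applied to the test predictions of the selected labels in the same way.</p>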
Google QUEST Q&A Labeling
19th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you to the Google and Kaggle teams for holding this great competition. I think the balance between needing the latest NLP knowledge and ideas tailored to this specific task is what makes this competition so exciting! I could not get a gold medal, but I’ve enjoyed it and learned a lot.</p> <p>I want to share the insights I’ve gained from more than 150 experiments. My <a href="https://github.com/guchio3/kaggle-google-quest">codes</a> and <a href="https://www.kaggle.com/guchio3/refactored-bert2-robert1-xlnet1-blend-pseudo1">kernels</a> are publicly available now.</p> <h1>preprocess</h1> <ul> <li>add a special token to separate title and question. <ul><li>ex. [CLS] title [NEW_SEP] question [SEP] answer [SEP]</li></ul></li> <li>add category info to the head of the title. <ul><li>ex. [CLS] CAT-TECHNOLOGY title [NEW_SEP] question [SEP] answer [SEP]</li> <li>This method should be better than just concatenating the category to the last classification layer, because the hugging-face model can use the category info while it processes the texts.</li></ul></li> <li>trim the head and tail parts of the texts <a href="https://github.com/guchio3/kaggle-google-quest/blob/decffc69d5657f5114970eb2ea226df8ec8cfaf6/scripts/refactor/datasets.py#L137-L157">like this</a>. <ul><li>ex. I have a dream that one day on the red hills of Georgia, the sons of former slaves and the sons of former slave owners will be able to sit down together at the table of brotherhood.
 -&gt; I have a dream that + at the table of brotherhood.</li></ul></li> </ul> <h1>modeling (used pytorch hugging-face library)</h1> <ul> <li>GKF using question body</li> <li>Average-pool the final layer sequence output and feed it into a single linear layer</li> <li>use 2 separate models for question-type labels and answer-type labels <ul><li>(title-max-len, question-max-len, answer-max-len) : (30, 239*2, 0) and (30, 0, 239*2) for each (if a part is shorter than its max-len, the other parts use the remaining length).</li></ul></li> <li>Ensemble of four models (1 : 0.5 : 0.5 : 1) <ul><li>bert-base-uncased (best snapshot)</li> <li>roberta-base (best snapshot, second-best snapshot)</li> <li>xlnet-base-cased (best snapshot)</li></ul></li> <li>batch size 8, 6 epochs (freezing is crucial) <ul><li>freeze only the hugging-face model for the first epoch, and unfreeze it for the remaining 5 epochs</li></ul></li> <li>BCEWithLogitsLoss</li> <li>Adam w/ cosine scheduler (3e-5 -&gt; 1e-5)</li> <li>1 cycle pseudo labeling <ul><li>concat soft target, hard target, and half hard target. The half hard target is the target where the threshold is applied to a column only if applying it improves the oof score</li></ul></li> </ul> <h1>postprocess</h1> <ul> <li>Search and apply thresholds <ul><li>Get optimized thresholds using scipy.optimize.minimize on the oofs. The initial value is calculated from the percentile point of the oofs. (<a href="https://github.com/guchio3/kaggle-google-quest/blob/master/scripts/get_optR3.py">ref</a>)</li> <li>Each threshold is applied to a column only if the oof score improves by using it.</li> <li>Each threshold is applied to a column only if the cardinality of the column is more than one for the public/private part of the test set.</li></ul></li> <li>Set 0 or 0.5 to question-type-spelling based on the host (kernel based)</li> </ul> <h1>What did not work for me</h1> <ul> <li>Larger batch size</li> <li>Longer sequence size <ul><li>Longer position embedding matrix than 512</li> <li>Cyclic position ids (0 - 29 for title, 0 - 511 for question, 0 - 511 for answer)</li></ul></li> <li>Shorter sequence size</li> <li>Using meta features except for category <ul><li>host, word counts, tokenized word counts, etc …</li></ul></li> <li>Using last N hidden outputs of the hugging-face models</li> <li>Augmentations</li> <li>Manual text preprocessings</li> <li>Changing loss functions <ul><li>MSE, Pair type loss, Focal loss</li></ul></li> <li>Changing optimizers and schedulers <ul><li>AdamW, RMSProp, SGD, Step type schedulers</li></ul></li> <li>Using SWA</li> <li>Expert models for difficult or imbalanced classes</li> <li>Oversampling for the minor values of imbalanced classes</li> <li>Using the other type of models for ensemble <ul><li>RNN based models, GPT2, XLM</li></ul></li> </ul> <p>P.S. Like some people say, I spent a long time trying to reproduce the result of the <a href="https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer">great TF kernel</a> in pytorch. At first, I concluded that the situation was caused by pytorch’s nature, <a href="https://discuss.pytorch.org/t/suboptimal-convergence-when-compared-with-tensorflow-model/5099/18">like this</a>. However, I noticed that the kernel uses snapshot ensembling in its callback function, and this should explain the difference in performance. I think pytorch users are not so familiar with callback functions, so it is easy to overlook this.</p>
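<p>For reference, a minimal sketch of the head-and-tail trimming mentioned in the preprocess section above (the linked datasets.py contains the author's exact logic; the 50/50 split here is only an illustrative default):</p>

```
def trim_head_tail(token_ids, max_len, head_ratio=0.5):
    # Keep the first `head` tokens and the last `max_len - head` tokens, dropping the middle.
    if len(token_ids) <= max_len:
        return token_ids
    head = int(max_len * head_ratio)
    tail = max_len - head
    return token_ids[:head] + token_ids[-tail:]
```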
Google QUEST Q&A Labeling
1st place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Oh that's a nice question. We decided that post-processing leads only to better spearman corr but not to better classification distribution so we left them untouched. </p> <p>Updated with the solution. Thanks for your patience! <a href="https://www.kaggle.com/ddanevskyi/1st-place-solution">Solution kernel</a>. GitHub code is coming!</p> <p>Congratulations to all teams and especially to daring solo players! Thanks to organizers as well!</p> <p>Tremendous feeling for the whole team. I happened to finish one place away from the golden zone in 2 NLP competitions, and there it is. A few posts below I wrote down "respect your efforts, not results" and you'll be there. There is always a room for such motivational quotes. I don't know whether it's harder to compete with yourself or with such great minds. Thanks to the team ( @ddanevskiy , <a href="/kashnitsky">@kashnitsky</a> , <a href="/dmitriyab">@dmitriyab</a> ) for this thrilling race. And thanks to the whole community for ideas, criticism, and a sense of humor. </p> <p><strong>EDIT</strong></p> <p>The community seems to be waiting for a killer feature from us. I don’t reckon we’ve got a single one. Our solution is comprised of three cornerstones: - pretrained language models - pseudo-labeling - post-processing (yes we did one)</p> <p><strong>The basic pipeline</strong></p> <p>Less than 3 weeks ago just when we were done with the TF Q&amp;A competition, we jumped into this one. Oh yes, there were already so many great NLP folks seen on the leaderboard. We started from a public pytorch baseline [(0.377 I guess)] (<a href="https://www.kaggle.com/phoenix9032/pytorch-bert-plain">https://www.kaggle.com/phoenix9032/pytorch-bert-plain</a>) (kudos <a href="/phoenix9032">@phoenix9032</a>) and managed to update it effectively (0.396 for BERT-base and 0.402 for BERT-large) with a couple of tricks: - GroupKFold with question_title groups - Multi-Sample Dropout for Accelerated Training and Better Generalization, <a href="https://arxiv.org/pdf/1905.09788.pdf">paper</a> - Different learning rate settings for encoder and head (deserves a paper <a href="/ddanevskyi">@ddanevskyi</a>) - Using CLS outputs from all BERT layers rather than using only the last one. We did this by using a weighted sum of these outputs, where the weights were learnable and constrained to be positive and sum to 1</p> <p>We experimented with separate weights for different targets and with various loss functions but eventually ended up sticking to simple BCE loss. </p> <p><strong>Pretrained language models</strong></p> <p>When <a href="/dmitriyab">@dmitriyab</a> joined our team, he introduced stackexchange (SE) pretrained LMs with 110k-long dictionary including code and other domain tokens. In addition to the common MLM task, the model also predicted 6 auxiliary targets (<code>question_score, question_view_count, question_favorite_count, answer_score, answers_count, is_answer_accepted</code>) with a linear layer on top of pooled LM output. That looked beneficial enough for me to drop my code replacement experiments and update the existing bert pipeline with a huge pretrained embedding layer -&gt; 0.414. 
In the end <a href="/dmitriyab">@dmitriyab</a> and <a href="/kashnitsky">@kashnitsky</a> trained a few LMs (bert-large, roberta, roberta-large) on 7M SE data using whole-word-masking, but the initial model was still the best. </p> <p><strong>Pseudo-labeling</strong></p> <p>Pseudo-labeling is a well known semi-supervised learning technique within the Kaggle community and sometimes I wonder how they reinvent stuff in academic papers like <a href="https://arxiv.org/pdf/1911.04252.pdf">this one</a> (btw a great result). So we labeled an additional 100k samples using our three basic bert models. First experiments with pseudo-labels showed a huge improvement for the SE pretrained bert (0.414 -&gt; 0.445). That was huge. But the leaderboard did not agree.</p> <p>Previously, my teammates and I used models from all folds to create pseudolabels. This strategy worked well in many cases including the recent Recursion competition, where my team finished 6th. <a href="/ddanevskyi">@ddanevskyi</a> used the same approach for his 3rd place on Freesound. This way of creating pseudolabels gives quite accurate predictions as you are using a fold ensemble, but has an unobvious pitfall. If the pseudolabeled set contains a lot of samples similar to the training set, you could expect that some actual training labels could leak through those samples, even if there is no “direct” target leakage. </p> <p>Imagine that you’ve trained a 5-fold model on the training set then used those 5 folds to create pseudolabels on some external dataset. Then you add those pseudolabels to the training part of each of the 5 folds. Very likely you will see an overoptimistic score on the validation part as 4 out of 5 models used to generate pseudolabels have “seen” the current validation set through their training process.</p> <p>We could expect a lot of very similar questions/answers in SO&amp;SE data, so we clearly couldn’t rely on that sort of pseudolabels. In order to fix the problem, we generated 5 different sets of pseudolabels where for each train/val split we used only those models that were trained using only the current train set. This gave much less accurate pseudolabels, but eliminated any possible source of target leakage.</p> <p>As expected, it didn’t bring as much improvement as the “leaked” approach (0.414 -&gt; 0.422) but that was a wise decision in terms of tackling overfitting. </p> <p><strong>BART</strong></p> <p>One week prior to the deadline, we noticed <a href="/christofhenkel">@christofhenkel</a> ’s post in the external data thread - the one about <a href="https://github.com/pytorch/fairseq/tree/master/examples/bart">fairseq BART</a> . An autoregressive sequence-to-sequence model for text classification? Looked strange to me but at the same time that was something new and daring, and we wouldn’t ignore a hint by such a prominent kaggler. It took us three whole days to train BART on a 1080ti with batch size 2. The model scored 0.427 locally and its predictions decorrelated well with those of BERT. </p> <p>We also trained a few other bert-base, bert-large, roberta-base, and roberta-large models. They all scored around 0.420 locally but didn’t drastically improve our bert+bart baseline. </p> <p><strong>Post-processing</strong></p> <p>And finally, post-processing. Yes we did one. Based on <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/118724">this discussion</a> we found that discretization of predictions for some challenging targets led to better spearman corr. 
So we used no thresholds but discretized such columns based on their distribution in the training set. That simple heuristic boosted the score by 0.027-0.030 for almost every single model (up to 0.448 bert-base, 0.454 bart). </p> <p><strong>Finish</strong></p> <p>Our local validation and public score stayed the same for the whole last week - 0.468. Yesterday, from early morning, with all these berts, barts and robertas we were not able to reproduce the public score, having only 0.462 submissions, but they seemed to work the best -&gt; 0.430-0.433 on the private leaderboard (unfortunately we didn’t select the best one). But that was good enough. </p> <p><strong>What didn’t happen to work well</strong></p> <ul> <li>StackExchange meta-features used to boost our score in the very beginning but in the end they made no significant difference</li> <li>GPT, GPT2-medium, GPT2-large showed mediocre scores</li> <li>backtranslation</li> <li>stacking. Instead, linear blends worked unreasonably well</li> </ul> <p>Wonderful teamwork and team patience (not really haha). Thanks to you all. We are making our submission kernels and datasets public. By the end of the day I’m releasing our code on GitHub. </p> <p><strong>EDIT</strong> Code released <a href="https://github.com/oleg-yaroshevskiy/quest_qa_labeling">here</a>. There are 3 separate branches. I'll update with a nice readme soon. </p> <p><em>Oleg Yaroshevskiy, Dmitriy Danevskiy, Yury Kashnitsky, Dmitriy Abulkhanov</em></p>
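<p>As an illustration of the "weighted sum of CLS outputs from all BERT layers, with learnable weights constrained to be positive and summing to 1", here is a small PyTorch sketch; the layer count, hidden size and head are placeholders and this is not the team's released code:</p>

```
import torch
import torch.nn as nn

class LayerWeightedCLS(nn.Module):
    # Combine the [CLS] vector of every encoder layer with learnable weights that
    # are kept positive and sum to 1 via a softmax over per-layer logits.
    def __init__(self, num_layers=13, hidden_size=768, num_targets=30):
        super().__init__()
        self.layer_logits = nn.Parameter(torch.zeros(num_layers))
        self.head = nn.Linear(hidden_size, num_targets)

    def forward(self, hidden_states):
        # hidden_states: tuple of (batch, seq_len, hidden) tensors, e.g. the
        # `hidden_states` output of a transformers BERT with output_hidden_states=True
        cls = torch.stack([h[:, 0] for h in hidden_states], dim=0)   # (layers, batch, hidden)
        w = torch.softmax(self.layer_logits, dim=0).view(-1, 1, 1)
        pooled = (w * cls).sum(dim=0)                                # (batch, hidden)
        return self.head(pooled)
```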
Google QUEST Q&A Labeling
21st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I want to thank my teammates <a href="/brightertiger">@brightertiger</a> <a href="/khyeh0719">@khyeh0719</a> <a href="/arvissu">@arvissu</a> <a href="/qbenrosb00">@qbenrosb00</a>; we did great work. And thanks to Google and Kaggle for hosting this great competition. I will briefly summarize our methods.</p> <h2>Preprocessing</h2> <p>We tried some preprocessing before training the BERT models and found that CV and LB were similar to no preprocessing. To keep the code clean, we didn't do any preprocessing to train the SoTA models. Moreover, we used the head and tail parts of the texts as model input.</p> <h2>Input</h2> <p>Like other teams, we trained two BERT models for question labels (21) and answer labels (9). * Question : <code>[cls]+ title+[sep]+question+[sep]</code> * Answer : <code>[cls]+title+[sep]+question+[sep]+answer[sep]</code></p> <h2>Modeling</h2> <p>We tried BERT with different architectures. * Vanilla BERT: 0.35x LB * <a href="https://www.kaggle.com/m10515009/customizedbert-pytorch-version-training">Customized BERT+head and tail part of the texts:</a> 0.392 LB * Separate two BERTs for question and answer: 0.396 LB * Separate two BERTs for the question and answer with special tokens: CV 0.415 LB 0.405</p> <p>We found that using two BERTs with special tokens gave the best result on LB, so we applied the same method to train GPT2 and RoBERTa.</p> <p>We also used BERT outputs as embeddings to train an LSTM and got great results (CV: 0.418, LB: 0.396).</p> <p>A summary of the models we tried: * BERT-base CV 0.415 LB 0.405 * RoBERTa CV 0.411 LB 0.398 * GPT2 CV: 0.418 LB: 0.396 * BERT-large * BERT-large-wwm * BERT-large-wwm-squad * BERT-RNN CV: 0.418 LB: 0.396 * Pretrained Embedding + NN * RNN+USE</p> <h2>Ensemble</h2> <p>The models we used for the ensemble are BERT, RoBERTa, GPT2 and BERT-RNN. We only used a simple average for the ensemble. </p> <h2>Postprocessing</h2> <p>We only used the threshold method for postprocessing and got a ~0.004 improvement on LB. I don't like the postprocessing method. However, in this competition, <strong>postprocessing is all you need</strong>.</p> <p>``` def postProcessing(x):</p> <pre><code>x = np.where(x &gt;= 0.9241, 1.0, x) x = np.where(x &lt;= 0.0759, 0.0, x) return x </code></pre> <p>targets = ['question_conversational', 'question_type_compare', 'question_type_consequence', 'question_type_definition', 'question_type_entity', 'question_type_choice']</p> <p>sub.loc[:, targets] = postProcessing(sub.loc[:, targets].values) ```</p> <h2>What did not work for us</h2> <ul> <li>BERT-large</li> <li>Pretraining BERT-base on the SQuAD2 dataset.</li> <li>Gradient accumulation </li> </ul> <p>Thank you</p> <p>Thank you ^^</p>
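<p>A sketch of how the two input formats above can be built with a recent huggingface tokenizer; the model name, max length and helper names are placeholders, and the team's exact construction may differ:</p>

```
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
MAX_LEN = 512  # illustrative

def question_input(title, question):
    # [CLS] title [SEP] question [SEP]
    enc = tokenizer(title, question, max_length=MAX_LEN, truncation=True,
                    padding="max_length")
    return enc["input_ids"], enc["attention_mask"]

def answer_input(title, question, answer):
    # [CLS] title [SEP] question [SEP] answer [SEP], built token by token
    tokens = ([tokenizer.cls_token] + tokenizer.tokenize(title) + [tokenizer.sep_token]
              + tokenizer.tokenize(question) + [tokenizer.sep_token]
              + tokenizer.tokenize(answer) + [tokenizer.sep_token])
    return tokenizer.convert_tokens_to_ids(tokens)[:MAX_LEN]
```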
Google QUEST Q&A Labeling
22nd place solution(NLP beginner)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks kaggle for holding a great competition!</p> <p>I'm novice at NLP, so I learned a lot from great kernels about preprocessing, how to build Bert model, etc... My code is based on <a href="https://www.kaggle.com/idv2005/pytorch-bert-baseline">this kernel</a> by @idv2005 and <a href="https://www.kaggle.com/adityaecdrid/pytorch-bert-end-to-end-with-cv">this kernel</a> by @adityaecdrid ! Thank you very much!</p> <p>Here is the summary of my work.(Sorry for poor English.)</p> <h1>Solution</h1> <ul> <li>bert-base-cased, bert-base-uncased, xlnet-base-uncased and Twin models(XLNet, Bert).</li> <li>Twin model idea is from <a href="https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer">https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer</a>. thanks a lot!. Twin models have Bert for Question and Bert for Answer and treat sequence output with 3 custom layers. <ol><li>Average pooling for QBert/ABert =&gt; concatenate(shape=768*2) =&gt; dense(30)</li> <li>Average pooling for QBert =&gt; dense(22), Average pooling for ABert =&gt; dense(8), concatenate QBert dense output and ABert dense output.</li> <li>concatenate QBert/ABert sequence output(shape=512, 768*2) =&gt; Transformers =&gt; average pooling =&gt; dense(30)</li></ol></li> <li>Final output: a * 0.25 + b * 0.25 + c * 0.5</li> <li>postprocess (thank you for <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/118724">this discussion</a>) ``` best_bin_dict = {'question_type_spelling': 4, 'question_type_instructions': 8, 'question_type_entity': 9, 'question_type_definition': 8, 'question_type_consequence': 4, 'question_type_compare': 8, 'question_type_choice': 8, 'question_opinion_seeking': 50, 'question_not_really_a_question': 4, 'question_multi_intent': 9, 'question_interestingness_self': 9, 'question_interestingness_others': 100, 'question_has_commonly_accepted_answer': 5, 'question_fact_seeking': 10, 'question_expect_short_answer': 50, 'question_conversational': 8, 'question_body_critical': 100, 'question_asker_intent_understanding': 100, 'answer_well_written': 9, 'answer_type_procedure': 50, 'answer_type_instructions': 9, 'answer_relevance': 100, 'answer_level_of_information': 50}</li> </ul> <p>df_sub = pd.read_csv("../input/google-quest-challenge/sample_submission.csv") pred_final = np.array(preds).sum(axis=0) for i, col in enumerate(df_sub.columns[1:]): df_sub[col] = pred_final[:, i] not_type_spelling_idx = df_test.query("host not in ['ell.stackexchange.com', 'english.stackexchange.com']").index</p> <p>for col in df_sub.columns[1:]: if col in best_bin_dict: n_bins = best_bin_dict[col] binned = pd.cut(df_sub[col].values, n_bins, retbins=True, labels=np.arange(n_bins)/(n_bins-1))[0] if col == "question_type_spelling": binned[not_type_spelling_idx] = 0 df_sub[col] = binned df_sub.to_csv("submission.csv", index=False) ```</p> <h1>Not worked</h1> <ul> <li>DistilBert</li> <li>bert-large-cased, bert-large-uncased, bert-large-uncased-whole-masking</li> <li>Categories/hosts Embedding</li> <li>L1loss, L2Loss, Focal Loss (Finally I use BCELoss... 
but I can't understand why BCELoss is best unless it is not symmetric in this case... refer=&gt; <a href="https://www.kaggle.com/mavillan/understanding-the-binary-cross-entropy">https://www.kaggle.com/mavillan/understanding-the-binary-cross-entropy</a>)</li> <li>Subtask: predict categories.</li> <li>Concat CLS token sequence and average pooling.</li> </ul> <h1>Others</h1> <ul> <li>I spent about $200 on this competition.</li> </ul>
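<p>To make the twin-model heads above more concrete, here is a small sketch of head (1); the original is TF2/Keras, so this PyTorch version is only an illustration of the idea, and the hidden size and masked pooling details are my assumptions:</p>

```
import torch
import torch.nn as nn

class TwinHead1(nn.Module):
    # Head (1): average-pool the sequence outputs of the question BERT and the
    # answer BERT, concatenate them, and map the result to the 30 targets.
    def __init__(self, hidden_size=768, num_targets=30):
        super().__init__()
        self.fc = nn.Linear(hidden_size * 2, num_targets)

    @staticmethod
    def masked_mean(seq, mask):
        mask = mask.unsqueeze(-1).float()
        return (seq * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)

    def forward(self, q_seq_out, a_seq_out, q_mask, a_mask):
        pooled = torch.cat([self.masked_mean(q_seq_out, q_mask),
                            self.masked_mean(a_seq_out, a_mask)], dim=1)
        return torch.sigmoid(self.fc(pooled))
```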
Google QUEST Q&A Labeling
23th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for sharing! We have not tried RoBERTa... </p> <p>Sorry, I changed my kernel to public.</p> <p>First, thanks to my teammate @agatan .</p> <h2>Pre-processing</h2> <ul> <li>apply html unescape to the sentences</li> <li>sentences longer than the maximum length were trimmed at the head and tail parts</li> </ul> <h2>Modeling</h2> <ul> <li>We trained a Question BERT-base model and a Question-and-Answer BERT-base model separately on kernels</li> <li>The Question model predicts the question targets and the Q-A model predicts all targets</li> <li>3 folds and 3 epochs with GroupKFold</li> <li>BCE + margin ranking loss</li> </ul> <h2>Post-processing</h2> <ul> <li>used lightgbm as a second-stage stacking model</li> <li>max_depth=1 and lr=0.1 were best in my experiments</li> <li>Input meta features such as text length to lightgbm in addition to the predicted values</li> </ul> <h2>Best model</h2> <ul> <li>CV: 0.3851</li> <li>Public: 0.45979</li> <li>Private: 0.41440</li> </ul> <p>We shared our kernel, please take a look if you like. <a href="https://www.kaggle.com/shuheigoda/23th-place-solusion">https://www.kaggle.com/shuheigoda/23th-place-solusion</a></p> <p>Thank you !!</p> <p>Thank you! Details of the loss function can be found in our kernel. <a href="https://www.kaggle.com/shuheigoda/23th-place-solusion">https://www.kaggle.com/shuheigoda/23th-place-solusion</a></p> <p>Compared to single BCE, the custom loss boosted the public score by +0.002.</p> <p>Thank you!</p>
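<p>A minimal sketch of the second-stage LightGBM stacking described above, using the reported max_depth=1 and lr=0.1; the choice of one regressor per target column, the number of trees and the exact meta features are my assumptions, and in practice this would be fit inside the GroupKFold splits:</p>

```
import numpy as np
import lightgbm as lgb

def fit_stackers(oof_preds, meta_feats, targets, max_depth=1, lr=0.1):
    # One shallow LightGBM regressor per target column, trained on the
    # first-stage OOF predictions plus meta features (e.g. text length).
    X = np.hstack([oof_preds, meta_feats])
    models = []
    for col in range(targets.shape[1]):
        model = lgb.LGBMRegressor(max_depth=max_depth, learning_rate=lr, n_estimators=100)
        model.fit(X, targets[:, col])
        models.append(model)
    return models

def predict_stackers(models, test_preds, test_meta):
    X = np.hstack([test_preds, test_meta])
    return np.column_stack([m.predict(X) for m in models])
```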
Google QUEST Q&A Labeling
24th place - mixing all I can - BERT+XLNET+GPT2+LGB
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My first share of anything in public. I got to the Expert tier, so I guess I've got to share back, even if the result is not in the very top!</p> <p>The key ideas which made an impact: <strong>Pre-processing:</strong> - Feature engineering - number of punctuation marks, nouns and the like, number of common words between question and answer, etc. Those are used in a separate model (LGB). - Two separate sets of tokenized inputs. The first set consists entirely of the question title and question body and is used to predict categories relying on the question only (first 21 outputs). The second set consists of the question title, question body and answer, and is used for the remaining 9 categories related to how the answer relates to the question, plus answer-only features.</p> <p><strong>Modelling:</strong> - Train BERT, XLNET, GPT2, all in base configuration - 1 model of each kind for each of the 2 sets of outputs (question only and question + answer). Use one-cycle training, standard Adam, early stopping on decrease in the validation metric (spearmanr). Train for 3 folds (but using 10-fold groupkfold so that each fold has a larger train set). - Train LGB using just the engineered features, one LGB model per output variable.</p> <p><strong>Post-processing:</strong> - Mix all the models using softmaxed validation scores as weights, with a beta of 18.5 as this maximised the validation score (optimised in a separate kernel). - Iterate through different rounding policies for each output variable and use the best one for each. Rounding policies considered: normal rounding with 1-5 decimal places, and ceil/floor by multiplying the prediction by 10^X (where X is in the range 1-5), applying floor/ceil, and then dividing by 10^X.</p> <p>PS: the initial kernel is forked from <a href="https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer">Bert-base TF2.0 (now Huggingface transformer)</a>, big thanks @akensert for his share! I would not have been able to accomplish such results without this, as I joined relatively late (19 days before the end).</p> <p>PPS: The actual code is relatively dirty and relies on saved model weights. Still, in case anyone wants to have a look, it is <a href="https://www.kaggle.com/bookerd/2nd-gen-predict-only-all-available-models">here</a> (v19 is the final submission).</p>
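<p>The "softmaxed validation scores as weights" mixing can be sketched as below; only beta=18.5 comes from the write-up, and the example scores are made up:</p>

```
import numpy as np

def softmax_blend_weights(val_scores, beta=18.5):
    # Temperature-scaled softmax over per-model validation Spearman scores.
    s = np.asarray(val_scores, dtype=float)
    w = np.exp(beta * (s - s.max()))   # subtract the max for numerical stability
    return w / w.sum()

# illustrative usage with made-up validation scores for six models
weights = softmax_blend_weights([0.395, 0.401, 0.388, 0.392, 0.405, 0.399])
# blended = sum(w * p for w, p in zip(weights, model_predictions))
```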
Google QUEST Q&A Labeling
26 place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Oh, really? In my case, all three models improve when adding the category. I can't explain it, sorry...</p> <p>Congrats to all winners! And thanks to the organizers! The competition was very, very difficult for me, but fortunately I got 26th place. I'll share what I learned.</p> <h3>modeling</h3> <p>I used a huggingface transformer and customized the head of the model.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1483555%2F7d88db1391baa10f5ce96f74490d2bd1%2F1.png?generation=1581475630913742&amp;alt=media" alt=""></p> <p>As you know, I imitated the 1st place solution of the <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/103280">jigsaw</a> competition.</p> <p>BERT-base-cased, BERT-base-uncased and XLNet-base-cased with this head were weighted at 2:4:4.</p> <h3>Parameter</h3> <ul> <li>10 GroupFold</li> <li>epoch: 3</li> <li>batch size: 8</li> <li>optimizer: AdamW <ul><li>OneCycleLR</li> <li>max_lr=5e-5 (decay*0.9)</li></ul></li> <li>loss: BCE + MSE (MSE weighted by label frequency)</li> </ul> <h3>Token</h3> <p>I added a category token, like below:</p> <p><code> [CLS] [CATEGORY_TOKEN] question_title [SEP] question_body [SEP] answer [SEP] </code> It raised my CV by 0.05.</p> <h3>Optimize binning</h3> <p>I read <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/119177">this Discussion</a>. I calculated the common denominator and binned the predicted values. I optimized the binning with an approach like <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/76107">OptimizeRounder</a>.</p> <h3>What did not work for me</h3> <ul> <li>pseudo labeling</li> <li>MultiSample Dropout</li> <li>SWA</li> <li>AdaBound</li> <li>FocalLoss</li> <li>RoBERTa, ALBERT</li> <li>Prediction with a Special Token for each target</li> </ul> <h3>my code</h3> <ul> <li>train: <a href="https://github.com/trtd56/KaggleQuest">https://github.com/trtd56/KaggleQuest</a></li> <li>predict : <a href="https://www.kaggle.com/takamichitoda/26th-place-solution?scriptVersionId=28524791">https://www.kaggle.com/takamichitoda/26th-place-solution?scriptVersionId=28524791</a></li> </ul> <p>Thank you!!</p> <p>My CV scores are: - BERT-uncased: 0.4109153513 - BERT-cased: 0.4047241234 - XLNet-cased: 0.407151191</p> <p>and the blend results I tried are: |BERT-uncased|BERT-cased|XLNet-cased|CV| | -- | -- | -- | -- | |0.3|0.3|0.3|0.421838540812048| |0.5|0.2|0.3|0.42176727179908| |0.4|0.2|0.4|0.422223424005272|</p>
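<p>A sketch of adding a category token with the huggingface library; the exact token strings are placeholders, and the point is simply that new special tokens must be registered and the embedding matrix resized:</p>

```
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# one special token per category (placeholder strings)
category_tokens = ["[CAT_TECHNOLOGY]", "[CAT_STACKOVERFLOW]", "[CAT_CULTURE]",
                   "[CAT_SCIENCE]", "[CAT_LIFE_ARTS]"]
tokenizer.add_special_tokens({"additional_special_tokens": category_tokens})
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix to match

# the category token is then simply prepended to the title before tokenization
ids = tokenizer.encode("[CAT_TECHNOLOGY] " + "How do I flash a BIOS?")
```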
Google QUEST Q&A Labeling
30th Sliver Solution (Post-Processing Magic Discovered)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners! Learnt more about BERT-based model architectures in this competition, and would like to share some of my takeaways.</p> <h2>Text Preprocessing</h2> <p>I only joined the competition in the last two weeks, therefore I didn't spend too much time on the preprocessing step, but some simple tricks did improve the score. Assume that the built-in tokenizers from pretrained BERTs could do a good job for further preprocessing.</p> <ul> <li>Remove extra white spaces to make text more dense</li> <li>Unescape html entities, like <code>&amp; lt;</code>, <code>&amp; equals;</code>, <code>&amp; gt ;</code>, ...</li> <li>Extract plain text from html tags if found (using <code>get_text()</code> of BeautifulSoup)</li> <li>Remove words starting with <code>\</code> and <code>$</code> (mostly for Latex keywords and some syntax keywords)</li> </ul> <h2>Custom BERT Models for Ensembles</h2> <p>As a fan of ensemble learning, I always believe that the <strong>model diversity</strong> is the key for success. I trained 5 BERT-based models with slightly different custom layers, but they share the same structure for BERT embeddings: <code>[CLS] title [SEP] question [SEP]</code> for question text and <code>[CLS] answer [SEP]</code> for the answer text. I only used the last hidden state output embeddings of <code>[CLS]</code> token from BERT models to combine with other layers (pooler output performed worse).</p> <ul> <li><p><strong>Topology 1: Roberta-Base, Xlnet-Base-Cased</strong></p> <ul><li>2 BERT embeddings (CLS): q_embed, a_embed</li> <li>3 categorical embeddings: Concat(cate_embeds) -&gt; Dense(128, relu)</li> <li>2 separate FC layer paths <ul><li>Concat(q_embed, cate_embeds_dense) -&gt; Dense(256, relu) -&gt; Dense(21, sigmoid)</li> <li>Concat(a_embed, cate_embeds_dense) -&gt; Dense(256, relu) -&gt; Dense(9, sigmoid)</li></ul></li></ul></li> <li><p><strong>Topology 2: Roberta-Base, Bert-Base-Uncased, Bert-Base-Cased</strong></p> <ul><li>2 BERT embeddings (CLS): q_embed, a_embed</li> <li>2 separate FC layer paths <ul><li>q_embed -&gt; Dense(256, relu) -&gt; Dense(21, sigmoid)</li> <li>a_embed -&gt; Dense(256, relu) -&gt; Dense(9, sigmoid)</li></ul></li></ul></li> </ul> <p>I also discovered that splitting questions and answers into two separate fully-connected layer paths works better than mixing both. It makes sense to me as the labeling of classes by the voters may focus on the content of <code>title+question</code> and <code>answer</code> separately. Categorical embedding layers for <code>host</code>, 1st token of <code>host</code> and <code>category</code> columns contributed the ensemble score.</p> <p>The learning rate of all models is fixed to <code>2e-5</code>, also applied <code>ReduceLROnPlateau</code> for LR decay (factor=0.1) and a custom early stopping callback based on validation Spearman score.</p> <p>The final model is a weighted average of those models with a post processing to optimize ranks.</p> <h2>Determinism on TensorFlow 2.1</h2> <p>Reproducibility had been an issue for tensorflow/keras, but this repo from Nvidia helped me to control the determinism to a great deal! 
Now we can get almost the same result in multiple runs using the same random seed. This gives us a clear view about the relative performance of all experiments, and then we can gradually improve the models by the right setup and approaches. <a href="https://github.com/NVIDIA/tensorflow-determinism">https://github.com/NVIDIA/tensorflow-determinism</a></p> <h2>Post Processing Magic</h2> <p>Lost of people were discussing about what is the actual trick/magic that can boost the Spearman correlation score. I was originally having no clues about it, but after studying the definition of Spearman correlation and the patterns inside the training set labels, I discovered that we could utilize fixed percentiles of label values to approximate to the optimal rank in each class.</p> <p>I searched from 1 to 100 as the divisor for fixed percentile intervals using out-of-fold prediction from one of the best ensembles. I finally chose 60 as the fixed divisor because it consistently boosted the score on both local CV and public LB (+~0.03-0.05).</p> <p>The code is very simple, given the unique labels of training set as the distribution samples: ``` y_labels = df_train[output_categories].copy() y_labels = y_labels.values.flatten() unique_labels = np.array(sorted(np.unique(y_labels))) unique_labels</p> <p>array([0. , 0.2 , 0.26666667, 0.3 , 0.33333333, 0.33333333, 0.4 , 0.44444444, 0.46666667, 0.5 , 0.53333333, 0.55555556, 0.6 , 0.66666667, 0.66666667, 0.7 , 0.73333333, 0.77777778, 0.8 , 0.83333333, 0.86666667, 0.88888889, 0.9 , 0.93333333, 1. ]) ```</p> <p>I created 60 optimal percentiles: ``` denominator = 60 q = np.arange(0, 101, 100 / denominator) exp_labels = np.percentile(unique_labels, q) exp_labels</p> <p>array([0. , 0.08 , 0.16 , 0.21333333, 0.24 , 0.26666667, 0.28 , 0.29333333, 0.30666667, 0.32 , 0.33333333, 0.33333333, 0.33333333, 0.34666667, 0.37333333, 0.4 , 0.41777778, 0.43555556, 0.44888889, 0.45777778, 0.46666667, 0.48 , 0.49333333, 0.50666667, 0.52 , 0.53333333, 0.54222222, 0.55111111, 0.56444444, 0.58222222, 0.6 , 0.62666667, 0.65333333, 0.66666667, 0.66666667, 0.66666667, 0.68 , 0.69333333, 0.70666667, 0.72 , 0.73333333, 0.75111111, 0.76888889, 0.78222222, 0.79111111, 0.8 , 0.81333333, 0.82666667, 0.84 , 0.85333333, 0.86666667, 0.87555556, 0.88444444, 0.89111111, 0.89555556, 0.9 , 0.91333333, 0.92666667, 0.94666667, 0.97333333, 1. ]) ```</p> <p>And a mapping function to align BERT outputs to the closest percentile value. ``` def optimize_ranks(preds, unique_labels): new_preds = np.zeros(preds.shape) for i in range(preds.shape[1]): interpolate_bins = np.digitize(preds[:, i], bins=unique_labels, right=False)</p> <pre><code> if len(np.unique(interpolate_bins)) == 1: # Use original preds new_preds[:, i] = preds[:, i] else: new_preds[:, i] = unique_labels[interpolate_bins] return new_preds </code></pre> <p>weights = [1.0, 1.0, 1.0, 1.0, 1.0] oof_preds = val_ensemble_preds(all_val_preds, weights) magic_preds = optimize_ranks(oof_preds, exp_labels) blend_score = compute_spearmanr(outputs, magic_preds) ```</p> <p>The Spearman correlation will become NaN if the output column contains 1 unique value, because in this case the standard deviation will be zero and caused divide-by-zero problem (submission error). 
The trick I used is to use original predictions from BERT models for that column.</p> <p>Here is a summary table of original scores versus magic-boosted scores: | Model | Local CV without Magic | Local CV with Magic | Public LB with Magic | Private LB with Magic | |------------------------|-----------------------:|--------------------:|---------------------:|----------------------:| | Roberta-Base (T1) | 0.395972 | 0.414739 | 0.43531 | 0.40019 | | Xlnet-Base-Cased (T1) | 0.392654 | 0.407847 | 0.42771 | 0.39609 | | Roberta-Base (T2) | 0.398664 | 0.422453 | 0.43522 | 0.40242 | | Bert-Base-Uncased (T2) | 0.389013 | 0.398852 | 0.41844 | 0.39075 | | Bert-Base-Cased (T2) | 0.387040 | 0.400199 | 0.42026 | 0.38455 | | Final Ensemble | 0.392669 | 0.438232 | 0.44238 | 0.41208 |</p> <h2>Things That Didn't Work for Me</h2> <p>They produced worse results on both local CV and LBs - SpatialDropout1D for embeddings and Dense dropouts - Separate BERT embeddings for title and question - Batch normalizations for embeddings and dense layers</p> <h2>Source Code</h2> <p><strong>Final Submission Inference Code:</strong> <a href="https://www.kaggle.com/markpeng/ensemble-5models-v4-v7-magic">https://www.kaggle.com/markpeng/ensemble-5models-v4-v7-magic</a></p> <p><strong>Full Code:</strong> <a href="https://github.com/guitarmind/kaggle_google_quest_qa_labeling">https://github.com/guitarmind/kaggle_google_quest_qa_labeling</a></p>
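<p>For illustration, the two dense paths of Topology 2 above can be sketched in Keras on top of precomputed [CLS] embeddings (in the full model these inputs come from the BERT encoders); the 2e-5 learning rate and layer sizes follow the write-up, while the loss and other details are my assumptions:</p>

```
import tensorflow as tf
from tensorflow.keras import layers, Model

HIDDEN = 768  # illustrative BERT hidden size

q_cls = layers.Input(shape=(HIDDEN,), name="question_cls")
a_cls = layers.Input(shape=(HIDDEN,), name="answer_cls")

# separate fully-connected paths for question (21) and answer (9) targets
q_out = layers.Dense(21, activation="sigmoid")(layers.Dense(256, activation="relu")(q_cls))
a_out = layers.Dense(9, activation="sigmoid")(layers.Dense(256, activation="relu")(a_cls))

out = layers.Concatenate()([q_out, a_out])  # 30 target columns
model = Model(inputs=[q_cls, a_cls], outputs=out)
model.compile(optimizer=tf.keras.optimizers.Adam(2e-5), loss="binary_crossentropy")
```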
Google QUEST Q&A Labeling
39th place solution, submission errors and memory leakage
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! I got it 😄 </p> <p>I had also already found the correct link via his profile. Anyway, thanks for sharing!</p> <p>Ordinarily, for tensorflow + keras I use multiprocessing to fix such issues or to train different CV folds simultaneously on different GPUs. But in a jupyter notebook it does not help. Pytorch looks much better here...</p> <p>First of all I would like to say thank you to the <strong><em>organizers</em></strong> and <strong><em>all teams</em></strong>! It was a very interesting and fruitful competition.</p> <h3>Agenda:</h3> <ul> <li>Models and Training </li> <li>Post-Processing</li> <li><strong>Pain 1 - submission errors</strong></li> <li><strong>Pain 2 - memory leakage using "load_weights" in keras</strong></li> </ul> <h3>Models and training</h3> <ul> <li>Dedicated BERT models for questions, answers, and both questions and answers</li> <li>GroupKFold with question_title groups</li> <li>A weighted BCE loss function was the best. Higher weights were assigned to columns with a small amount of non-zero elements, like "question type spelling"</li> <li>Blending strategy</li> <li>I tried a weighted MSE loss function and a mix of MSE + BCE, but results were worse than using BCE standalone</li> </ul> <h3>Post-Processing</h3> <ul> <li>"0,1 mapping" - mapping of high values to 1.0 and low values to 0.0 worked the best. The percentage of values to be mapped is chosen for each column individually (an optimization problem was solved for each predicted column of the training set)</li> <li>"Optimizer rounder" helped me a lot to achieve a silver medal in the previous competition "2019 Data Science Bowl". I was happy to find discrete classes in our targets (caused by the limited number of assessors). I tried different quantization strategies:</li> </ul> <p><em><strong>standard:</strong></em> global optimization, where the set of classes was set up using all unique values in the target columns over the whole training set (single codebook case)</p> <p><em><strong>advanced:</strong></em> column-wise optimization, where the number of classes was equal to the number of unique values in each target column in the training set (multiple codebooks case)</p> <p>Using "Optimizer rounder" standalone (standard and advanced) gave a significant LB improvement (+0.02), but using it jointly with "0,1 mapping" led to an LB decrease (see scores below). Also, it is well-known that "Optimizer rounder" is a very risky approach from an overfitting point of view. Therefore I finally decided to choose two submissions: with and w/o "Optimizer rounder". When the private results were published I saw that I was right and "Optimizer rounder" made the final score lower.</p> <pre><code>public 0.43318 -&gt; private 0.40901 ("0,1 mapping") public 0.43050 -&gt; private 0.40635 ("0,1 mapping" + "Optimizer rounder") </code></pre> <p>The solution w/o "Optimizer rounder" pushed me 30 positions up on the private test set.</p> <h3>Pain 1 - submission errors</h3> <p>A lot of time was spent overcoming submission errors: - <strong>Submission CSV Not Found</strong> - <strong>Submission Scoring Error</strong></p> <p>I met such errors more than 30 times. It was really annoying. 
The worst thing is that there is no additional information from the system to understand what went wrong. I experimentally found that <strong>"Submission CSV Not Found"</strong> is usually caused by the RAM restriction. It occurs because the private test set is 7 times bigger than the public one. <strong>"Submission Scoring Error"</strong> is caused by columns with a small amount of non-zero elements. The reason lies in how the correlation metric is calculated.</p> <h3>Pain 2 - memory leakage in keras</h3> <p>I met a problem with the "load_weights" method in keras when performing blending. It loads model weights to RAM but does not free the memory after clear_session(). It is a well-known issue that has been discussed many times in different resources, but nothing changed (for example here <a href="https://github.com/keras-team/keras/issues/7271">https://github.com/keras-team/keras/issues/7271</a> and here <a href="https://github.com/tensorflow/tensorflow/issues/27511">https://github.com/tensorflow/tensorflow/issues/27511</a>). Unfortunately the latest version of keras, 2.3.1, still has this issue.</p> <p>The possible reason according to @robieta is the following: <strong>"The issue is with K.clear_session(), which clears the default graph. In 1.x, Keras uses the default tf graph, but in 2.0 Keras manages its own graph. So the clear session call isn't actually clearing the keras graph and models are accumulating there and preventing memory from being freed. It should be a simple fix to make clear_session clears the right graph in 2.0"</strong></p> <p>I tried multiple things to overcome it, including but not limited to: - calling the garbage collector ("del" for all variables + "gc.collect()") - a multiprocessing strategy where tensorflow and keras were imported in a separate thread, as well as the models with weights.</p> <p>But nothing helped. This limited the number of models that I could use in blending. I hope that it will be fixed someday. Please let me know if you are aware of how to fix this bug.</p> <h3>Thanks for reading and good luck in future competitions!</h3>
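<p>A minimal sketch of the "0,1 mapping" idea above; the per-column fractions are placeholders, whereas in the write-up they were found by solving an optimization problem on the training predictions of each column:</p>

```
import numpy as np

def zero_one_mapping(preds, top_frac, bottom_frac):
    # Map the highest `top_frac` of a column to 1.0 and the lowest `bottom_frac` to 0.0.
    out = preds.astype(float).copy()
    hi = np.quantile(out, 1.0 - top_frac)
    lo = np.quantile(out, bottom_frac)
    out[out >= hi] = 1.0
    out[out <= lo] = 0.0
    return out

# illustrative per-column usage with made-up fractions
# sub[col] = zero_one_mapping(sub[col].values, top_frac=0.05, bottom_frac=0.30)
```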
Google QUEST Q&A Labeling
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you!</p> <p>golden section search: 1. take two points and evaluate the score at each point. 2. take the inner dividing point at the golden ratio closer to the worse point. 3. take the better point and the new point, and evaluate the scores. 4. repeat 2 and 3.</p> <p>It is better to see <a href="http://fourier.eng.hmc.edu/e176/lectures/ch3/node3.html">here</a> for more explanation.</p> <p>Golden section search assumes convexity of the search space, so it is unlikely to fall into local optima. I didn't want to overfit CV or public LB, so I used a weak optimization method.</p> <p>I used google slides. :)</p> <p>Thank you very much! </p> <p>Thank you! :)</p> <p>&gt; Why did you choose to clip predictions?</p> <p>I chose clipping because I wanted to avoid over-post-processing, which leads to unstable CV and LB scores.</p> <p>&gt; Did you end up with narrow intervals for predictions?</p> <p>No, I restricted the possible range of thresholds in advance. <code> self.ab_start = [(0., 0.2), (0.8, 1.)] </code></p> <p>First of all, I'd like to say thank you to the organizers and all teams, and congrats to all the winners. Here is a brief summary of my solution. - Solution overview - Model architectures - Training strategy - Post-processing - Blending strategy - What didn't work for me</p> <h2>Solution overview</h2> <ul> <li>text columns + category column</li> <li>various text truncation <ul><li>pre-truncate</li> <li>post-truncate</li> <li>head + tail tokens (<a href="https://arxiv.org/abs/1905.05583">https://arxiv.org/abs/1905.05583</a>)</li> <li>assign longer max_len to answer</li></ul></li> <li>ensemble of the following models: <ul><li>LSTM + Universal Sentence Encoder</li> <li>BERT base uncased * 2</li> <li>BERT base cased</li> <li>BERT large uncased * 2</li> <li>BERT large cased * 2</li> <li>ALBERT base</li> <li>RoBERTa base</li> <li>GPT2 base</li> <li>XLNet base</li></ul></li> </ul> <h2>Model architectures</h2> <p>Surprisingly, stacking 2 Linear layers <strong>without</strong> activation performed better than a single Linear layer in some cases. (I found it by mistake. 
:P) Some of the following models have category embedding structure additionally.</p> <h3>LSTM model</h3> <p>You can check my LSTM code <a href="https://www.kaggle.com/sakami/google-quest-single-lstm?scriptVersionId=28487242">here</a>.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2F5ba7064f215c31d377bb991275d5fa5b%2F2020-02-11%2017.29.36.png?generation=1581409799179315&amp;alt=media" alt=""></p> <h3>BERT models</h3> <p>I used 2 types of model architectures.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2Ff5da89f1cff533e174ae24d1cc459da2%2F2020-02-11%2017.14.05.png?generation=1581409945426377&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2F80a15ecbc3f970aa7fad8308d01f2af1%2F2020-02-11%2017.14.17.png?generation=1581409964611182&amp;alt=media" alt=""></p> <h3>ALBERT model</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2F506270dad509f130a53c50d177bd4861%2F2020-02-11%2020.12.38.png?generation=1581419591299259&amp;alt=media" alt=""></p> <h3>RoBERTa model</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2Fd877c80ce341c7be39715eb3534627d9%2F2020-02-11%2017.14.26.png?generation=1581410441581392&amp;alt=media" alt=""></p> <h3>GPT2 model</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2Ff123dc23ada1495c68b5be43ccf0241a%2F2020-02-11%2017.14.44.png?generation=1581410465561097&amp;alt=media" alt=""></p> <h3>XLNet model</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2F483691fcbd3e72cb5a6053e5c8d4873a%2F2020-02-11%2017.14.55.png?generation=1581410490556290&amp;alt=media" alt=""></p> <h2>Training strategy</h2> <ul> <li>Min-Max target scaling <code> t_max = train_y.max(axis=0)[np.newaxis, :] t_min = train_y.min(axis=0)[np.newaxis, :] train_y = (train_y - t_min) / (t_max - t_min) </code></li> <li>Large weights for minor positive or negative samples <code> zero_inflated = (train_y &amp;gt; 0).mean(axis=0) &amp;lt; 0.1 positive_weighted = np.tile(zero_inflated, (len(train_y), 1)) positive_weighted *= (train_y &amp;gt; 0) one_inflated = (train_y &amp;lt; 1).mean(axis=0) &amp;lt; 0.1 negative_weighted = np.tile(one_inflated, (len(train_y), 1)) negative_weighted *= (train_y &amp;lt; 1) train_weights = np.where(positive_weighted + negative_weighted, 2., 1.) </code></li> <li>gelu_new activation for BERT-based models</li> <li>cosine warmup scheduler</li> <li>EMA</li> </ul> <h2>Post-processing</h2> <p>I just clipped predictions. The thresholds are decided by golden section search. This improve my score from 0.4434 to 0.4852. (CV score)</p> <p>before post-processing scores: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2Ff8ebe510f4885db659da10499a79daea%2Fnewplot.png?generation=1581416684489031&amp;alt=media" alt=""></p> <p>after post-processing scores: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2133818%2F04f51a001c9f734fed93003cfe8e935b%2Fnewplot%20(1" alt="">.png?generation=1581416716829926&amp;alt=media)</p> <p>``` class OptimizedRounder(BaseEstimator, TransformerMixin):</p> <pre><code>def __init__(self): self.threshold = [0., 1.] 
self.ab_start = [(0., 0.2), (0.8, 1.)] def fit(self, train_labels, train_preds): assert train_labels.shape == train_preds.shape assert train_labels.ndim == 1 self.best_score = self.score(train_labels, train_preds) self._golden_section_search(train_labels, train_preds, 0) # lower threshold score = self.score(train_labels, train_preds) if score &amp;gt; self.best_score + 1e-3: self.best_score = score else: self.threshold[0] = 0. self._golden_section_search(train_labels, train_preds, 1) # higher threshold score = self.score(train_labels, train_preds) if score &amp;gt; self.best_score + 1e-3: self.best_score = score else: self.threshold[1] = 1. def _golden_section_search(self, train_labels, train_preds, idx): # idx == 0 -&amp;gt; lower threshold search # idx == 1 -&amp;gt; higher threshold search golden1 = 0.618 golden2 = 1 - golden1 for _ in range(10): a, b = self.ab_start[idx] # calc losses self.threshold[idx] = a la = -self.score(train_labels, train_preds) self.threshold[idx] = b lb = -self.score(train_labels, train_preds) for _ in range(20): # choose value if la &amp;gt; lb: a = b - (b - a) * golden1 self.threshold[idx] = a la = -self.score(train_labels, train_preds) else: b = b - (b - a) * golden2 self.threshold[idx] = b lb = -self.score(train_labels, train_preds) def transform(self, preds): transformed = np.clip(preds, *self.threshold) if np.unique(transformed).size == 1: return preds return transformed def score(self, labels, preds): p = self.transform(preds) score = scipy.stats.spearmanr(labels, p)[0] return score </code></pre> <p>```</p> <h2>Blending strategy</h2> <p>I used TPE optimization, same method as <a href="https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/97471#562472">previous solution</a>.</p> <p>Blending weights: <code> test_preds = lstm_test_preds * 0.1145 + \ bert_uncased_test_preds * 0.0112 + \ bert_uncased_2_test_preds * 0.0911 + \ bert_cased_test_preds * 0.0446 + \ bert_large_uncased_test_preds * 0.1670 + \ bert_large_uncased_2_test_preds * 0.0487 + \ bert_large_cased_test_preds * 0.1094 + \ bert_large_cased_2_test_preds * 0.1369 + \ gpt2_test_preds * 0.0478 + \ albert_test_preds * 0.0120 + \ xlnet_test_preds * 0.1607 + \ roberta_test_preds * 0.0560 </code></p> <h2>What didn't work for me</h2> <ul> <li>Pre-training <ul><li>masked LM</li> <li>input-response prediction (<a href="https://arxiv.org/abs/1705.00652"></a><a href="https://arxiv.org/abs/1705.00652">https://arxiv.org/abs/1705.00652</a>)</li></ul></li> <li>Data augmentation <ul><li>inverse translation (<a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/48038"></a><a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/48038">https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/48038</a>)</li> <li>manifold mixup</li> <li>up-sample minor labels</li></ul></li> </ul> <p>My pleasure. ;)</p> <p>I used PyTorch. I'll share my code after cleaning it. :)</p> <p>Yes, we achieved the best case: both of us got a solo gold medal. :) Congrats, Khoi!!</p> <p>Thank you! I decided weights to maximize OOF score.</p> <p>Thank you!</p> <p>My blending method is almost same as <a href="https://github.com/sakami0000/kaggle_jigsaw/blob/master/compute_blending_weights.py">this</a>. The difference is that I didn't split OOF prediction this time.</p> <p>I used optimized rounder just once after blending.</p> <p>Thank you! You too, congrats CPMP. 
I didn't do anything special, I just used sequence bucketing and larger batch size than training.</p> <p>I used 5-folds for every model. Hmm... I didn't do anything special, I just used sequence bucketing. My XLNet-base (max length: 1024, batch size: 8) spends 26.5 seconds per fold.</p> <p>Thank you! I dynamically changed with each sentence, using <code>attention_mask</code> and <code>token_type_ids</code>.</p> <p>I'll share my code later. ;)</p>
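<p>Because sequence bucketing comes up above as the main inference speed-up, here is a generic sketch of the idea; the pad id, batching interface and variable names are my own choices, not the author's code:</p>

```
import numpy as np

def bucketed_batches(token_id_lists, batch_size, pad_id=0):
    # Sort samples by length, batch neighbours together, and pad each batch only
    # to its own longest sequence, which makes inference much faster.
    order = np.argsort([len(t) for t in token_id_lists])
    for start in range(0, len(order), batch_size):
        idx = order[start:start + batch_size]
        max_len = max(len(token_id_lists[i]) for i in idx)
        batch = np.full((len(idx), max_len), pad_id, dtype=np.int64)
        for row, i in enumerate(idx):
            batch[row, :len(token_id_lists[i])] = token_id_lists[i]
        yield idx, batch  # idx lets predictions be restored to the original order
```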
Google QUEST Q&A Labeling
41-th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1><strong>DATA PREPROCESSING</strong></h1> <p><strong>NER</strong>: we pretrained an NER model (bert-base) to replace code blocks with the special token [CODE] <strong>Reg.exp.</strong>: we cleaned up LaTeX snippets and replaced them with the [MATH] token <strong>Crop strategy</strong>: from both sides (we cut the middle part if the sequence was longer than 450 tokens) <strong>Max. seq. lengths</strong>: 450</p> <h1><strong>MODEL PARAMETERS</strong></h1> <p><strong>Model architecture</strong>: roberta-large <strong>Pretrained</strong>: Yes (from the transformers library) <strong>Model design</strong>: siamese bi-encoder (dot-product of the question-answer pair -&gt; sigmoid) <strong>Pooling strategy</strong>: [CLS] token</p> <h1><strong>TRAINING PARAMETERS</strong></h1> <p><strong>Optimizer</strong>: AdamW (from the transformers library) <strong>Max grad. norm</strong>: 5 <strong>Epochs</strong>: 5 <strong>Batch size</strong>: 8 (1 * 8 accum. steps) <strong>LR</strong>: 2.5e-05 <strong>LR-scheduler</strong>: linear <strong>Warmup steps</strong>: 50 <strong>Seed</strong>: 228 :)</p> <h1><strong>ENSEMBLE PARAMETERS</strong></h1> <p><strong>Blend of</strong>: 7-fold models <strong>Fold strategy</strong>: GroupKFold <strong>Blend strategy</strong>: probabilities averaging</p> <h1><strong>POST-PROCESSING</strong></h1> <p>For each target: 1. Sort values (ascending) 2. Initialize an anchor value (the smallest one) 3. Iterate over each value and calculate the difference between it and the anchor 4. If the difference &gt; TARGET-EPSILON, then collapse all values between the anchor and the current value into one bin 5. Update the anchor (set it to the current value) 6. Repeat from 3 until the end. Finally, normalize the bins to the (0, 1) range</p> <p>The TARGET-EPSILON for each target was estimated via out-of-fold predictions (from 7 folds)</p> <h1><strong>OTHER TRICKS</strong></h1> <p><strong>SWA</strong>: implemented, but did not have time to submit <strong>Categorical features</strong>: were concatenated with the bi-encoder output. No improvement <strong>Additional data</strong>: a little additional data was added to fill up some flawed targets. Did not help <strong>Combine crop strategies</strong>: different crop strategies (from the left, right, both and middle parts) were combined as test-time augmentations. Did not help <strong>Different pooling strategies</strong>: average pooling, average pooling of the N last layers. [CLS] was the best one <strong>Freezing</strong>: differential, full, only n layers, etc. No freezing was the best</p> <h1><strong>Reproducibility</strong></h1> <p>All code to reproduce our experiments can be found here: <a href="https://github.com/alexeykarnachev/kaggle_google_qa_labeling">https://github.com/alexeykarnachev/kaggle_google_qa_labeling</a></p>
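<p>The per-target binning above can be sketched as follows, assuming a 1-D numpy array of predictions and a pre-estimated TARGET-EPSILON; the normalisation step is a simplified reading of "normalize bins to (0, 1) range":</p>

```
import numpy as np

def collapse_bins(preds, eps):
    # Sort values, start an anchor at the smallest one, and put every value within
    # `eps` of the current anchor into the same bin; a larger gap starts a new bin.
    order = np.argsort(preds)
    binned = np.empty_like(preds, dtype=float)
    anchor = preds[order[0]]
    bin_id = 0
    for i in order:
        if preds[i] - anchor > eps:   # start a new bin
            anchor = preds[i]
            bin_id += 1
        binned[i] = bin_id
    return binned / max(bin_id, 1)    # normalise bin indices to the (0, 1) range
```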
Google QUEST Q&A Labeling
45th Place Simple Solution - 4 model average
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks everyone for posting the elegant solutions. Thanks to the organizers and my team mates @iru538 @tarandro @mobassir and @shahules </p> <p>I had joined the competition early and then left it for PKU. After failing to secure a medal spot in PKU, I rejoined the competition and teamed up with Shahul and Mobassir. Both of them had by that time already trained around 13+ vanilla (public kernel) TF BERT + XLNet models and were predicting in the range of .385 to .387, but we were not able to crack the .400 mark. Then I applied the below post processing (collected from one of the public kernels): </p> <p>``` test_preds = final_predictions y_train = df_train[output_categories].values for column_ind in range(30): curr_column = y_train[:, column_ind] values = np.unique(curr_column) map_quantiles = [] for val in values: occurrence = np.mean(curr_column == val) cummulative = sum(el['occurrence'] for el in map_quantiles) map_quantiles.append({'value': val, 'occurrence': occurrence, 'cummulative': cummulative})</p> <pre><code>for quant in map_quantiles: pred_col = test_preds[:, column_ind] q1, q2 = np.quantile(pred_col, quant['cummulative']), np.quantile(pred_col, min(quant['cummulative'] + quant['occurrence'], 1)) pred_col[(pred_col &gt;= q1) &amp; (pred_col &lt;= q2)] = quant['value'] test_preds[:, column_ind] = pred_col </code></pre> <p>``` We crossed .401 for the first time and were able to extend it further with XLNet and RoBERTa etc. till .417. </p> <p>In parallel, Toru joined our team and had a great single pytorch model with a .409 score without post processing. Since I was more comfortable with pytorch, I used that model as a base and started training the GPT2 and other pytorch BERT models. This took us to .410 based on pytorch models. </p> <p>After Alexis joined, he pointed us to a post processing trick used in the Data Science Bowl 2018 1st place solution. "Data Science is Important- Bert says". Here is our OptimizedRounder:</p> <p>``` from functools import partial import scipy as sp</p> <p>class OptimizedRounder(object): def __init__(self, correlation): self.correlation = correlation self.coef_ = 0 self.score = 0</p> <pre><code>def _kappa_loss(self, coef, X, y): a= X.copy() b=y.copy() X_p = pd.cut(a, [-np.inf] + list(np.sort(coef)) + [np.inf], labels = [0,1,2]) a[X_p == 0] = 0 a[X_p == 2] = 1 #print("validation score = {}".format(spearmanr(a, b).correlation)) if spearmanr(a, b).correlation &lt; self.correlation: self.score = spearmanr(a, b).correlation return - spearmanr(a, b).correlation + (self.correlation - spearmanr(a, b).correlation + 1)**10 else: self.score = spearmanr(a, b).correlation return - spearmanr(a, b).correlation def fit(self, X, y,coef_ini): loss_partial = partial(self._kappa_loss, X=X, y=y) initial_coef = coef_ini self.coef_ = sp.optimize.minimize(loss_partial, initial_coef, method='nelder-mead') def coefficients(self): return self.coef_['x'] def score_fc(self): return self.score </code></pre> <p>``` Once Alexis was onboarded he started applying post processing to our pytorch models and it gave a big boost. So we scrapped the TF models (as they were taking more GPU as well). 
</p> <p>Our final selected solution is an ensemble of: 1. BERT base, 5 folds (separate question and answer models; we tried three different architectures) + statistical features (these boosted the score a lot); 2. Sequence model with statistical features; 3. DistilBERT-embedding sequence model; 4. DistilBERT-embedding LSTM model.</p> <p>We got a slightly better private score using GPT2 in place of the sequence model, but we didn't select that as our final submission. Our highest-scoring kernel has: 1. BERT base, 5 folds (separate question and answer models, three different architectures); 2. GPT2 base (10 folds, separate question and answer models); 3. Simple LSTM model; 4. Sequence model with USE features.</p> <p>Here is a kernel with the inference code, which gives some idea of the architectures: <a href="https://www.kaggle.com/phoenix9032/threshold-base-4toru?scriptVersionId=28450007">https://www.kaggle.com/phoenix9032/threshold-base-4toru?scriptVersionId=28450007</a></p> <p>We did not use any tricks at training time.</p> <p>This was originally posted here <a href="/cdeotte">@cdeotte</a> </p> <p><a href="https://www.kaggle.com/naveenasaithambi/optimizedrounder-improved">https://www.kaggle.com/naveenasaithambi/optimizedrounder-improved</a></p>
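<p>To make the OptimizedRounder concrete, here is a minimal usage sketch (assuming the class above is already defined; the toy arrays, the per-column loop and the initial coefficients <code>[0.1, 0.9]</code> are illustrative choices, not the exact code we used): thresholds are fitted per column on out-of-fold predictions and then applied to the test predictions.</p> <pre><code>import numpy as np
import pandas as pd
from scipy.stats import spearmanr

# Toy data so the snippet runs on its own; in practice these come from your OOF setup.
rng = np.random.default_rng(0)
y_train = rng.choice([0.0, 1/3, 2/3, 1.0], size=(1000, 30))
oof_preds = np.clip(y_train + rng.normal(0, 0.2, y_train.shape), 0, 1)
test_preds = np.clip(y_train + rng.normal(0, 0.2, y_train.shape), 0, 1)

for column_ind in range(y_train.shape[1]):
    oof_col, y_col = oof_preds[:, column_ind], y_train[:, column_ind]
    baseline = spearmanr(oof_col, y_col).correlation

    # Search two thresholds: everything below the lower one becomes 0,
    # everything above the upper one becomes 1.
    rounder = OptimizedRounder(baseline)
    rounder.fit(oof_col, y_col, coef_ini=[0.1, 0.9])

    if rounder.score_fc() &gt; baseline:
        lo, hi = np.sort(rounder.coefficients())
        col = test_preds[:, column_ind]
        bins = pd.cut(col, [-np.inf, lo, hi, np.inf], labels=[0, 1, 2])
        col[bins == 0] = 0
        col[bins == 2] = 1
        test_preds[:, column_ind] = col
</code></pre>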
Google QUEST Q&A Labeling
47th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!</p> <p>Thanks ratthachat! Let's get a gold medal next time!</p> <p>Thanks!</p> <p>Thank you to Google as the host and to the Kaggle team! I really learned many NLP techniques, especially BERT, from this competition.</p> <h1>Key points of the final model</h1> <ul> <li>Use the first and last part of the text if its length exceeds max_seq_length (a small sketch is given at the end of this post)</li> <li>Averaging 4 BERT base-uncased models</li> <li>Postprocessing: fit the target distribution with train data (for a subset of target columns) <ul><li>Details are below.</li></ul></li> <li>Concatenate pool_output &amp; sequence_output from bert_layer for GlobalAveragePooling1D</li> <li>10 folds with MultilabelStratifiedKFold (thanks <a href="/ratthachat">@ratthachat</a> !)</li> <li>Strong heart for solo participation</li> </ul> <h1>Didn’t work for me</h1> <ul> <li>Pre-training with stackoverflow data (150,000 sentences)</li> <li>Multi-sample dropout</li> <li>Other models <ul><li>Roberta</li> <li>Albert</li> <li>XLNet</li></ul></li> <li>Concatenating a question-only output &amp; an answer-only model</li> <li>Concatenating a category MLP with the BERT model</li> <li>LSTM head instead of Dense with the BERT model</li> <li>Freezing half of the BertLayers to reduce model complexity</li> <li>Skipping half of the BertLayers to reduce model complexity</li> <li>USE + MLP</li> <li>LSTM model with gensim embeddings</li> <li>Custom losses <ul><li>BCE &amp; MSE</li> <li>focal loss</li></ul></li> <li>Word count feature</li> <li>Concatenating title and question_body as one block (removing the ["SEP"] between them)</li> <li>Up-sampling for imbalanced target columns</li> </ul> <h1>My postprocessing class</h1> <pre><code>class OptimPreds(object):
    def __init__(self, df_train):
        self.score_range_dict = {}
        for i, c in enumerate(df_train.columns[11:]):
            cnt = df_train[c].value_counts(normalize=True).sort_index()
            self.score_range_dict[i] = [cnt.index.values.tolist(), cnt.values.tolist()]

    def predict(self, preds, i):
        return pd.cut(rank_average(preds),
                      [-np.inf] + np.cumsum(self.score_range_dict[i][1])[:-1].tolist() + [np.inf],
                      labels=self.score_range_dict[i][0])

def optim_predict(pred, do_round=True, target=[2, 5, 7, 9, 11, 12, 13, 14, 15, 16, 19]):
    for i in range(pred.shape[1]):
        if i in target:
            pred[:, i] = optim.predict(pred[:, i], i)
    return pred
</code></pre> <p>Thank you for your question, <a href="/guchio3">@guchio3</a> ! The following are my answers to your questions :D</p> <p><code> 1. &gt; Use first and last part of text if length exceed max_seq_length </code><br> Q. How did you set the ratio for the first and last? (e.g. 1 : 1, 2 : 1, etc.)<br> -&gt; I tried various ratios; however, 1 : 1 was the best for me.</p> <p><code> 2. &gt; Averaging 4 BERT base-uncased model </code><br> Q. What is the difference between the four models?<br> -&gt; I used the models described in the following figure. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F291723%2Fff85f6338fa415af7375021586f8704b%2F2020-02-19%2022.21.16.png?generation=1582119385493091&amp;alt=media" alt="fig"></p> <p><code> 3. &gt; Fit target distribution with train data (a part of target columns) </code><br> Q. How did you choose the part of the target columns? Did you use oof?<br> -&gt; Yes, I tried all of the columns and then picked the columns that improved with the post-processing.</p> <p><code> 4. &gt; Freeze half of BertLayer for reducing model complexity </code><br> Q. Which BertLayers did you freeze?<br> -&gt; I set trainable to False for the first half of the list of TFBertLayer classes.</p>
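<p>The head-and-tail truncation from the key points above can be sketched as follows; this is only an illustration (the <code>tokenizer</code> is assumed to be a HuggingFace-style tokenizer, and the 1 : 1 split follows the answer to question 1):</p> <pre><code>def head_tail_truncate(text, tokenizer, max_tokens=512):
    """Keep the first and last tokens of an over-long text (1 : 1 split)."""
    tokens = tokenizer.tokenize(text)
    if len(tokens) &lt;= max_tokens:
        return tokens
    head = max_tokens // 2
    tail = max_tokens - head
    return tokens[:head] + tokens[-tail:]
</code></pre>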
Google QUEST Q&A Labeling
4th place solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Good question. We tried a model with shared BERT weights too, but this non-shared model worked better in this case.</p> <p>Thank you!!! You can pip install both googletrans and textblob; this time I used TextBlob!</p> <p>Thanks :) Yes, I then applied the augmentation to 20% of the samples at data-loading time to make it work.</p> <p>I don't remember the score without pp precisely, but it was something like LB 0.405 -&gt; 0.445.</p> <p>I referred to this&nbsp;↓ <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/48038">https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/48038</a> It was just about +0.004-0.005 on LB.</p> <p>Thank you!!</p> <p>Thanks :)</p> <p>The code is not only mine, so if it's possible after talking with my teammates, I can.</p> <p>BERT was better; we had a better score after tuning the lr.</p> <p>Thanks! This time it wasn't, but let's meet at the next competition!😁</p> <p>Thanks :)</p> <p>*Validation is MultiStratifiedKFold for both models.</p> <p>Yes, right</p> <p>First of all, I want to say thanks to all of my teammates :)</p> <h2>Our final submission is an ensemble of 2 models</h2> <h3>About the Model</h3> <p>Model 1 takes 3 texts as input and has 3 BERTs in it:</p> <ul> <li>BERT 1: input is question_title + question_body; it predicts only the columns relevant to the question (the first 21 columns).</li> <li>BERT 2: input is question_title + answer; it predicts only the columns relevant to the answer (the last 9 columns).</li> <li>BERT 3: input is question_body + answer; it predicts all columns.</li> </ul> <p>On top there is one linear layer whose input is concat(bert1_out, bert2_out, bert3_out). We calculate BCE losses: loss1 for the BERT 1 prediction &amp; columns[:21], loss2 for the BERT 2 prediction &amp; columns[-9:], loss3 for the BERT 3 prediction &amp; all columns, and loss4 for the last linear layer prediction &amp; all columns, and backward each loss.</p> <p>Model 2 is just the XLNet version of it.</p> <h3>About Training</h3> <p>-&gt; Spanish -&gt; English back-translation augmentation</p> <p>Flexible length-ratio handling for encoding 2 texts (e.g. if len(text1) &lt; max_len1 &amp; len(text2) &gt; max_len2, then max_len2 = max_len2 + (max_len1 - len(text1))); a small sketch is given at the end of this post.</p> <h3>About Post Processing</h3> <p>Baba shared the idea here ↓ <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129831">https://www.kaggle.com/c/google-quest-challenge/discussion/129831</a></p> <p>This competition was really fun, thanks all, and let's compete in other competitions again!!!</p> <p>Thanks, it's TextBlob. You can pip install it.</p> <p>Thanks!!</p>
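<p>A small sketch of the flexible length handling mentioned in the training section above (function and variable names are illustrative, not the exact implementation): if the first text does not use its full token budget, the unused budget is handed to the second text, and vice versa.</p> <pre><code># Split a total token budget between two already-tokenized texts.
def allocate_lengths(len1, len2, max_len1, max_len2):
    if len1 &lt; max_len1 and len2 &gt; max_len2:
        max_len2 += max_len1 - len1
        max_len1 = len1
    elif len2 &lt; max_len2 and len1 &gt; max_len1:
        max_len1 += max_len2 - len2
        max_len2 = len2
    return min(len1, max_len1), min(len2, max_len2)

# Example: budget of 200 + 300 tokens, but the question only needs 120.
print(allocate_lengths(120, 450, 200, 300))  # -&gt; (120, 380)
</code></pre>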
Google QUEST Q&A Labeling
5th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Calculate the thresholds from the training set: if value &lt; threshold1, reset the value to 0, and if value &gt; threshold2, reset the value to 1.</p> <p>thank you, brother, and thanks to my teammates</p> <p>thank you, my brother</p> <p>First of all I want to thank my teammates here. I will briefly introduce our solution.</p> <h2>Models</h2> <p>1. Model structure: we designed different model structures. We mainly refer to the solution of the CCF internet sentiment analysis competition, concatenating different CLS embeddings. Here is the link: <a href="https://github.com/cxy229/BDCI2019-SENTIMENT-CLASSIFICATION">BDCI2019-SENTIMENT-CLASSIFICATION</a> <br> 2. Through analysis we found that the 30 labels split into two groups: question-related evaluations and answer-related evaluations. To help the model learn better, we designed a Q model to handle the question-related labels and an A model to handle the answer-related labels. This works better than a single QA model. <br> 3. Model comparison: roberta base &gt; roberta large &gt; xlnet base &gt; bert base &gt; t5 base. </p> <h2>Post-processing</h2> <ol> <li>After analysing the evaluation metric and the competition data, we used a 0/1 resetting approach (described at the top of this post; a small sketch is given at the end). It improved the LB by 0.05 or more. </li> </ol> <h2>Features</h2> <p>1. We wanted our model to learn features beyond the text alone, so we added host and category embedding features and other statistical features. This improved both CV and LB by about 0.005. </p> <h2>Text cleaning</h2> <p>1. We also did text cleaning to remove stop words and some symbols; it improved the score by about 0.002. </p> <h2>Stacking</h2> <p>1. Our best private model scored 0.42787, but we didn't select it. It is a stack of roberta large, roberta base and xlnet base: <br> <code>blend.loc[:,targets] = roberta_large_oof_test.loc[:,targets].values*0.4 + 0.3*roberta_base_oof_test.loc[:,targets].values + xlnet_base_oof_test.loc[:,targets].values*0.3</code> </p> <p>Stacking improved both CV and LB by about 0.02. It helped a lot.</p> <p>yes, based on oof train</p> <p>some statistical features like word count, and answer word count vs question word count, but they helped only a little.</p> <p>based on oof train</p> <p>nice work, you have shared a lot of valuable things in a lot of competitions and I have learned a lot from you <a href="/cdeotte">@cdeotte</a> </p>
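<p>A minimal sketch of the 0/1 resetting described at the top of this post (deriving the thresholds from the share of exact 0s and 1s in the training labels is an illustrative choice, not necessarily the exact rule used):</p> <pre><code>import numpy as np

def reset_01(preds, y_train):
    """Clamp per-column predictions to exact 0/1, matching the share of
    exact 0s and 1s seen in the training labels (illustrative)."""
    out = preds.copy()
    for i in range(preds.shape[1]):
        p0 = np.mean(y_train[:, i] == 0)   # fraction of hard 0s in train
        p1 = np.mean(y_train[:, i] == 1)   # fraction of hard 1s in train
        if p0 &gt; 0:
            threshold1 = np.quantile(out[:, i], p0)
            out[out[:, i] &lt; threshold1, i] = 0
        if p1 &gt; 0:
            threshold2 = np.quantile(out[:, i], 1 - p1)
            out[out[:, i] &gt; threshold2, i] = 1
    return out
</code></pre>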
Google QUEST Q&A Labeling
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Not sure how you got that impression, it definitely does contain the training code.</p> <p>Thank you Kaggle and hosts for providing us with this challenging competition and congratulations to all winners! Also I'd like to thank my teammates <a href="/aerdem4">@aerdem4</a>, <a href="/zhesun">@zhesun</a> and <a href="/jingqinnl">@jingqinnl</a> for their hard work and great insights. This was my first time teaming up and definitely made the competition process all the more enjoyable for me.</p> <p>In short, our final submission is based on a weighted blend of 4 Siamese/double transformer architectures and one USE + feature engineering model coupled with a rounding based post-processing approach.</p> <h3>1. Post-processing/Ensembling</h3> <p>I'll start of describing our post-processing strategy, given that as for many other teams this had a massive impact on our performance. In combination with weighted ensembling it improved our 10 fold GroupKFold CV by ~0.05. The general idea is based on rounding predictions downwards to a multiple of some fraction <code>1/d</code>: <code> def scale(x, d): if d: return (x//(1/d))/d return x </code> So if <code>d=4</code> and <code>x = [0.12, 0.3, 0.31, 0.24, 0.7]</code> these values will get rounded to <code>[0.0, 0.25, 0.25, 0.0, 0.5]</code>. For each target column we did a grid search for values of <code>d</code> in <code>[4, 8, 16, 32, 64, None]</code>. </p> <p>In our ensembling we exploited this technique even further, applying the rounding first to individual model predictions and again after taking a linear combination of model predictions. In doing so we did find that using a separate rounding parameter for each model, OOF score improvements would no longer translate to LB. We addressed this by reducing the number of rounding parameters using the same <code>d_local</code> across all models: <code> y_temp = 0 for pred, w in zip(model_preds, ws): y_temp += w * scale(pred, d_local) / sum(ws) y_temp = scale(y_temp, d_global) </code> All ensembling parameters - 2 rounding parameters and 5 model weights - were set using a small grid search optimising the spearman rho metric on OOFs while ignoring question targets for rows with duplicate questions. For all these smart stacking and post-processing tricks the credit goes to <a href="/aerdem4">@aerdem4</a>. </p> <h3>2. Models</h3> <p>Our final ensemble consists of: - Siamese Roberta base (CV 0.416) - Siamese XLNet base (CV 0.414) - Double Albert base V2 (CV 0.413) - Siamese Bert base uncased (CV 0.410) - USE + Feature Engineering model (CV 0.393)</p> <p>Listed CV scores are 10 fold GroupKFold w/o post-processing. Although, the transformer models scored significantly higher in terms of CV, the USE + Feature Engineering still contributed significantly in the stack (about 0.005 boost on CV and LB).</p> <p>All transformer models were implemented using Pytorch and used the pretrained models from the huggingface Transformers library as backbones. Transformer models were trained locally on one RTX 2080Ti. 
The USE + feature engineering model was implemented with Keras and trained using Kaggle kernels (code available here <a href="https://www.kaggle.com/aerdem4/qa-use-save-model-weights">https://www.kaggle.com/aerdem4/qa-use-save-model-weights</a>. As this model was developed by my teammates I will rely on them to provide more details regarding features, architecture and training in the comment section if needed. </p> <p>Apart from the pretrained backbones all transformer architectures were very similar: - <code>question_title</code> + <code>question_body</code> and <code>question_title</code> + <code>answer</code> are fed separately as input to a transformer. As for other top teams, this was easily the biggest difference maker in terms of architecture, adding up to 0.01 to CV scores. - Average pooling. This improved CV for some models (~ 0.002), but was similar to CLS output for other models. - Custom 2 layer deep regression head also taking one hot encoded category feature as input. Improved CV ~0.005 relative to simpler linear regression heads.</p> <p>Only difference between the 4 transformer architectures is that Roberta, XLNet and Bert all used a Siamese design - i.e. the same transformer (shared weights) is used for both question and answer inputs. For Albert using a separate transformer (non-shared weights) worked better. </p> <h3>3. Training</h3> <p>The training followed the exact same format for all four transformers and consisted of 2 stages.</p> First stage: <ul> <li>Train for 4 epochs with huggingface AdamW optimiser.</li> <li>Binary cross-entropy loss.</li> <li>One-cycle LR schedule. Uses cosine warmup, followed by cosine decay, whilst having a mirrored schedule for momentum (i.e. cosine decay followed by cosine warmup). </li> <li>Max LR of 1e-3 for the regression head, max LR of 1e-5 for transformer backbones.</li> <li>Accumulated batch size of 8</li> </ul> Second stage: <p>Freeze transformer backbone and fine-tune the regression head for an additional 5 epochs with constant LR of 1e-5. Added about 0.002 to CV for most models.</p> <h3>4. Pre-processing</h3> <p>No special pre-processing on text inputs, just the default model specific tokenisers provided by huggingface. For target variables we did find a trick. First rank transform and follow up with min-max scaling. This made it so that target values were much more evenly distributed between 0 and 1, which played well with BCE loss. Gave 0.003 - 0.005 boost on CV.</p> <p>Code for training the transformer models and post-processing/ensembling is available here: <a href="https://github.com/robinniesert/kaggle-google-quest">https://github.com/robinniesert/kaggle-google-quest</a></p> <p>Thanks <a href="/sheriytm">@sheriytm</a>. Training code for transformer models can be found at the github repo linked at the bottom of the post. For training code for the USE + FE model follow the link to Ahmet's kernel.</p> <p>Thank you <a href="/bibek777">@bibek777</a> . I've made the inference kernel public for you here: <a href="https://www.kaggle.com/robin111/bert-use-fe-stacking-full-pipeline-v2">https://www.kaggle.com/robin111/bert-use-fe-stacking-full-pipeline-v2</a>. The code isn't as clean as on the github repo, but I hope you're still able to make some sense of it.</p>
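<p>As an illustration of the per-column grid search over the rounding parameter <code>d</code> described in the post-processing section above (the <code>scale</code> function is the one from this write-up; <code>oof_preds</code> and <code>y_true</code> are assumed OOF prediction and label arrays of shape [n_samples, 30]):</p> <pre><code>import numpy as np
from scipy.stats import spearmanr

def scale(x, d):
    if d:
        return (x // (1 / d)) / d
    return x

best_d = []
for col in range(y_true.shape[1]):
    scores = {d: np.nan_to_num(spearmanr(scale(oof_preds[:, col], d),
                                         y_true[:, col]).correlation)
              for d in [4, 8, 16, 32, 64, None]}
    best_d.append(max(scores, key=scores.get))   # best rounding denominator per column
</code></pre>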
Google QUEST Q&A Labeling
77th place post-processing method: DBSCAN
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!! I also thought about using <code>numpy.clip</code> but that meant having to find the cut-off values. I decided to go for DBSCAN as it requires much less tuning, and it gave me a great boost in score. </p> <p>Thanks to all of you who shared your work and ideas during the competition!</p> <p>I started this competition late and without any knowledge of BERT or the current transformer architectures. My goal was just to learn about BERT, so I'm very happy to be in the silver zone. </p> <h2>Base models</h2> <p>My base models were just variants of what was shared in the public kernels:</p> <ol> <li><code>bert-large</code> trained over <code>question_title</code> + <code>question_body</code> + <code>answer</code>.</li> <li>Two <code>bert-base</code> trained over <code>question_title</code>+<code>question_body</code> and <code>answer</code>, to predict answer and question targets separately.</li> <li>Three <code>bert-base</code> trained over <code>question_title</code>+<code>question_body</code>, <code>question_body</code>+<code>answer</code> and <code>answer</code>, to predict all targets jointly. </li> </ol> <p>I used 5-fold GroupKFold grouped by <code>question_title</code>. For the training process I first trained the regression head with BERT fixed, and then trained the whole network for 3 epochs.</p> <p>My final submission was just the average of these three models.</p> <h2>Post-processing</h2> <p>Two days before the competition end, I was at ~800th position on the public leaderboard with a ~0.39 score and no chance of entering the medal zone. Thanks to @corochann in this <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/118724">discussion</a> I found out why I wasn't able to go up the leaderboard: even when my models were pretty good at predicting scores, the only thing that matters for the evaluation metric is the order. 
<strong>The solution</strong>: find groups of target predictions that are very close and set them to the same value, i.e., <strong>cluster</strong> nearby target predictions, and <a href="https://en.wikipedia.org/wiki/DBSCAN">DBSCAN</a> is pretty good at this task.</p> <blockquote> <p>DBSCAN is a density-based clustering non-parametric algorithm: given a set of points in some space, it groups together points that are closely packed together (points with many nearby neighbors), marking as outliers points that lie alone in low-density regions (whose nearest neighbors are too far away).</p> </blockquote> <p>This algorithm requires two parameters: <code>eps</code>, the maximum distance for two points to be considered neighbors (and be assigned to the same cluster), and <code>min_samples</code>, the number of samples in a neighborhood for a point to be considered as a core point.</p> <p>Thus, my post-processing routine was:</p> <pre><code>import numpy as np
from sklearn.cluster import DBSCAN

def cluster_predictions(submission, q_value=0.95):
    submission = submission.copy(deep=True)
    for column in submission.columns[1:]:
        print(column)
        max_dist = (submission[column]
                    .sort_values()
                    .diff()
                    .quantile(q_value))
        print(max_dist)
        if max_dist == 0:
            continue
        X = submission[column].values.reshape((-1, 1))
        clustering = DBSCAN(eps=max_dist, min_samples=2).fit(X)
        print(np.unique(clustering.labels_))
        for cluster_id in np.unique(clustering.labels_):
            if cluster_id == -1:
                continue
            cluster_idx = np.argwhere(clustering.labels_ == cluster_id).ravel()
            cluster_median = np.median(submission.loc[cluster_idx, column])
            submission.loc[cluster_idx, column] = cluster_median
    return submission
</code></pre> <p>For each target column, I compute the differences between the sorted predictions and set <code>eps</code> to a high quantile of these differences. I tuned <code>q_value</code> through the 5-fold GroupKFold predictions and found 0.95 to be a good value. The post-processing gave me a boost of <code>0.39X -&gt; 0.42X</code> on the public LB and <code>0.37X -&gt; 0.39X</code> on the private LB. </p> <p>With more time and a better ensemble, I think this approach could have achieved a much better correlation on the private leaderboard. </p>
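<p>Since <code>q_value</code> was tuned on the out-of-fold predictions, the tuning loop can be sketched like this (an illustration only; <code>oof</code> is assumed to have the same layout as the submission file, an id column followed by the 30 targets, and <code>y_true</code> holds the corresponding labels):</p> <pre><code>import numpy as np
from scipy.stats import spearmanr

def mean_spearman(sub_df, y_true):
    # average column-wise Spearman's rho, mirroring the competition metric
    return np.mean([spearmanr(sub_df[c], y_true[c]).correlation
                    for c in sub_df.columns[1:]])

for q in [0.90, 0.93, 0.95, 0.97]:
    clustered = cluster_predictions(oof, q_value=q)
    print(q, mean_spearman(clustered, y_true))
</code></pre>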
Google QUEST Q&A Labeling
79th place - Meta Feature Analysis
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Massoud. It's great to be back playing at Kaggle.</p> <p>I just use the same 5-Folds for every experiment to compare CV scores. The rest of the randomness, I don't care about.</p> <p>I did everything in TensorFlow. I'm training a XLNet model now. I'm gonna ensemble Bert, Roberta, and XLNet and see how well that does. It looks like ensembling different Berts was the way to go. (And then add post process and meta feature updates).</p> <p>Thank you everyone for sharing your great solutions. I'm learning so much!</p> <p>I had only a few days to work on this competition and I didn't know anything about NLP. So I spent a few days reading about BERT and a few days exploring meta features and analyzing the Spearman's rank correlation metric. </p> <p>I learned a lot from Akensert's great public kernel <a href="https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer">here</a>. If you start with that and incorporate meta features and apply a Spearman rank transformation, you can achieve one position short of a Silver medal a.k.a The Best Bronze Solution :-)</p> <h1>Meta Features</h1> <p>Consider the following meta features * <code>question_user_name</code> * <code>question_title_word_count</code>, <code>question_title_first_word</code>, <code>question_title_last_word</code> * <code>question_body_word_count</code>, <code>question_body_first_word</code>, <code>question_body_last_word</code> * <code>answer_user_name</code> * <code>answer_word_count</code>, <code>answer_first_word</code>, <code>answer_last_word</code> * <code>category</code>, <code>host</code> * frequency encode all the above</p> <p>If we build LGBM models for each target using these 26 meta features and display the top 3 feature importances, we discover some interesting relationships. Above each image below is the target name and LGBM Spearman's rank correlation validation score:</p> <h2>Question Body Critical</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F273250fcf82c1639aa2504e0ab874d50%2Fbody_crit.png?generation=1581440405894463&amp;alt=media" alt=""></p> <p>Question body word count helps predict <code>question_body_critical</code>. That makes sense.</p> <h2>Question Interesting Others</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fa9828b713cfe7aaf7bd3c4e6649ed813%2Finteresting_others.png?generation=1581440530534709&amp;alt=media" alt=""></p> <p>Meta feature <code>host</code> helps predict whether a question is interesting to others. It's amazing that these meta features can achieve a score of 0.314 which is better than public BERT notebooks! 
Let's see how good <code>host</code> is by itself:</p> <pre><code>train, validate = train_test_split(train, test_size=0.2, random_state=42)
mm = train.groupby('host')['question_interestingness_others'].mean().to_dict()
validate['cat'] = validate.host.map(mm)
spearmanr(validate.question_interestingness_others,validate.cat).correlation
# This prints 0.311
</code></pre> <p>This code shows that we can build a model from just the feature host and achieve a validation score of 0.311 predicting <code>question_interestingness_others</code> ! Wow</p> <h2>Question Interesting Self</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fee6342f07a2ee54aa08cc170746098cc%2Finteresting_self.png?generation=1581441808653553&amp;alt=media" alt=""></p> <p>Once again, the meta feature <code>host</code> does a great job predicting <code>question_interesting_self</code>. By itself it scores 0.442, which is as good as public notebook BERTs!</p> <h2>Question Type Spelling</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F520ced1e879a9661265024c4b9dda477%2Fspelling.png?generation=1581441968092095&amp;alt=media" alt=""></p> <p>LGBM says that the most important feature to predict <code>question_type_spelling</code> is <code>answer_word_count</code>. From public kernels, we know that <code>host=='ell.stackexchange.com'</code> and <code>host=='english.stackexchange.com'</code> are very predictive too. (LGBM didn't find this, which shows that LGBM is not optimizing Spearman's rank correlation.) Using <code>host</code> and <code>answer_word_count</code>, we can predict <code>question_type_spelling</code> with score 0.300 ! Wow</p> <pre><code>validate['h_type'] = validate.host=='ell.stackexchange.com'
validate['h_type'] |= validate.host=='english.stackexchange.com'
validate['h_type'] &amp;= validate.answer.map(lambda x: len(x.split())) &gt; 95
spearmanr(validate.question_type_spelling,validate.h_type).correlation
# This prints 0.300
</code></pre> <h2>Question Type Definition</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F0dfbb4bf8645dd9f043f7a61f751fcfa%2Fdefinition.png?generation=1581443385397518&amp;alt=media" alt=""></p> <p>Question first word helps predict whether we have <code>question_type_definition</code>.</p> <h2>Question Well Written</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F69285f93b55a135fd87ba01f1d264754%2Fwell_written.png?generation=1581442904056045&amp;alt=media" alt=""></p> <p>Question first word frequency helps predict whether a question is well written. That's interesting. If your writing is similar to how other people write, it is deemed well written.</p> <h2>Answer Level of Information</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fd2b111e2e208ff739dbce612b5444f5d%2Finfo.png?generation=1581443008426365&amp;alt=media" alt=""></p> <p>Answer word count helps predict <code>answer_level_of_information</code>. That makes sense. 
The code below shows that this single feature alone can predict better than public BERT notebooks!</p> <pre><code>validate['a_ct'] = validate.answer.map(lambda x: len(x.split()))
spearmanr(validate.answer_level_of_information,validate.a_ct).correlation
# This prints 0.392
</code></pre> <h2>Answer Plausible</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fbad5aaee8d43f49f3a9282c3381ae25f%2Fplausible.png?generation=1581443262147882&amp;alt=media" alt=""></p> <p>Answer user name frequency helps predict whether an answer is plausible. This makes sense. Active users are probably more plausible.</p> <h1>Meta Features Applied</h1> <p>Now that we know the important meta features, we can either incorporate them into our BERT model inputs, or we can ensemble them with BERT's outputs. Using my OOF, I ensembled meta features with my BERT outputs. For example, the two best CV/LB increases came from replacing BERT's outputs for <code>question_type_spelling</code> with </p> <pre><code>test['a_ct'] = test.answer.map(lambda x: len(x.split()))
test['h_type'] = test.host=='ell.stackexchange.com'
test['h_type'] |= test.host=='english.stackexchange.com'
test['h_type'] &amp;= test.a_ct &gt; 95
sub.question_type_spelling = test.h_type.values
</code></pre> <p>And updating <code>answer_level_of_information</code> with:</p> <pre><code>sub.answer_level_of_information += 7 * test.a_ct.values/8158
sub.answer_level_of_information /= sub.answer_level_of_information.max()
</code></pre> <h1>Spearman's Rank Correlation</h1> <p>As pointed out in this discussion post <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/118724">here</a>, the Spearman's rank correlation metric can be improved if you convert continuous predictions into more discrete predictions when the true targets are discrete. By applying a grid search on my OOF, I found that applying <code>numpy.clip( pred, clippings[col][0], clippings[col][1] )</code> can maximize this metric using</p> <pre><code>clippings = { 'question_has_commonly_accepted_answer':[0,0.6],
              'question_conversational':[0.15,1],
              'question_multi_intent':[0.1,1],
              'question_type_choice':[0.1,1],
              'question_type_compare':[0.1,1],
              'question_type_consequence':[0.08,1],
              'question_type_definition':[0.1,1],
              'question_type_entity':[0.13,1] }
</code></pre> <h1>UPDATE</h1> <p>During the competition, I only built and trained one <code>bert-base-uncased</code> model. Other models didn't have good CVs. But after reading all the great solutions, I just built a <code>roberta-base</code> model by first applying <code>def cln(x): return " ".join(x.split())</code>, which removes extra white space (as described in the 8th place solution <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129857">here</a>). </p> <p>It turns out that <code>RobertaTokenizer.from_pretrained('roberta-base')</code> does not remove white space by itself (whereas the bert-base-uncased tokenizer does). Using the same architecture as my bert-base, my Roberta has CV 0.395. When I ensemble it with my bert-base CV 0.392, the new CV is 0.412. That's an increase of 0.020 over my bert-base. </p> <p>I just submitted to the leaderboard and it increased my LB by 0.015 and my rank by 40 spots! Wow! I just learned how to make better NLP models. I can't wait for the next competition ! </p> <p>Thanks. I tried using them as inputs but I couldn't get that to work well. Instead I ensembled them with BERT outputs. 
The boosts were as follows: <code>question_type_spelling</code> gave +0.007, <code>answer_level_of_info</code> gave +0.003, <code>question_interesting_self</code> and <code>question_interesting_others</code> gave +0.001. And post process gave +0.030. So In total it increased CV/LB by +0.041.</p>
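<p>To reproduce the kind of meta-feature analysis described in this post, a minimal sketch looks like this (the exact feature set, target columns and LGBM parameters here are illustrative, not the ones used for the figures):</p> <pre><code>import lightgbm as lgb
import pandas as pd
from scipy.stats import spearmanr
from sklearn.model_selection import train_test_split

df = pd.read_csv('train.csv')

# a few of the meta features described above
df['question_body_word_count'] = df.question_body.map(lambda x: len(x.split()))
df['answer_word_count'] = df.answer.map(lambda x: len(x.split()))
for col in ['question_user_name', 'answer_user_name', 'host', 'category']:
    df[col + '_freq'] = df[col].map(df[col].value_counts())   # frequency encoding

features = ['question_body_word_count', 'answer_word_count',
            'question_user_name_freq', 'answer_user_name_freq',
            'host_freq', 'category_freq']
targets = ['question_body_critical', 'question_interestingness_others',
           'answer_level_of_information']

tr, va = train_test_split(df, test_size=0.2, random_state=42)
for target in targets:
    model = lgb.LGBMRegressor(n_estimators=200, num_leaves=31)
    model.fit(tr[features], tr[target])
    score = spearmanr(va[target], model.predict(va[features])).correlation
    print(target, round(score, 3))
</code></pre>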
Google QUEST Q&A Labeling
7th place post-processing and ensembling
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!</p> <h2>Post Processing</h2> <p>Our post-processing strategy is very similar to <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129840">the 1st place team</a>. </p> <p>The observation is that Spearman is related to the ranks of the samples, i.e. it is based on the relative values rather than the absolute values of the predictions, and it prefers predictions whose distribution is similar to that of the labels.</p> <p>Our assumptions are:</p> <ol> <li><p>The model we built cannot predict the absolute values well for very imbalanced columns like <code>question_type_spelling</code>, but it can, hopefully, differentiate examples with higher labels from those with lower labels.</p></li> <li><p>For each column, the distribution of labels in the public/private test set is close to that of the training set.</p></li> </ol> <p>So what we have to do is just help our model to <strong>align</strong> the predictions to the distribution of this column in the training set. It turns out that our assumptions do not hold true for every column, so we analyzed the performance of this method on every column and disabled the post-processing on several of them.</p> <p>The related notebook is <a href="https://www.kaggle.com/pyxiea/postprocessing-for-google-quest">here</a></p> <h2>Ensembling</h2> <p>Our final solution consists of 5 models: bert-base-uncased, bert-wwm-uncased, bert-wwm-cased, bert-large-uncased, roberta-large. This scheme lacks diversity, but we could not get good results with many other pre-trained models. We train each model with 5 folds and 5 epochs. </p> <p>We noticed that, generally, the better the original prediction, the more benefit we gain from post-processing, so for each fold we simply average the predictions of the 5 models to get a more accurate prediction and then apply PP to it, giving us 5 predictions for the test set. Then we average the predictions of all folds to get a single prediction.</p> <p>It turns out that the averaging process, though very powerful and necessary, changes the distribution of the prediction to some extent, so we heuristically apply the PP again to the final prediction.</p>
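<p>The alignment step can be sketched as follows for a single column (an illustration, not the notebook's exact code; <code>preds</code> and <code>train_labels</code> are assumed 1-D arrays): the sorted predictions are re-labelled so that the frequency of each output value matches its frequency in the training labels.</p> <pre><code>import numpy as np

def align_to_train_distribution(preds, train_labels):
    """Map ranked predictions onto the train label values, preserving order."""
    values, counts = np.unique(train_labels, return_counts=True)
    proportions = counts / counts.sum()
    # boundaries of each value's share among the sorted predictions
    cut_points = np.floor(np.cumsum(proportions) * len(preds)).astype(int)

    order = np.argsort(preds)                 # ranks, ties broken arbitrarily
    aligned = np.empty_like(preds, dtype=float)
    start = 0
    for value, stop in zip(values, cut_points):
        aligned[order[start:stop]] = value
        start = stop
    aligned[order[start:]] = values[-1]       # guard against rounding leftovers
    return aligned
</code></pre>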
Google QUEST Q&A Labeling
83rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Github repo: <a href="https://github.com/btrotta/kaggle-google-quest">https://github.com/btrotta/kaggle-google-quest</a></p> <p>My code relies heavily on the following public notebook: <a href="https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer">https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer</a> My BERT modelling code is almost identical to that kernel; the only change I made was to insert a special token between the question title and body. (I'm not sure whether this had any real effect.) As in the script, I did 5 folds and 3 epochs.</p> <p>To optimise the Spearman rho metric, I found it was helpful to limit the predicted scores to a smaller number of distinct values. The way I did this was by post-processing using a LightGBM model with small leaf size (3) and few iterations (20). The model just takes a single predictor (the output prediction of the BERT model) and optimises the cross-entropy loss. This gives a boost of around 0.02 on the private leaderboard, for a final score of 0.39507.</p>
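<p>A sketch of this LightGBM-based rounding idea (a reconstruction under assumptions, not the exact code: <code>objective='xentropy'</code> is LightGBM's cross-entropy objective for labels in [0, 1], and the toy arrays stand in for one target column's OOF predictions, labels and test predictions):</p> <pre><code>import lightgbm as lgb
import numpy as np

rng = np.random.default_rng(0)
y = rng.choice([0.0, 1/3, 2/3, 1.0], size=2000)           # toy labels on a discrete grid
oof = np.clip(y + rng.normal(0, 0.15, y.size), 0, 1)       # toy OOF predictions
test_pred = np.clip(y + rng.normal(0, 0.15, y.size), 0, 1)

# Small, shallow ensemble: its piecewise-constant output collapses the
# predictions into a handful of distinct values, which suits Spearman's rho.
rounder = lgb.LGBMRegressor(objective='xentropy', num_leaves=3, n_estimators=20)
rounder.fit(oof.reshape(-1, 1), y)

rounded = rounder.predict(test_pred.reshape(-1, 1))
print(len(np.unique(rounded)), 'distinct values after rounding')
</code></pre>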
Google QUEST Q&A Labeling
9th public and private solution: 30 models, 1 for each column?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I'd like to thank the organizers and my teammate, <a href="/murphy89">@murphy89</a>, for this competition, and congrats to all the teams participating. I know the metric is not ideal, but I guess we all gained something from the competition. Secondly, well, that is a clickbait title, but we did do something of the sort when it came to the blending part. Anyway, here is the overview of our solution:</p> <h1>Overview</h1> <ul> <li><p>There is no special pre-processing of text, just basic cleaning like replacing &amp;amp; with &amp; and &amp;lt; with &lt; and some more broken signs.</p> <h2>Models</h2></li> <li><p>We split the models into 2 parts: models for predicting the first 21 question columns and models for the last 9 answer columns.</p> <h3>My models</h3></li> </ul> <p><strong>Question part</strong></p> <p>Question models take only the text from the question title and question body. If the text is long, we use the head-and-tail method. We use special tokens like site pages, categories... We take the last 4 layers, concatenate them and feed them into a 1-layer LSTM before taking the average pool and max pool, then 1-2 more FC layers at the end. I used bert base, bert large and xlnet. From each model I then vary a bit, like different layers to freeze, or taking the sum instead of concatenating layers. A sum of BCELogitsLoss over each column worked better for me; I guess it's similar to a weighted loss.</p> <p><strong>Answer part</strong></p> <p>Answer models take text from all 3 columns and also use the head-and-tail method. They do not use special tokens, as my CV was worse with special tokens in the answer part. The last 4 layers are handled almost the same as in the question part, except that this time it's mostly a 2-layer BiLSTM. Mostly bert base and bert large with slightly varying structures from the main model. Normal BCELogitsLoss. </p> <h2>Blending</h2> <p>I believe this is the key part in boosting our score. We discovered the right blending strategy only on the last day, when we saved the oof predictions and cross-validated offline. Although our single models have different CV scores, 0.411-0.426, they seemed better when blended together at certain weights. Later I found out that each model is better at predicting some columns, so I used a heuristic search for the best CV score of each column (a small sketch is given at the end of this post). For those columns that depend heavily on post-processing, the search is on the final values after post-processing. This is why the title says 30 models: I believe tuning specific models for specific columns works better together with the post-processing part. In fact, some models with a lower initial CV score ended up with higher weightage in this part. The best CV score after blending is 0.435, and after post-processing 0.478. The only regret is that when we were doing this, it was only 4 hours till the deadline, so some of our columns are not maximized in the final solution. 
In fact, when I tried again after the deadline, our CV could reach 0.482 with post-processing.</p> <h2>Post-processing</h2> <p>Our post-processing is not different from others'; on the contrary, I think it's even less effective since it's the simple rounding of predictions of some columns to the nearest values with certain thresholds. </p> <h2>What did not work for me</h2> <ul> <li>Translation augmentation. I had translated back and forth in French and Italian, but adding them did not help.</li> <li>SWA.</li> <li>Multi-dropout for me, but it worked for my teammate's, Morphy, architecture.</li> <li>Other models: xlnet-large, albert... The scores are worse in my case.</li> </ul> <p>thanks, it's my pleasure 😄 </p>
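<p>The per-column heuristic weight search can be sketched like this (an illustration, not the exact search we ran; <code>oof_list</code> is an assumed list of per-model OOF prediction arrays and <code>y</code> the true labels, both of shape [n_samples, n_targets]):</p> <pre><code>import numpy as np
from scipy.stats import spearmanr

def search_column_weights(oof_list, y, n_iter=2000, seed=0):
    rng = np.random.default_rng(seed)
    n_models, n_targets = len(oof_list), y.shape[1]
    best_w = np.full((n_targets, n_models), 1 / n_models)
    for col in range(n_targets):
        best_score = spearmanr(np.mean([o[:, col] for o in oof_list], axis=0),
                               y[:, col]).correlation
        for _ in range(n_iter):
            w = rng.dirichlet(np.ones(n_models))          # random weights summing to 1
            blend = sum(wi * o[:, col] for wi, o in zip(w, oof_list))
            score = spearmanr(blend, y[:, col]).correlation
            if score &gt; best_score:
                best_score, best_w[col] = score, w
    return best_w
</code></pre>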
Google QUEST Q&A Labeling
A general walk through my solution with trick 55th
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, this is how I do it:</p> <pre><code>def closest(num_list, num):
    '''
    num_list is an array of all values of the corresponding last 30 columns in the training set.
    '''
    idx = (np.abs(num_list - num)).argmin()
    return num_list[idx]
</code></pre> <p>But be careful. Some columns have a unique value after I did this modification. You need to find them and manually design a threshold so that the column values are not unique. Otherwise, you will get a submission error. It frustrated me a lot, but I finally figured it out by checking the discussion.</p> <p>Congratulations to all winners of this competition. Your hard work paid off!</p> <p>First, I have to say thanks to the authors of the following three published notebooks: <a href="https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer">https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer</a>, <a href="https://www.kaggle.com/abhishek/distilbert-use-features-oof">https://www.kaggle.com/abhishek/distilbert-use-features-oof</a>, <a href="https://www.kaggle.com/codename007/start-from-here-quest-complete-eda-fe">https://www.kaggle.com/codename007/start-from-here-quest-complete-eda-fe</a>.</p> <p>These notebooks showed awesome ways to build models, visualize the dataset and extract features from non-text data. </p> <p>Our initial plan was to feed <em>question title</em>, <em>question body</em> and <em>answer</em> all into one BERT-based model. But after we analyzed the distribution of the lengths of question bodies and answers, we found two major problems: 1. If we fitted all three parts as input, we had to adjust the input space for both <em>question body</em> and <em>answer</em> due to the limited input size of BERT-based models. In order to do this, we had to trim a bunch of text, which was a waste of training data. Also, the question of how to trim a long text immediately presented itself to us. 2. Roughly half of the question bodies and answers had code in them. When implementing tokenization, this caused us some trouble. The tokenization of the code looked extremely weird, and we didn’t know whether this would have a large effect on the model prediction.</p> <p>We later did some experiments (I will list some at the end of the post; these turned out to not help a lot) and finally decided to train two separate models. The first one fits BERT-based models with only <em>question title</em> and <em>question body</em>. The second one fits with only <em>question title</em> and <em>answer</em>. In this way, almost all question bodies and answers can be fitted into the models. </p> <p>Like other authors in the discussion, we split the 30 output features into two parts. We used the <em>question title</em> and <em>question body</em> to predict features related to questions and used the <em>question title</em> and <em>answer</em> to predict features related to answers. We trained the weights several times, and the final output is the average of several models with these weights. </p> <p>After doing all of these, we received a 0.415 on the public LB (ranked around 90 at that time). 
For the rest of the days, we shifted our focus to the data post-processing part cause at that time with the highest scores being over 0.480. But we didn’t believe the models we used had such a big difference. There must have been some trick parts that we had not come to realize. </p> <p>So, I checked and compared the distribution of the model prediction and the standard distribution of the training set. We had then found out the “magic” part: the ratings were DISCRETE in the training set. It makes sense, right? Probably raters could only choose certain scores that “make sense”. No one is going to give a score like 0.76354237 in real life :). So, I just did a simple adjustment: pick out every prediction output, find the closest score that “makes sense”, and then replace it. That simple change raised my scores and rank dramatically (LB over 0.450).</p> <p>I then plotted the distribution of prediction and the standard distribution of the training set again and fine-tuned (actually did some hard coding) every column. This process took me about two days. I submitted many versions, but it did not turn out well with the risk of overfitting. Finally, I selected one kernel with no fine-tuning and one with fine-tuning (hard coding) as my final submission, and the one with no fine-tuning won out.</p> <p>Things that didn’t work: 1. Since humans can still understand a sentence even if it missed some words, I tried to define a function that can randomly drop some words from the text and then make it compatible with the input. This did increase my public LB score around 0.003-0.004, but after we decided to build a model with separate two parts, it had little effect. 2. I roughly went through the text of the dataset and tried to find something in common among texts that have codes (It spent me almost a whole day &gt;_&lt; ). I then used some regular expressions and tried to delete all codes and links in all texts. However, I am not entirely sure if it was good to delete them. Creating some special tokens might be a better idea, but we really didn’t have enough time to do that. It turned out to not work that well and even made the score lower, so we decided to not use it. </p> <h2>UPDATE:</h2> <p>Here is an example when I tried to compare the distribution of the training dataset and the prediction.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1801021%2F496a430d255033e9002f4a38e7b9b828%2Fexample%20of%20dsitribution.png?generation=1581441524382191&amp;alt=media" alt=""></p> <p>In the following image, I was trying to mark which columns I need to do fine-tuning. The horizontal yellow lines indicated that I didn't need to do too much about those columns. The scores like 0.5 and 1.0 in other graphs indicated that I need to do some extra thresholding at those values. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1801021%2F13ff762ee5a47a3461971b82e3351f10%2Fstandard.png?generation=1581441885285144&amp;alt=media" alt=""></p> <h2>Update Technical Part:</h2> <h3><strong>Feature Engineering:</strong></h3> <p>Inspired by pubic kernels, we extract URLs from the <em>url</em> column and implemented one-hot encoding to both the URLs and category columns. There are 64 new feature columns (51 for <em>url</em> and 13 for <em>category</em>) after one-hot encoding. 
According to the paper <em>Universal Sentence Encoder</em>, The Universal Sentence Encoder (USE) encodes text into high dimensional vectors that can be used for diverse tasks. The input is the variable-length English text, and the output is a 512-dimensional vector. So, we encoded question title, question body, and answer into 512-dimensional vectors and then compared the L2 distance and cosine similarities between them. We then added all of these as new features. There are 512 + 512 + 64 + 2 = 1090 new features. Under the concern of overfitting, I tried to decrease the number of generated features by briefly reading through a few examples of each category and then manually reclassifying 51 categories into four sub-categories: <em>have code, daily life, academic, literature readings</em>. However, this didn’t improve the prediction.</p> <h3><strong>Model Structure:</strong></h3> <p>We used a combo of two pretrained Roberta base models to train the dataset. The first Roberta handles the question title and question body pairs, while the second Roberta handles the question title and answer pairs. We only fitted texts into Roberta models and ignored 1090 new features for now. The Roberta base model has 12 hidden layers and there are 13 layers in total (which includes the final output layer). Every layer has the output dimension *batch_size* x 512 x 768 (*batch _size x max_seq_len x emb_size*). Initially, I tried out convolutions with different filter sizes such as 4 x 768, 6 x 768 or 8 x 768 to capture the meaning of the combination of words. However, it turned out to not work that well, and the average of the whole sequences outperformed them. We took the last three output layers out, concatenated them together and then applied an average-pooling on it. After this, we averaged over 512 tokens for each input in the batch to capture the meaning of the whole sequence and hoped the model would learn some lower level features and added them to the final representation. Finally, we concatenated 1090 features with the average embedding and then added a fully connected layer with 21 units for the title-body pair model and 9 units for the title-answer pair model. I didn’t use the first unit of the output layer of the Roberta model since Roberta removes the <em>Next Sentence Prediction (NSP)</em> task from BERT’s pre-training, and the first unit seems not as useful as in Bert. The final predictions are the average of 6 pairs of Roberta models (6- fold). We were supposed to freeze Roberta models and then fine-tune the added fully connected layer, but we didn’t due to the time limit.</p> <h3><strong>Customized Learning rate:</strong></h3> <p>We used a range test in the beginning, to find out the proper learning rate during training. We then wrote up a customized scheduler inherited from PolynomialDecay to change the learning rate dynamically. We set up warm-up steps proportional to the total decay-steps to avoid the primacy effect at the beginning of the training process. The learning rate increases linearly over the warm-up period and we didn’t set up the learning rate to have cyclic behavior.</p> <h3><strong>Training:</strong></h3> <p>Like most of the other groups, we use GroupKFold with n_splits=5, did 6-fold cv for 6 epochs and saved the training weights with the highest cv scores. 
We used Adam optimizer with mixed-precision data types, which dynamically and automatically adjusting the scaling to prevent Inf or NaN values and saved training time.</p> <h3><strong>Post-processing:</strong></h3> <p>We first considered using one-hot encoding for all output columns, given that every column has finite discrete values, but this shouldn’t work well since it didn’t take the order of the values into account. Because of the time limit, we just compared the distribution of predictions for each column with the distribution of actual values in the training set and then adjust the threshold based on the distribution. We compared the scores between before post- processing and after post-processing to see if it improved performance.</p> <p>Discussion updated.</p>
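<p>Putting the <code>closest</code> idea from the top of this post together with the warning about columns collapsing to a single value, a vectorised sketch of the rounding step looks like this (the fallback of keeping the raw predictions for a degenerate column is an illustrative choice; in the actual solution a manual threshold was designed instead):</p> <pre><code>import numpy as np

def round_to_train_values(preds, y_train):
    """Snap each prediction to the nearest value seen in the training labels."""
    out = preds.copy()
    for i in range(preds.shape[1]):
        train_values = np.unique(y_train[:, i])
        idx = np.abs(out[:, i][:, None] - train_values[None, :]).argmin(axis=1)
        snapped = train_values[idx]
        # Guard: if a column collapses to a single value, Spearman's rho is
        # undefined and the submission is rejected, so keep the raw predictions.
        if len(np.unique(snapped)) &gt; 1:
            out[:, i] = snapped
    return out
</code></pre>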
Google QUEST Q&A Labeling
How We Found "Magic" (13th Solution Overview)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks. External features concatenated with CLS tokens then passed into regression heads.</p> <p>This is my first NLP competition and I am grateful to have gained my first gold. It was a great pleasure working with my talented and extremely hard working teammates <a href="/chanhu">@chanhu</a> <a href="/xiaojiu1414">@xiaojiu1414</a> <a href="/serigne">@serigne</a> <a href="/seif95">@seif95</a>.</p> <p>Our ensemble is nothing different from what other teams have shared, which includes bert base, bert large, xlnet, and roberta. I wanted to share a little bit my thought process on the submission trick. </p> <h3><strong>Before that</strong></h3> <p>a few modeling tricks we have used are 1) separate regression heads for questions and answers, 2) head and tail tokenization 3) concatenated [CLS] tokens as regression input 4) USE difference and entity embedding as external features in Bert</p> <p>I personally thought the magic was going to be clever loss functions and I tried many different losses including customized BCE based on ranking, but nothing significantly outperformed vanilla BCE. One thing worth noting is that, in the early stage of the competition, I trained an MSE model, which gave me a decent boost after simple blending with my BCE models, but it does not work well with the "magic".</p> <h3><strong>Back to the "magic"</strong></h3> <p>My teammate has shared <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129926">our post-processing code</a>. I am going to explain how we came up with each one of them. </p> <p>The first small trick is the culture list. This part was only meant for the 19th column, which is <code>question_type_spelling</code>. This came from the observation that this column is only non-zero when the host is <code>english.stackexchange.com</code> or <code>ell.stackexchange.com</code>.</p> <p>The second (and the more significant) part of the trick is building out equalities among the predictions, based on <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/118724">this discussion</a>. </p> <p><em><strong>The very first attempt</strong></em> was to round the output assuming there were 90 raters. I believe many of you might be familiar with this assumption. This gave us a big boost about 0.03. This was when I found out that my blending of BCE and MSE did not work so well when applying this trick. The blended model performed far worse than my single BCE model after post-processing. </p> <p><em><strong>An improvement</strong></em> on this trick came from a deeper look at the train labels. Assuming the 90-rater assumption were true, if I were among the raters, I would have given random points to some of the questions. This means if these scores are really given by humans, there should be 90 (or close) unique values for each column. It was easy to deduct the actual number of raters for each column based on the number of unique values. One comforting fact is that the lists of unique values are always the same for the columns with the same number of raters. 
</p> <p><em><strong>This improvement did NOT work as I expected initially.</strong></em> But I really believed this was the way to go from my observations. What I then realized was that because the number of raters was too sparse and the original predictions of some columns were really small, the rounding might have squeezed many of the predictions to 0. <strong><em>Min-max scaling</em></strong> these predictions followed naturally considering the competition metric only cares about ranking rather than the actual values. Scaling gave us another ~0.02 boost. </p> <h3><strong>At the end</strong></h3> <p>Again, I had great time working on this competition and learning about NLP. But as you can see, the main boost came from post-processing. I guess one takeaway is that if you hit the bottleneck of a competition, always take another look at the data. </p> <p>Cheers!</p> <p>The feeling is mutual! </p> <p>Thanks!</p>
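<p>A sketch of the rounding-plus-scaling described above (inferring the per-column rater count from the training labels, as done here, is an illustrative shortcut; the actual counts were deduced from the number of unique label values per column):</p> <pre><code>import numpy as np

def infer_n_raters(train_col, max_n=100):
    """Smallest n such that every training label is (close to) a multiple of 1/n."""
    for n in range(1, max_n + 1):
        if np.allclose(np.round(train_col * n), train_col * n, atol=1e-6):
            return n
    return max_n

def postprocess(preds, y_train):
    out = preds.copy()
    for i in range(preds.shape[1]):
        n = infer_n_raters(y_train[:, i])
        col = out[:, i]
        # min-max scale first so the rounding grid covers the full [0, 1] range
        col = (col - col.min()) / (col.max() - col.min() + 1e-9)
        out[:, i] = np.round(col * n) / n
    return out
</code></pre>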
Google QUEST Q&A Labeling
My First Medal (Bronze) Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>words of wisdom :)</p> <p>Thank you Google and Kaggle for the challenging competition (a good mixture of science and art imo)! </p> <p>Also, congrats to all teams! :)</p> <h2><strong>Pre-Processing:</strong></h2> <ul> <li>unidecode and html unescape</li> </ul> <h2><strong>Modelling:</strong></h2> <ol> <li>Fine-tune Bert base uncased on raw data + statistical features (LB: 0.384)</li> <li>Fine-tune Bert base uncased on pre-processed data + statistical features (LB: 0.384)</li> <li><p>Feature extraction model using GloVe, FastText, USE 5, Paragram and distilBert (LB: 0.371)</p> <ul><li>I used TF + hugging face (in hindsight, reproducibility was really problematic)</li> <li>All had similar architecture using Adam + BCE loss with sigmoid output on all 30 labels</li> <li>I used spatial dropout to reduce overfitting when fine-tuning Bert</li> <li>Plain vanilla 5 KFolds (lucky seed 42) combined with 2 epochs after early stopping</li> <li>TTA using pre-processed text and raw text</li></ul></li> </ol> <p>Ensembling them using simple mean (across folds, epochs, and models) got me LB: 0.394</p> <h2>Post-Processing:</h2> <ul> <li>Simple quantile threshold on "easy targets" mimicking train set labels (with a margin) e.g. <code>threshold = submission[c].quantile(0.75)</code> <code>submission[c].apply(lambda x: 1 if x &gt; threshold else x)</code></li> <li>"Easy targets" were selected based on the std and nunique of the train set labels</li> <li>Rounding predictions to 4 significant figures to account for "tied ranks"</li> </ul> <p>Post-processing outputs gave me a boost to LB: 0.410</p> <h2>What I tried that didn't work:</h2> <ul> <li>GRUs/LSTMs </li> <li>Regression NN with target transformation (similar CV but doesn't generalise well to public LB)</li> <li>Data augmentation (random deletion, random insertion, switching question/answers sequences, etc)</li> <li>XLNet base, Roberta base, Albert base, Bert cased could not beat my Bert base uncased baseline</li> <li>Rank averaging only helped if no post-processing was done</li> <li>Rounding predictions to nearest 1/90 </li> </ul> <h2>Biggest take-away:</h2> <p>Don't focus <em>too much</em> on public kernels and discussion. Just have a look, get some ideas, look at the data again and try your own ideas. (p.s. I think someone mentioned this on the discussion thread but I forgot which and who)</p>
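<p>As a concrete illustration of the quantile-threshold post-processing above, here is a small sketch that wraps the two code fragments from the write-up into one function; the 0.75 quantile and the list of "easy" columns are placeholders, and rounding to 4 decimal places stands in for the author's 4-significant-figure rounding.</p> <pre><code>import pandas as pd

def quantile_clip_easy_targets(submission, easy_cols, q=0.75):
    """Push predictions above a per-column quantile to 1.0 and create tied ranks by rounding."""
    out = submission.copy()
    for c in easy_cols:
        threshold = out[c].quantile(q)
        out[c] = out[c].apply(lambda x: 1.0 if x > threshold else x)
    out[easy_cols] = out[easy_cols].round(4)   # approximation of "4 significant figures"
    return out

# usage: easy_cols would be chosen from train-label std / nunique statistics, e.g.
# submission = quantile_clip_easy_targets(submission, easy_cols=["question_type_choice"])
</code></pre>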
Google QUEST Q&A Labeling
Private LB 10th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, I agree he gave in a lot of effort to 2 of the most important parts of our solutions.</p> <p>Credits for Weighted Loss goes to <a href="/cl2ev1">@cl2ev1</a>. He figured out based on some analysis of data and experiments with weights. I think he'll add more to answer your question.</p> <p>Congrats to the winners and Thanks to my teammates. ( <a href="/theoviel">@theoviel</a> <a href="/cl2ev1">@cl2ev1</a> <a href="/mathurinache">@mathurinache</a> <a href="/titericz">@titericz</a> ). Here's a brief overview of things that matter in our solution</p> <h3>Loss function</h3> <ul> <li>We sticked for a long time with the BCE and couldn’t improve results by changing it. We however managed to find weights that nicely fitted our post-processing policy.</li> </ul> <p><code> loss_fct = nn.BCEWithLogitsLoss(pos_weight=torch.Tensor([ 0.9, 1, 1.5, 0.8, 0.8, 0.8, 0.96, 1.1, 1.1, 3, 1, 1.1, 2, 3, 3, 2, 1, 2, 1, 2, 0.9, 0.75, 0.9, 0.75, 0.75, 0.7, 1, 2.5, 1, 0.75])) </code></p> <ul> <li>Multiple other experiments with custom loss functions were conducted, but BCE performed the best.</li> </ul> <h3>Training stuff</h3> <ul> <li>1 epoch with frozen encoder + 2 epochs with with everything unfrozen</li> <li>Linear lr scheduling, with a custom learning rate depending on the layer. The transformer has a lower one as it is pretrained, and the closer to the output, the larger the learning rate</li> <li>AdamW with <code>betas=(0.5, 0.999)</code> and a no bias decay of 1</li> <li>Batch size is 64 for the first epoch, and then the larger we can fit on our gpus with sometimes an accumulation step</li> </ul> <h2>Models</h2> <ul> <li><p>Our solution is a Ensemble of 4 models, (3 BERT-Large + 1 BERT-Base)</p></li> <li><p>We made most experiments using the bert-base-uncased architecture, and managed to build a strong pipeline about 1 week before the end of the competition. This enabled us to switch easily to bigger ones which in the end made the strength of our ensemble. </p></li> <li><p>We build 4 different architectures on top of the transformer, and used two of them overall.</p></li> </ul> <h3>Bert Base Uncased</h3> <ul> <li><p>Here is the idea about the custom arch one we picked for our bert-base approach :</p> <ul><li>Input is <code>[CLS] title [Q] question [A] answer [SEP]</code> and <code>[0, … 0, 1, …., 1, 2, …, 2]</code> for ids. We initialed special tokens with values of the <code>[SEP]</code> token. Custom ids were also initialized with the model values.</li> <li>Custom head : take the <code>[CLS]</code> token of the last <code>n=8</code> layers, apply a dense of size <code>m=1024</code> and a <code>tanh</code> activation, then concatenate everything. </li> <li>Embeddings for the <code>category</code>and host <code>column</code> (+ <code>tanh</code>). We concatenate them with the output of the custom pooler and obtain the logits. Some text cleaning (latex, urls, spaces, backslashes) was also applied</li></ul></li> <li><p>This model is the only one that uses text cleaning and embeddings, it helps for diversity I guess. </p></li> </ul> <h3>Bert Larges</h3> <ul> <li>They repose on the same architecture. 
We use two inputs : <br> <ul><li><code>[Q] title [SEP] question [SEP]</code> for tokens and <code>[0, … 0, 1, …., 1]</code> for ids</li> <li><code>[A] title [SEP] answer [SEP]</code> for tokens and <code>[0, … 0, 1, …., 1]</code> for ids</li> <li>[Q] and [A] start with the value of the [CLS] token</li></ul></li> <li>Again, custom pooling head. Values of <code>n</code> and <code>m</code> are below. </li> <li><p>Depending on the column, we either predict the value with only the pooled [Q] token, only the pooled [A] token, or both. The policy chosen is the following : <code> self.mix = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 2] # 0 = both, 1 = q, 2 = a </code></p></li> <li><p>Concerning the pooler, it differs a bit :</p> <ul><li>Bert large cased : <code>n = 24</code>, <code>m = 128</code></li> <li>Bert large uncased : <code>n = 24</code>, <code>m = 128</code></li> <li>Bert large uncased wwm : <code>n = 8</code>, <code>m = 768</code></li></ul></li> </ul> <h2>Post-processing</h2> <p>Detailed here : </p> <p><a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129901">https://www.kaggle.com/c/google-quest-challenge/discussion/129901</a></p> <h2>Results</h2> <h3>Single model</h3> <ul> <li><p>Our best model is the BERT-Large Whole Word Masking one, trained using Weighted BCE Loss. It gave : </p></li> <li><p>With Post-Processing:</p> <ul><li>Private LB : <strong>0.41919</strong></li> <li>Public LB : <strong>0.46428</strong></li> <li>CV: <strong>0.454</strong></li></ul></li> <li><p>Without post-processing :</p> <ul><li>Private LB : <strong>0.38127</strong></li> <li>Public LB : <strong>0.40737</strong></li> <li>CV: <strong>0.415</strong></li></ul></li> <li><p>Which is a +0.06 boost on public and +0.04 on private. As you can see, our single models are not that strong.</p></li> </ul> <h3>Ensemble</h3> <ul> <li>Our best selected solution is a simple average of the 4 mentioned models. <ul><li>Private LB : <strong>0.42430</strong></li> <li>Public LB : <strong>0.47259</strong></li></ul></li> </ul> <p>Thanks for reading, happy kaggling !</p>
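<p>The custom pooling head described above (take the [CLS] token of the last n layers, apply a dense of size m with a tanh activation, then concatenate) could be sketched roughly as follows. This is not the team's code: it assumes the transformer is called with output_hidden_states=True and uses one linear projection per layer, which is one possible reading of the description.</p> <pre><code>import torch
import torch.nn as nn

class CLSPooler(nn.Module):
    """Pool the [CLS] token of the last n transformer layers: dense + tanh per layer, then concat."""

    def __init__(self, hidden_size=1024, n_layers=8, proj_size=1024):
        super().__init__()
        self.n_layers = n_layers
        self.proj = nn.ModuleList([nn.Linear(hidden_size, proj_size) for _ in range(n_layers)])

    def forward(self, all_hidden_states):
        # all_hidden_states: tuple of (batch, seq_len, hidden) tensors, one entry per layer
        pooled = []
        for layer_states, dense in zip(all_hidden_states[-self.n_layers:], self.proj):
            cls_token = layer_states[:, 0]            # [CLS] is the first token
            pooled.append(torch.tanh(dense(cls_token)))
        return torch.cat(pooled, dim=-1)              # (batch, n_layers * proj_size)
</code></pre>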
Google QUEST Q&A Labeling
Team Bronze Medal Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all thanks to Google and Kaggle for the interesting competition, and to my teammates, <a href="/patilaum">@patilaum</a> and <a href="/amancframewala">@amancframewala</a> for the generous contribution and patience. Our approach involved ensembling several BERT-family architectures like Bert, Distil Bert, XLNet trained separately on Questions and Answers to better predict the multi-labels. We used 5 fold GroupKFold CV for validating our models. We also were inspired by some of previous winning approaches of concatenating the hidden states of the models for prediction. The models were averaged using rank-averaging which boosted the score by +0.002 Finally, we used some basic thresholding for an easy boost for highly imbalanced predictors.</p> <p>What did'nt work for us(but some worked for others): BART /T5 encoder modelling SWA Training the head separately StackExchange statistical features BERT+USE GLOVE LSTM</p> <p>Wish we had properly implemented stuff like BART given that the top solutions found a great boost from them. We learned a lot from this competition and thank all the people who posted on the forums with tips and their results. Special thanks to our lab, CoE@ VJTI, for the hardware infra. Looking forward to more such NLP competitions in the year.</p>
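<p>Since rank-averaging is credited with a small boost above, here is a minimal sketch of the idea: convert each model's predictions to ranks before averaging so that differently-scaled outputs contribute equally. It is a generic illustration, not the team's exact code.</p> <pre><code>import numpy as np
from scipy.stats import rankdata

def rank_average(per_model_preds):
    """Average rank-transformed predictions; per_model_preds is a list of 1-D arrays."""
    n = len(per_model_preds[0])
    ranked = [rankdata(p) / n for p in per_model_preds]   # map each model's scores to (0, 1]
    return np.mean(ranked, axis=0)

# toy usage for one target column with two models
blended = rank_average([np.random.rand(10), np.random.rand(10)])
</code></pre>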
Google QUEST Q&A Labeling
Two BERTs are better than one (2nd place solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the sponsors and kaggle for hosting such an interesting and challenging competition! Our work is an amazing team effort and sprint over a very short time period of two weeks during which we made progress each day. <br> Why two weeks? Because some of us just finished the TF2 QA competition and decided to reuse whatever NLP knowledge that was acquired there. Thanks to all teammates: @christofhenkel @cpmpml @philippsinger @dott1718 @maxjeblick. </p> <p><strong>Please</strong> also give them upvotes, we made a lottery roll of who will post the solution and everyone contributed equally to it.</p> <p>Our only regret is that the final score depends too heavily on the question_type_spelling column that has only 11 non zero target values in train data. Late submission experiments show we could easily improve, or degrade, our final rank by little tweaks on how we handle this column.</p> <h2>Brief Summary</h2> <p>Like most teams, we used pretrained models from huggingface implemented in pytorch. Our final blend consists of 5 transformer models, that have several differences to push diversity. However, they all have in common to use 2 separate inputs of up to 512 tokens length representing questions respectively answers. These 2 inputs go either through the same transformer or through 2 separate transformers. We changed the targets to an ordinal representation (setting &gt;target to 1) resulting in 170 target columns with each original column having n-unique target columns. Predictions were then generated by calculating the expected value.</p> <p>Most models were trained with differential learning rate, where the transformer gets a lr of 3-e5 and the model head(s) a lr of 0.005 and used a cosine schedule for training 3 epochs including warmup over one epoch. Optimizer were either AdamW or RAdam with weight decay of 0.01 using an effective batchsize of 8 (gradient accumulation).</p> <p><strong>Dual roberta-base:</strong> - double roberta-base - added question title as input to answer-transformer - individual head for each target - mean of last layer output</p> <p><strong>Dual roberta-base:</strong> - double roberta-base - added question title as input to answer-transformer - individual head for each target - mean of last layer output</p> <p><strong>Siamese roberta-base with softmax layer weight:</strong> - double roberta-base - added question title as input to answer-transformer - individual head for each target - mean of last layer output</p> <p><strong>Dual roberta-large:</strong> - only use 256 tokens for question / answer - double roberta-large - only fits in memory with batchsize 1 and fp16</p> <p><strong>Dual xlnet:</strong> - same as dual roberta-base but with xlnet-base backbone - We used a simple average of probabiity predictions to ensemble our models. </p> <h2>Longer Summary</h2> <h3>Binary Encoded Targets</h3> <p>Given the metric is rank-based, and given targets are not binary, it seemed important to be able to predict values that are neither 0 or 1 correctly. We tried mse loss and other variants but results were not satisfactory. We then decided to use binary cross-entropy with binary targets. 
The first try was to use on hot encoding of targets, given that targets have a small number of distinct values. This wasn’t satisfactory either, because this representation loses the ordering of values. We ended up using an encoding of the form <code>(t &gt; v)</code> for all v values of target t, except last value. For instance, if a target t has unique values <code>[0, ⅓, ⅔, 1]</code> then we would get 3 binary targets: <code>t &gt; 0, t&gt; ⅓</code>, and <code>t &gt; ⅔</code>. Assuming we get perfect probability predictions <code>p(0), p(⅓ )</code>, and <code>p(⅔)</code> for the binary targets, we can compute predictions for each value of the original target as: <code>t(0)= 1 - p(0) t(⅓)= p(0) - p(⅓) t(⅔)= p(⅓) - p(⅔) t(1)= p(⅔) - 0</code> And the original target value as <code>t = 0 * t(0) + ⅓ * t(⅓) + ⅔ * t(⅔) + 1*t(1)</code> We keep that computation with the actual predictions <code>p(0), p(⅓ )</code>, and <code>p(⅔)</code> for the binary targets. <br> When the values are evenly spaced, as in this example, then the formula simplifies into: <code>t = mean(p(0), p(⅓ ), p(⅔))</code> If we assume that what matters is only the order of values, then we can stick to this simplified form. We used the simplified form throughout the competition as it was simpler to code, and faster. In our final two selected subs we reverted to the exact computation because it was better on both cv and public LB, but this didn’t change private LB significantly (0.0004 difference). </p> <h3>Validation scheme</h3> <p>Similar to most teams, we use a 5-fold GroupKFold on question body for fitting our models and evaluation. After a bunch of experiments we saw though, that this is not sufficient mainly due to the following aspects: - Test data is different to training data in the sense that it only has one question-answer pair sampled out of a group of questions in train data. There can be stark noise for labels of the same question, which is why this needs to be addressed robustly. - There are a few columns with very rare events and also a lot of noise within those rare events. The prime example is spelling which has a huge impact on CV and LB, but can completely blind your view when trying to judge the overall strength of your model.</p> <p>That is why we settled after some time on the following full validation scheme: - Use 5-fold GroupKFold on question body - For each validation fold, sample 100 times randomly a single question-answer pair out of multiple questions. - Calculate the median score across these 100 samples and report. - Ignore spelling column. - Final CV is a mean of 5 folds. This setup also allowed us to properly test any type of postprocessing in a realistic manner as described next.</p> <h3>Postprocessing</h3> <p>Even though we had really strong models, post-processing was very important to us and we spent significant time on it to find a robust way that we can trust. It is based on threshold clipping. For each target column, we separately attempt to find optimal thresholds from both sides (starting from lowest predictions and highest predictions) based on which we clip the data. 
So let’s assume our predictions look like <code>x=[0.01, 0.015, 0.02, 0.03, 0.04]</code> and our optimal thresholds are <code>coefs=[0.016, 0.029]</code> then we would clip the data with <code>np.clip(x, coefs[0], coefs[1])</code> leading to <code>x=[0.016, 0.016, 0.02, 0.029, 0.029]</code> effectively generating ties at the edges.</p> <p>The CV setup from above gave us a perfect way to fully validate any of our approaches by simply generating the thresholds on training folds, and applying them on truncated samples of the validation folds. For final PP, we get the thresholds on full oof and apply them on the test set. In a nutshell, our routine looks like the following: - Sample 1000 times single question-answer pairs from multiple questions - Generate thresholds that optimize the median score of all 1000 samples</p> <p>We tested quite a few different strategies, but this one was the most robust one we found and we are quite happy with it. Only for spelling, it was still a bit shaky, which is why our two final subs just differ in how we handle spelling column, one uses the calculated thresholds, and one hard-sets the 6 highest predictions in private LB to 1 and rest to 0 which is based on experiments on samples. Unfortunately, if we would have post-processed all columns except spelling and would have kept spelling as is, we would have reached 0.432 on private LB as apparently spelling is very differently distributed on private. No CV experiment would have let us make this decision though. It would have been so much better though to not include this column into the competition.</p> <h3>Architectures and training models</h3> <p>Similar to other recent competitions at first it was quite difficult to beat public kernels, which we normally see as a kind of baseline. All “normal” tricks that worked in past computer vision or other NLP competitions (e.g. concat of Max and Mean pooling) did not improve cv. Also using a sliding window approach to capture more text did not work. We saw a first big improvement when using 2 transformers instead of one (+0.01), but still, any slightly more complex architectures lead to a worse result. That changed when we tried to freeze the transformers and only train the model head for 1 epoch, before fine-tuning. With this approach, we were able to try more fancy things and that is also how we finally came to the two main architectures. We further improved than this 2 step approach by using different learning rates for transformer and head, together with a warm up schedule, which enabled us to get rid of the freezing step in general. We illustrate the two main architectures in the following figure (For the sake of simplicity we show the architecture for the original 30 targets. It was adapted to work with the binarized targets.). <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1435684%2F0bb9b707b885e27cb1a09b0df625a0dd%2FBildschirmfoto%202020-02-11%20um%2012.11.51.png?generation=1581431173679092&amp;alt=media" alt=""></p> <h3>Dual Transformer:</h3> <p>The upper architecture shows the dual transformer model, a combo model where one transformer handles the question text, while a second transformer handles the answer text. the output of the last layer is then averaged over the 512 tokens and both resulting tensors are concatenated. The resulting representation is then fed into 30 little 2 fully connected layers which result in the target predictions. 
30 little heads enable each target to have its own head and gain individuality.</p> <h3>Siamese Transformer with soft weighted layers:</h3> <p>It is inspired by the <em>Elmo paper</em>, where the final embedding representation is a weighted average of all LSTM layers. We use the output of every layer of a single transformer model in which we put 512 question and 512 answer tokens. For roberta-base that will give us twelve 512x768 tensors (for roberta large it would be 24 512x1024 tensors). We then average over the 512 tokens for each layer which results in twelve 768 representations. We then take a weighted sum of these 12 representations (where the weights for adding the representations are trainable!). This results in a 768 representation. The weighted average of all layer outputs enables to capture low level features in the final representation which was quite important for some answer related targets. Finally, we add a single prediction head for getting our targets. </p> <p>Our final ensemble contains 5 models which belong to the one or the other architecture, and only differ in used pretrained backbone. We ended up with - 2x dual roberta-base - dual roberta-large (2x 256 tokens) - dual xnet-base - siamese roberta-large with weighted averaged layers as this combination had the best cv score while fitting in the 2h kernel runtime requirement. </p> <p>We even trained one dual roberta-large model which on a V100 can only be trained when using a batch size of 1 and fp16. and was our best single model. Although this model could not fit into the ensemble due to runtime reasons, the promising cv results let us to a daring experiment: reducing the number of tokens used. There we saw something quite surprising. We could reduce the number of tokens to 256 for question as well as for answer without losing much quality. </p> <p>Most models were trained with differential learning rate, where the transformer gets a lr of 3-e5 and the model head(s) a lr of 0.005 and used a cosine schedule for training 3 epochs including warmup over one epoch. Optimizer were either AdamW or RAdam with weight decay of 0.01 using an effective batchsize of 8 (gradient accumulation).</p> <h3>Blending</h3> <p>Our submission includes an equally weighted blend between all of above mentioned 5 models (leading to 25 test predictions with 5-fold). The blend is conducted on raw expected values calculated as described above. We experimented quite a bit with ranked blending as it seemed natural, but results were always worse. Our final blend is both our best one on CV, public LB and private LB meaning that we had a robust CV setup in the end and our selection was solid.</p> <h3>Wrapping things up and putting into kernel</h3> <p>Training was conducted offline and inference in kernel. Our final runtime was close to two hours and we had to spend some efforts to squeeze the 5 models in. All our models are implemented in Pytorch using the amazing Huggingface library. Fitting was done either locally or on cloud providers. </p> <p>Thanks for reading.</p>
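<p>To make the "Binary Encoded Targets" idea concrete, here is a small sketch of encoding one target column as (t &gt; v) indicators and decoding predictions back to an expected value, using the toy [0, 1/3, 2/3, 1] grid from the text. It follows the exact-computation formulas quoted above; everything else (shapes, names) is illustrative.</p> <pre><code>import numpy as np

values = np.array([0.0, 1 / 3, 2 / 3, 1.0])     # unique values of one target column

def encode(t):
    """Binary targets (t > v) for every unique value except the last."""
    return (t[:, None] > values[:-1][None, :]).astype(float)

def decode(p):
    """Expected value from predicted probabilities p[:, i] = P(t > v_i)."""
    probs = np.hstack([
        1 - p[:, :1],           # P(t = v_0)    = 1 - P(t > v_0)
        p[:, :-1] - p[:, 1:],   # P(t = v_i)    = P(t > v_{i-1}) - P(t > v_i)
        p[:, -1:],              # P(t = v_last) = P(t > v_{last-1})
    ])
    return probs @ values

t = np.array([0.0, 1 / 3, 1.0])
print(decode(encode(t)))        # recovers [0.0, 0.3333..., 1.0]
</code></pre>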
Google QUEST Q&A Labeling
our magics
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for sharing. It's a very strong model !</p> <p>Congrats to the winners. It was a hard and fun competition. Thanks to my teammates. ( @Yang Zhang @chizhu <a href="/chensheng">@chensheng</a> @Guan_Yuhang ). </p> <p>We found some magic in this competition.</p> <h1>1. We split the 30 targets into two parts.</h1> <p>One is the question part, the other is the answer part. The question part has 21 targets. The answer part has 9 targets. We train two models to predict the two parts. This let us get a higher score (0.39x) at the very beginning. </p> <h1>2. We use a customized model for transformers.</h1> <p>a) Add additional layers. For the roberta base model, we add a 13th layer with random initialization. For the roberta large model, we add a 25th layer with random initialization. b) Concatenate the CLS token embedding and SEP token embedding together. </p> <p>c) Concatenate the category features and length features.</p> <h1>3. Pre-process for data</h1> <p>We tried many pre-process methods for transformers. Only one is useful: <code>def cln(x): return " ".join(x.split())</code></p> <p>This very simple function boosts the roberta base lb score by 0.005.</p> <h1>4. We use external data for training.</h1> <p>It comes from <a href="https://www.kaggle.com/stackoverflow/stackoverflow">https://www.kaggle.com/stackoverflow/stackoverflow</a>. We got the idea from the paper 'Self-training with Noisy Student improves ImageNet classification'. The teacher gives a large set of weak pseudo labels to the students. The students use the weak pseudo labels for training. Firstly we use one model to predict the pseudo labels for the unlabeled data. Then we use the pseudo labels to train our models. </p> <p>For 6k pseudo labels, we boost the lstm model lb score from 0.365 to 0.372 (private score from 0.332 to 0.343). For 50k pseudo labels, we boost the roberta base lb score from 0.405 to 0.415. For 100k pseudo labels, the roberta base lb score is still 0.415.</p> <p>We did not have much time for training powerful student models. </p> <h1>5. Post-process for data</h1> <p>It is the big magic for this competition. Without post-processing, our roberta large model got a 0.411 lb score. When we used a simple rounding post-process, it jumped to a 0.448 lb score. </p> <p>Then we combined several optimization methods for post-processing, which made roberta large jump to a 0.46x lb score. A very big boost!</p> <p>Thanks to my teammates again. You are geniuses.</p> <h1>6. Ensemble</h1> <p>We ensembled 6 models (roberta large, roberta base, bert base and xlnet. The best model is roberta base, lb 0.415). We used post-processing for them and got the final score.</p> <h1>Which did not work:</h1> <ol> <li>Data augmentation. We spent much time on it, but did not get a boost.</li> <li>Loss function. We tried mse loss, dice loss, focal loss; they did not boost our scores.</li> <li>Training one model for one target. </li> <li>lstm+use. We could not get a high score for lstm+use. We want to see <a href="/sakami">@sakami</a>'s lstm solutions.</li> </ol> <p>The question asker is the same one, but the answer people are quite different. By splitting into two parts, we can use more sentences for the bert models. </p> <p>Can you share your solution? Your team is powerful too. 
</p> <p>The last encoder SEP token is taken like this: <code>layer, _, encoded_layer = self.mdl(x, attention_mask=attn_msk); pooled_last_21 = layer[:, -1]</code></p> <p>No. We use the post-processing. My post cannot display all the information. :(</p> <p>@Serigne You are welcome. :)</p> <p>As I mentioned, firstly you need a teacher model to predict the unlabeled data and get the pseudo labels. Then you can add the pseudo labels into the train data for your student models. You can see the details in the paper '<a href="https://arxiv.org/abs/1911.04252">Self-training with Noisy Student improves ImageNet classification</a>'. </p> <p>I got this idea in the 'Jigsaw Unintended Bias in Toxicity Classification' competition. I wanted to copy one of the 12 encoder layers and extend the bert base model into 13 encoders. But it did not work. After trying many times, it suddenly worked when I added a new encoder layer with random initialization.</p> <p>Thanks for your good explanations. In fact, we just want to remove the many '\n' tokens and make the sentences shorter. And this function works. : )</p>
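<p>A minimal sketch of the noisy-student style pseudo-labelling described in section 4, under the assumption of a pandas/scikit-learn-like interface: a trained teacher predicts soft labels on unlabeled StackExchange rows, and those rows are appended to the training frame for the student. The function and parameter names are placeholders, not the team's pipeline.</p> <pre><code>import pandas as pd

def build_pseudo_train(teacher, train_df, external_df, target_cols, n_pseudo=50_000, seed=42):
    """Append teacher-labelled external rows (weak/soft pseudo labels) to the train set."""
    unlabeled = external_df.sample(n=n_pseudo, random_state=seed).reset_index(drop=True)
    pseudo = unlabeled.copy()
    pseudo[target_cols] = teacher.predict(unlabeled)          # soft pseudo labels from the teacher
    return pd.concat([train_df, pseudo], ignore_index=True)   # the student trains on this frame
</code></pre>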
Google QUEST Q&A Labeling
silver solution (44th)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>USE - <a href="https://tfhub.dev/google/universal-sentence-encoder/4">https://tfhub.dev/google/universal-sentence-encoder/4</a> I did fine tune the Bert sequence classification class from Huggingface. Once you customize this class you can pass whatever features you like, I just had a simple linear layer in the customized class. </p> <p>Congratulations to winners! And Thank you to Kaggle for hosting this. </p> <p>My Bert model had USE features + Crawl embeddings. USE features improved my score quite a bit. Crawl not so much. I had answers and questions model separate, a little later I realized I could have saved lot of time by having QA branch in the model itself (too late). My post processing was from public kernel. (All PyTorch) </p> <p>Roberta was surprisingly good pretty close to Bert on public LB, considering the amount of time I spent on Roberta vs Bert. This morning I realized that my Roberta model isn't using USE Features - it was too late, had to let it go. </p> <p><strong>Preprocessing:</strong> None for Bert, For crawl I used the preprocessing from public kernel. <strong>Tokenization</strong> Head + Tail approach. Custom loss function</p> <p>I'll have my training code on GitHub and I'll make my inference kernel also public. </p> <p>Look forward to reading better solutions ! </p> <p>Edit: <a href="https://www.kaggle.com/rashmibanthia/googlequest-inference?scriptVersionId=28474231">Inference Kernel</a></p>
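<p>Head + tail tokenization, mentioned above, keeps the beginning and the end of a text that exceeds the transformer's length limit. Below is a generic sketch; the 512-token budget and the head/tail split are assumptions, not the author's exact settings.</p> <pre><code>def head_tail_truncate(token_ids, max_len=512, head_len=256):
    """Keep the first head_len and the last (max_len - head_len) tokens of an over-long sequence."""
    if len(token_ids) <= max_len:
        return token_ids
    tail_len = max_len - head_len
    return token_ids[:head_len] + token_ids[-tail_len:]

# usage: ids = head_tail_truncate(tokenizer.encode(question_title + " " + question_body))
</code></pre>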
Google Research - Identify Contrails to Reduce Global Warming
102nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you!<br> May the duck T-shirt be with you.</p> <p>First of all, I want to thank the organizers who hosted such a great competition and the kagglers who generously shared their knowledge. <br> I did not intend to write up my solution because my rank is not fascinating and I'm busy with the entrance exam for graduate college, but I couldn't resist the temptation of a duck T-shirt…</p> <h1>Model</h1> <p>I created two models: TwoStageModel and TwoStageModelMultiChannel.</p> <pre><code>import torch
import torch.nn as nn

# seg_models maps config.seg_model to a segmentation architecture (e.g. an smp model);
# its definition was not shown in the post. Arguments whose values did not survive the
# post's formatting are filled with assumed defaults.

class TwoStageModel(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.config = config
        self.model = seg_models[config.seg_model](
            encoder_name=config.encoder_name,
            encoder_weights="imagenet", in_channels=3, classes=1, activation=None,
        )
        self.model2 = seg_models[config.seg_model](
            encoder_name=config.encoder_name,
            encoder_weights=None, in_channels=3, classes=1, activation=None,
        )
        # stage-2 input: stage-1 mask concatenated with the 3-channel image (sizes assumed)
        self.conv2d = nn.Conv2d(1 + 3, 3, config.kernel_size, padding=config.padding)

    def forward(self, x):
        out = self.model(x)                  # stage-1 mask logits
        out2 = torch.cat([out, x], dim=1)
        out2 = self.conv2d(out2)
        out2 = self.model2(out2)             # stage-2 refinement
        return out2, out


class TwoStageModelMultiChannel(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.config = config
        self.model = seg_models[config.seg_model](
            encoder_name=config.encoder_name,
            encoder_weights="imagenet", in_channels=3, classes=1, activation=None,
        )
        self.model2 = seg_models[config.seg_model](
            encoder_name=config.encoder_name,
            encoder_weights=None, in_channels=3, classes=1, activation=None,
        )
        # squeeze the 9 input bands down to 3 channels before the first U-Net (sizes assumed)
        self.conv2d1 = nn.Conv2d(9, 3, config.kernel_size, padding=config.padding)
        self.conv2d2 = nn.Conv2d(3, 3, config.kernel_size, padding=config.padding)
        self.conv2d3 = nn.Conv2d(1 + 3 + 9, 3, config.kernel_size, padding=config.padding)

    def forward(self, x):
        out = self.conv2d1(x)
        out2 = self.conv2d2(out)
        out3 = self.model(out2)              # stage-1 mask logits
        out2 = torch.cat([out3, out, x], dim=1)
        out2 = self.conv2d3(out2)
        out2 = self.model2(out2)             # stage-2 refinement
        return out2, out3
</code></pre> <h1>Encoder</h1> <p>I used 'timm-resnest26d', 'efficientnet-b6', 'vgg19_bn', and so on. Large models gave me better performance.</p> <h1>Data</h1> <p>I used false-color images for ContrailsModel2Stage and all 9 bands for TwoStageModelMultiChannel.</p> <h1>Augmentation</h1> <p>HorizontalFlip(p=0.1),<br> VerticalFlip(p=0.1),<br> RandomRotate90(p=0.1),<br> ShiftScaleRotate(shift_limit=0.0625/2, scale_limit=0.2/2, rotate_limit=15, p=0.1, border_mode=cv2.BORDER_REFLECT)</p> <h1>TTA</h1> <p>As mentioned in others' write-ups, data flip/rotation TTA severely hurt cv scores. I felt something was wrong but thought this was because of <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/discussion/419994" target="_blank">data leakage</a> and ignored it.<br> Instead of flip/rotation TTA, I applied a 1-pixel slide TTA and calculated the max value of the original predictions and the TTA predictions. A 1-pixel right-down slide and left-up slide were effective and gave me about a 0.01 score boost, but a 1-pixel right-up slide and left-down slide didn't work. I should have given more thought to why this happens…<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9268451%2Fe104d2aef5442825ea965e711a0b5856%2F2023-08-18%20191942.png?generation=1692354026668584&amp;alt=media" alt=""></p>
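<p>The 1-pixel slide TTA above could be sketched as follows: shift the input by one pixel, predict, shift the prediction back, and take the element-wise max with the plain prediction. This is an illustration only; it assumes a model that returns a single-channel logit map and uses torch.roll for the slide, which may differ from the author's implementation.</p> <pre><code>import torch

@torch.no_grad()
def predict_with_slide_tta(model, x):
    """Max of the plain prediction and a 1-pixel right-down slide TTA prediction."""
    pred = model(x).sigmoid()
    x_shifted = torch.roll(x, shifts=(1, 1), dims=(-2, -1))      # slide right-down by 1 pixel
    pred_shifted = model(x_shifted).sigmoid()
    pred_shifted = torch.roll(pred_shifted, shifts=(-1, -1), dims=(-2, -1))  # slide back
    return torch.maximum(pred, pred_shifted)
</code></pre>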
Google Research - Identify Contrails to Reduce Global Warming
105th Place Solution for the Google Research - Identify Contrails to Reduce Global Warming Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>1. Context section</h1> <ul> <li><a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/overview" target="_blank">Business context</a></li> <li><a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/data" target="_blank">Data context</a></li> </ul> <h1>2. Overview of the Approach</h1> <p>In this competition, the motivation is to contribute to the improvement of contrail prediction models, which are used to predict the formation of contrails generated by aircraft engine exhaust. Contrails are clouds of ice crystals with a line shape. They form when aircraft fly through super humid regions in the atmosphere and contribute greatly to global warming by trapping the heat in the atmosphere. Our goal is to validate the models that use satellite imagery as input data and then provide airlines with more accurate ways to mitigate the problem of climate change by avoiding the formation of contrails.</p> <p>In this study, we addressed the contrail detection challenge using geostationary satellite imagery from the GOES-16 ABI. Our approach involved data preprocessing for standardization and ash RGB image creation, contrail annotation with IoU-based object detection, and model training with EfficientNet and Dice coefficient loss in a K-Fold cross-validation setup. A distinctive feature was threshold selection for image masking, with a 0.52 threshold derived from the fifth image's Dice coefficient distribution. Validation included an error metric assessing labeling accuracy. Importantly, the Dice coefficient distributions from training on the fifth image alone and additional-image training closely aligned, confirming the chosen threshold's effectiveness.</p> <h1>3. Details of the submission</h1> <h2>3.1 Data Exploration</h2> <p>The data used in this competition are geostationary satellite images, sourced from the GOES-16 Advanced Baseline Imager (ABI). The images are provided as a sequence of 10-minute interval snapshots, where each sequence consists of eight images, with labeling applied only to the fifth image. Each dataset entry is represented by a unique record_id and contains precisely one labeled frame. The training dataset offers both individual label annotations and aggregated sound truth annotations, while the validation data only contains the latter.</p> <h2>3.2 Data Preprocessing</h2> <h3>3.2.1 Image Standardization</h3> <p>Data preprocessing plays a crucial role in this competition. The provided data consists of eight different spectral bands, and the pixel values are not in the standard 0-255 range. 
To make the satellite data suitable for our models, we first normalize and transform them using the following two steps:</p> <ul> <li>Map the pixel values of each spectral band to the standard range of 0 to 255 to create visualizable images;</li> <li>Create an ash RGB image by combining these eight bands, forming a three-channel color image.</li> </ul> <h3>3.2.2 Contrail Annotation</h3> <p>Next, we use the Intersection over Union (IoU) method to locate and annotate the contrails in the images. Our steps are as follows:</p> <ul> <li>Utilize object detection models to detect contrails within the ash RGB images. Outline them using bounding boxes;</li> <li>For potentially overlapping boxes, calculate the IoU to determine the degree of overlap between the boxes;</li> <li>Based on the calculation, merge smaller boxes with significant overlap into larger bounding boxes to enhance annotation accuracy and readability.</li> </ul> <h2>3.3 Model Training</h2> <h3>3.3.1 Random Crop and Image Selection</h3> <p>Random cropping is implemented as a form of data augmentation. We select only the fifth image from each sequence for training, as it contains the binary mask information (0 to 1). </p> <p>We applied a total of 25 different methods of cropping and the four labeled green are the most effective ones: <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2497266%2F782ddf910435c2bca436c491cfae2869%2Fcropping%20methods.png?generation=1692827919871994&amp;alt=media" alt="cropping methods"></p> <p>The following are the training results of each method: <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2497266%2Ff6aba1d97ceee02ef0296aebd4ee3e82%2Fcropping%20results.png?generation=1692827970836162&amp;alt=media" alt="cropping results"></p> <p>Some of the blank data is due to the fact that during training, we initially didn’t head in the right direction, so the experiment didn’t continue on those methods. In the end, we opted for the seventh method: Randomized square cropping with at least one complete contrail and the size of the cropped image is always greater than 128 pixels. The cropped images could be proportionally scaled as needed. For instance, we could expand them to 512 pixels by random cropping images greater than 256 pixels and then resizing them. Likewise, if we wanted to increase the image size to 1024 pixels, we followed a similar process by random cropping images larger than or equal to 512 pixels and resizing them to 1024 pixels. </p> <h3>3.3.2 EfficientNet with K-Fold Validation and Dice Coefficient</h3> <p>The model is trained using the EfficientNet architecture. The K-Fold cross-validation is employed to assess the performance of the model across different subsets of the training dataset and to ensure the robustness and generalization of the model. For loss calculation, the Dice coefficient function is used, which is useful for segmentation tasks.</p> <h3>3.3.3 Threshold Selection for Masking</h3> <p>After training the model with all the fifth images from each sequence, the next step is to apply the trained model to the rest of the images in each sequence (the first to fourth and sixth to eighth) to create the masks. The most creative part of our solution is to determine an appropriate threshold (or confidence) within the range from 0 to 1 for masking. This threshold is based on the distribution of the fifth image of each sequence. 
The following are the steps in detail:</p> <ul> <li>Apply the trained EfficientNet models to the first to fourth and sixth to eighth images in each sequence;</li> <li>Generate binary probability (ranging from 0 to 1) based on the model’s predictions;</li> <li>Determine an optimal threshold within the 0-1 range by analyzing the distribution of the fifth image;</li> <li>Apply the selected threshold to the rest of the images (the first to fourth and sixth to eighth).</li> </ul> <p>This approach leverages the power of our EfficientNet models and ensures that the models can better identify and retain the relevant images. </p> <p>The training results are illustrated in the following two figures. The left figure displays the distribution of Dice coefficients obtained when training exclusively with the fifth image of each sequence. Conversely, the right figure showcases the distribution of Dice coefficients when training with additional images, specifically the first to fourth and sixth to eighth images within each sequence. Remarkably, these two distributions exhibit a remarkable similarity, enabling us to determine a suitable threshold based on the distribution derived from the fifth image.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2497266%2F38af041ab03c025ae9fb897aaee999c3%2FDice%20Coefficient%20Distribution%20.png?generation=1692828079636644&amp;alt=media" alt="dice coefficient distribution"><br> To assess the accuracy of our approach, we introduced an error metric. If the fifth image was not correctly labeled, it was counted as an error. Subsequently, we calculated the proportion of results that contained errors. Our statistical analysis revealed that a Dice coefficient exceeding 0.5275080198049544 (approximately equal to 0.52 in our case) reliably indicated utility. Therefore, the threshold was set at 0.52, ensuring that the chosen threshold effectively distinguishes valuable information in the images. </p> <h1>4. Sources</h1> <ul> <li><a href="https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html" target="_blank">EfficientNet: Improving Accuracy and Efficiency through AutoML and Model Scaling</a></li> <li><a href="https://eumetrain.org/sites/default/files/2020-05/RGB_recipes.pdf" target="_blank">Compilation of RGB Recipes</a></li> </ul>
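<p>As a supplementary illustration of the threshold-selection step in 3.3.3, the sketch below sweeps a probability threshold and scores the global Dice coefficient; it is a generic example rather than the authors' exact procedure, which derived the ~0.52 value from the fifth-frame distribution.</p> <pre><code>import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Global Dice over binary masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)

def best_threshold(probs, true_mask, candidates=np.arange(0.30, 0.71, 0.02)):
    """Return the (threshold, Dice) pair that maximizes the global Dice score."""
    scores = [(t, dice_coefficient(probs > t, true_mask)) for t in candidates]
    return max(scores, key=lambda s: s[1])

probs = np.random.rand(4, 256, 256)                # toy predicted probabilities
truth = np.random.rand(4, 256, 256) > 0.95         # toy ground-truth masks
print(best_threshold(probs, truth))
</code></pre>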
Google Research - Identify Contrails to Reduce Global Warming
11th Place Solution Write-up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi thanks for your questions. </p> <blockquote> <p>What was the secret behind the 0.68+ timm-resnest200e models? It is impressive how high you scored with this backbone. Was the long training run?</p> </blockquote> <p>I'd say was due to the multi-stage training and the training setup (balanced dataloader, aggresive augs on pseudo to avoid overfit) </p> <p>In general by multi-stage here we mean</p> <ul> <li>i) train with mixed loss and t=5 image only for 50-80 epochs</li> <li>ii) finetune with pseudo images (t=1-4, 6-8) and dice loss for 20-50 epochs </li> <li>iii) continue finetuning for 10-20 epochs more with SWA (disable mixup/cutmix)</li> </ul> <p>to have more diversity each of us used the above as guideline with some variants on number of epochs, number of stages, pseudo slices to include etc - particularly in my runs I pretrained first with pseudo and finetuned with t=5 only, dice loss and SWA. Also, in some exp (ii) and (iii) combined to a single one.</p> <p>ps: the model you are referring has been trained by <a href="https://www.kaggle.com/yousof9" target="_blank">Yousef Rabi</a>, he may add some details if I missed anything</p> <blockquote> <p>SWA</p> </blockquote> <p>well SWA helped a lot not only to improve scores (up to ~ 0.01 in some cases) but also to stabilise/smooth loss curves.<br> For the implementation we adopt it from here <br> <a href="https://github.com/izmailovpavel/contrib_swa_examples" target="_blank">repo</a><br> <a href="https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch" target="_blank">blogpost</a></p> <blockquote> <p>TTA d4</p> </blockquote> <p><code>d4_transform --&gt; (flips + rotation 0, 90, 180, 270)</code><br> see here <a href="https://github.com/qubvel/ttach" target="_blank">qubvel/ttach</a></p> <h2><strong>Summary [TLDR]</strong></h2> <ul> <li>Weighted ensemble (optimal weights by Optuna)</li> <li>Multi-stage long training runs (70-100 epochs)</li> <li>Pseudo-labeling (time slices = 1-4, 6-8)</li> <li>SWA [3]</li> <li>TTA d4 / FlipTransforms [4]</li> <li>Image size x2 </li> </ul> <h2><strong>Modelling Approach</strong></h2> <p>We used <code>human_pixel_masks</code> as ground truth. We tried to incorporate <code>human_individual_labels</code> in different ways, but couldn’t improve using them. We didn’t try to average them and train on the soft labels. Our ensemble includes a model that calculates the losses on all individual masks and includes the one with the largest loss as an aux loss.</p> <p>We tried to improve a single model using temporal information and also couldn’t improve over 2d setup, so we moved to create a diverse ensemble using diff encoders and decoders. We used different CNNs and transformers as backbones and we also had models with a transformer decoder (UNetFormer).</p> <p>Many experiments conducted to utilise the additional data in the form of additional time periods so we incorporate that information in at least some way if not directly modelling for it. 
What ended up working for the pseudo setup in terms of both improving the single model and adding to the ensemble was the following:</p> <ul> <li>Balance the batch to continue ½ pseudo ½ real examples.</li> <li>Apply heavy augmentation on the pseudo data and light to normal augmentation on real data.</li> <li>Mixup the pseudo examples with the real examples.</li> </ul> <h2><strong>Training setup</strong></h2> <ul> <li>Optimizer: AdamW </li> <li>LR: 1.0e-3 or 1.0e-4</li> <li>Scheduler: Cosine Decay with 2 epochs warmup.</li> <li>Augs: Flips / RandomResizedCrop / RandAugment / Mixup / Mosaic</li> <li>Mixed loss: BCE / Dice / Focal </li> </ul> <h2><strong>CV strategy</strong></h2> <p>For most of our experiments we used the split provided by the hosts, i.e. train on full train data and evaluate on <code>validation.csv</code>. We did earlier some experiments with KFold but OOF CV was overestimated and the CV scores on the holdout (validation.csv) weren’t good. </p> <h4>Evaluation</h4> <p>We used 2 variants for evaluation against ground truth labels (256x256): <br> i) Model outputs x2 size --&gt; <code>interpolate(256, mode='nearest')</code> --&gt; calculate Dice score <br> ii) Model outputs directly 256 size </p> <p>PS: Most of our experiments used <code>(i)</code>, although we believe 2nd setup without interpolation is more reliable </p> <h2><strong>Architectures / Backbones</strong></h2> <p>Final ensemble </p> <h4><code>CV: 0.7026</code> / <code>Public LB: 0.71868</code> / <code>Private LB: 0.71059</code></h4> <table> <thead> <tr> <th>Backbone</th> <th>Architecture</th> <th>CV</th> <th>TTA</th> <th>Pseudo</th> <th>Mixup</th> </tr> </thead> <tbody> <tr> <td>efficientnet-b7</td> <td>Unet (SMP)</td> <td>0.6428</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>tf_efficientnet_b8</td> <td>Unet++ (Timm)</td> <td>0.6704</td> <td>No</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>tf_efficientnet_b8</td> <td>Unet++ (Timm)</td> <td>0.6752</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>maxvit_small_tf_512</td> <td>Unet (Timm)</td> <td>0.6677</td> <td>No</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>convnext_base</td> <td>Unet++ (Timm)</td> <td>0.6618</td> <td>No</td> <td>Yes</td> <td>Yes</td> </tr> <tr> <td>convnext_large</td> <td>Unet (Timm)</td> <td>0.6670</td> <td>No</td> <td>Yes</td> <td>Yes</td> </tr> <tr> <td>timm-resnest200e</td> <td>Unet</td> <td>0.6869</td> <td>Yes</td> <td>No</td> <td>No</td> </tr> <tr> <td>timm-resnest200e</td> <td>Unet</td> <td>0.6853</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>timm-resnest200e</td> <td>Unet</td> <td>0.6885</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> </tr> <tr> <td>convnext_large_384_in22ft1k</td> <td>UnetFormer</td> <td>0.6788</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>tf_efficientnet_b6</td> <td>UnetFormer</td> <td>0.6532</td> <td>No</td> <td>No</td> <td>No</td> </tr> <tr> <td>eva02_large_patch14_448</td> <td>Unet</td> <td>0.6542</td> <td>Yes</td> <td>No</td> <td>No</td> </tr> <tr> <td>convnext_large_384_in22ft1k</td> <td>UnetFormer</td> <td>0.6841</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr> </tbody> </table> <h4>The selected ensemble was our <strong>best CV</strong> / <strong>best Public LB (6th place)</strong> and the <strong>best PVT LB (11th place)</strong></h4> <h2>CV-LB Plot</h2> <p><img src="https://raw.githubusercontent.com/i-mein/Kaggle_images/main/2023-08/CV-PVT.png" alt=""></p> <p><img src="https://raw.githubusercontent.com/i-mein/Kaggle_images/main/2023-08/CV-Public.png" alt=""></p> 
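<p>The "optimal weights by Optuna" item from the summary refers to searching the blend weights of the ensemble above against the validation metric. Here is a hedged sketch of how such a search can look; the objective, bounds, and fixed 0.5 threshold are assumptions, not the team's exact setup.</p> <pre><code>import numpy as np
import optuna

def make_objective(oof_preds, y_true, dice_fn):
    """oof_preds: list of per-model probability arrays on the validation set."""
    def objective(trial):
        w = np.array([trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(len(oof_preds))])
        w = w / (w.sum() + 1e-12)                       # normalize so the weights sum to 1
        blend = sum(wi * p for wi, p in zip(w, oof_preds))
        return dice_fn(y_true, blend > 0.5)             # global Dice at a fixed threshold
    return objective

# study = optuna.create_study(direction="maximize")
# study.optimize(make_objective(oof_preds, y_true, dice_fn), n_trials=200)
# best_weights = [study.best_params[f"w{i}"] for i in range(len(oof_preds))]
</code></pre>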
<h2><strong>Things didn't work</strong></h2> <ul> <li>Use of ConvLSTM, UTAE, 3D-Unets to leverage information from other time slices </li> <li>Post-processing with t-2, t-1, t slices </li> <li>diff image resolutions (384, 768, 1024) </li> </ul> <h2><strong>Acknowledgements</strong></h2> <p>We would like to thank hosts and Kaggle team for organizing such an interesting research competition. We are also grateful to all kagglers that share their ideas in discussions and notebooks. </p> <h2>Libraries</h2> <p>[1] <a href="https://github.com/huggingface/pytorch-image-models" target="_blank">Timm</a> <br> [2] <a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">segmentation_models.pytorch (SMP)</a> <br> [3] <a href="https://github.com/izmailovpavel/contrib_swa_examples" target="_blank">SWA implementation</a><br> [4] <a href="https://github.com/qubvel/ttach" target="_blank">ttach</a></p> <h2><strong>Team GIRYN</strong></h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2596066%2Fc292aef766c287db0c594c6ed07ababb%2FScreenshot%202023-08-18%20at%2015.39.37.png?generation=1692362680181936&amp;alt=media" alt=""></p> <ul> <li><a href="https://www.kaggle.com/rohitsingh9990" target="_blank">Rohit Singh (Dracarys)</a></li> <li><a href="https://www.kaggle.com/phoenix9032" target="_blank">Nihjar (doomsday)</a> </li> <li><a href="https://www.kaggle.com/imeintanis" target="_blank">Ioannis Meintanis</a></li> <li><a href="https://www.kaggle.com/yousof9" target="_blank">Yousef Rabi</a></li> <li><a href="https://www.kaggle.com/titericz" target="_blank">Giba</a></li> </ul>
Google Research - Identify Contrails to Reduce Global Warming
13th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>13th Place Solution</h1> <p>First of all, we would like to thank the organizers and the Kaggle team for hosting the competition.</p> <h2>Overview</h2> <p>We ensembled a total of 20 models, including 2.5D and 2D models.</p> <p>We used the Nelder-Mead approach to determine the ensemble weights with a constant threshold.</p> <p>I think taking an average of the individual labels is the best way to improve the score.</p> <p>The following shows Masaya's part in brief.</p> <h2>Masaya's part</h2> <h3>Data preprocessing</h3> <p>I prepared 2 sets of data for evaluation: validation data only, and cross-validation.<br> This is because I also wanted to use the validation data as training data, even though it was very correlated with the Public LB.</p> <p>The 2 most important things that raised my score were:</p> <ul> <li>label data averaged over the individual labels.</li> <li>image resolution set to 512px or higher.</li> </ul> <h3>Model</h3> <ul> <li>EfficientNetb7-UNet</li> <li>EfficientNetb7-Unet++</li> <li>MiTb5-UNet</li> </ul> <h3>Training</h3> <ul> <li>amp (Sometimes the loss gets nan. Why?)</li> <li>Optimizer: AdamW<ul> <li>No weight decay</li></ul></li> <li>Scheduler: CosineAnnealing with warm up<ul> <li>initial lr: 3e-4 ~ 5e-4</li></ul></li> <li>Loss: SoftBCE</li> <li>CV strategy<ul> <li>hold-out validation</li> <li>StratifiedKFold<ul> <li>train data or validation data</li> <li>including mask</li></ul></li></ul></li> <li>Augmentation<ul> <li>hflip and vflip (long epochs worked)</li> <li>only ShiftScaleRotate</li> <li>none</li></ul></li> </ul> <h3>Postprocess</h3> <p>The threshold varied a lot depending on how the CV was cut. <br> So ensembling by a simple average became less accurate. <br> Therefore, we fixed the threshold and determined the weights that maximized the validation score using Nelder-Mead.</p> <h3>What did not work for me</h3> <ul> <li>Pseudo labels</li> <li>training on only positive images</li> <li>screening for positive labels</li> <li>2nd stage model with the segmentation prediction of the 1st stage model</li> <li>tta</li> <li>other band images</li> </ul>
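<p>To illustrate the Nelder-Mead weight search with a fixed threshold mentioned in the Postprocess section, here is a minimal sketch built on scipy; it assumes per-model validation probability maps are available in memory and is not the team's actual code.</p> <pre><code>import numpy as np
from scipy.optimize import minimize

def negative_dice(weights, preds, y_true, threshold=0.5, eps=1e-7):
    """Negative global Dice of the weighted blend, binarized at a fixed threshold."""
    w = np.clip(weights, 0, None)
    w = w / (w.sum() + eps)
    blend = sum(wi * p for wi, p in zip(w, preds)) > threshold
    inter = np.logical_and(blend, y_true).sum()
    return -(2.0 * inter + eps) / (blend.sum() + y_true.sum() + eps)

# preds: list of per-model probability maps on validation, y_true: binary ground-truth masks
# result = minimize(negative_dice, x0=np.ones(len(preds)) / len(preds),
#                   args=(preds, y_true), method="Nelder-Mead")
# weights = np.clip(result.x, 0, None); weights /= weights.sum()
</code></pre>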
Google Research - Identify Contrails to Reduce Global Warming
14th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for your comment and Congratulations on 1st prize ! </p> <p>We trained B7 together as a team and that too in last 1-2 days. LR was important parameter here. <br> We missed 0.5 pixel shift, I guess it would take fewer epochs with correct pixel shift. <br> All we needed was to set threshold to 0.4 instead of 0.5 to be in the money/gold. </p> <p><a href="https://postimg.cc/ftFVr4qq" target="_blank"><img src="https://i.postimg.cc/RhCwq47m/contrails2.jpg" alt="contrails2.jpg"></a></p> <p>Hi <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>, <br> Thanks for your comment, prompted me to run an experiment.<br> I decided to run one fold of timm-efficientnet-b7 for comparison with and without pixel correction (no other change in code except the pixel correction part)</p> <ol> <li><p>Results with pixel correction<br> cv: 0.7087 private: 0.69119 public: 0.67749 (Best score @ epoch 92)</p></li> <li><p>Previous timm-eff-b7 without pixel correction: <br> cv: 0.7067 private: 0.69131 public: 0.69833 (Best score @ epoch 89)</p></li> </ol> <p>Just with this change, there isn't significant difference at least for this one, <a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a> might have more to say with another experiment</p> <p>Thank you Kaggle and Google Research for hosting this competition. </p> <p>Here's our solution in brief. </p> <p><strong>Preprocessing / Dataset</strong></p> <ul> <li><p>384 and 512 Image size - Bilinear resize images </p></li> <li><p>Augmentations (Following two slightly different augmentations were part of our selected ensemble:) </p></li> </ul> <pre><code>def (): transform = albu.( [ albu.([ albu.(p = ), albu.(p = ), ], p = ), albu.(limit = , p = ), albu.(p = ), albu.( mean = (, , ), std = (, , ), max_pixel_value = ), (transpose_mask = True), ], p = , is_check_shapes=False ) return transform </code></pre> <pre><code>self.transform = { : A.Compose([ A.ShiftScaleRotate(=0.20, =0, =0.1, =0.25, =cv2.BORDER_CONSTANT, =0), A.GridDistortion(=0.25), A.HorizontalFlip(=0.25), A.VerticalFlip(=0.25), A.Resize(G.img_size), A.RandomCrop(G.crop_size), A.Normalize(=CFG.pp_params[],=CFG.pp_params[]), A.pytorch.transforms.ToTensorV2(=) ], =1.0), } </code></pre> <p><strong>Validation</strong> (Two different CV schemes in ensemble) </p> <ul> <li>4 fold CV with geographical binning i.e similar images were kept in same fold<br> Also sampling with no duplicates (excerpt from dataset class)</li> </ul> <pre><code>.dup_ids = .df.dup_id.unique() (): dup_idx = .dup_ids[idx] rows = .df[.df.dup_id == dup_idx] … </code></pre> <ul> <li>Random 5 fold split on training and given validation dataset. </li> </ul> <p><strong>Things that worked</strong></p> <ul> <li>Soft labels - Instead of using human_pixel_masks, we used average of human_individual_masks from the training set and for validation we kept human_pixel_masks. </li> <li>Loss function - We used SoftBCEWithLogitsLoss with soft labels. 
A combination of dice loss and BCE loss also worked. </li> <li>Usual Ash color images</li> </ul> <p><strong>Models (UNet)</strong> <br> <a href="https://postimg.cc/rKVRrBfX" target="_blank"><img src="https://i.postimg.cc/CLDGWYD1/models-table.jpg" alt="models-table.jpg"></a><br> Cosine scheduler with warm-up = 0.02 or 0.03 for all models</p> <p><strong>Thresholds</strong><br> A threshold change could have landed us in the money zone. In post-deadline submissions, a threshold of 0.45 was best (private 0.713x, public 0.712x), but we had selected a threshold of 0.5. </p> <p><strong>Things that didn’t work for us</strong></p> <ul> <li>Pseudo labels (PL)</li> <li>Test time augmentation</li> <li>Training with other bands, frames</li> <li>Weighted loss</li> <li>Adjusting decoder channels</li> <li>Various other backbones</li> </ul> <p><strong>Partial code available here</strong> <br> <a href="https://github.com/furu-kaggle/ICRGW" target="_blank">https://github.com/furu-kaggle/ICRGW</a></p> <hr> <p>A big thank you to my amazing teammates <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a>, <a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a> and <a href="https://www.kaggle.com/kunihikofurugori" target="_blank">@kunihikofurugori</a> 🙏</p>
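<p>As an illustration of the soft-label setup listed above, here is a minimal sketch of how the training target and loss could be wired together (the file layout, mask shape and helper name are assumptions; the write-up only states that the average of <code>human_individual_masks</code> was used together with <code>SoftBCEWithLogitsLoss</code>):</p> <pre><code>import numpy as np
import torch
from segmentation_models_pytorch.losses import SoftBCEWithLogitsLoss

def load_soft_target(record_dir):
    # human_individual_masks.npy is assumed to be (H, W, 1, n_annotators)
    masks = np.load(f"{record_dir}/human_individual_masks.npy")
    soft = masks.astype(np.float32).mean(axis=-1)[..., 0]  # fraction of annotators per pixel
    return torch.from_numpy(soft).unsqueeze(0)             # (1, H, W) target in [0, 1]

criterion = SoftBCEWithLogitsLoss()  # accepts soft (non-binary) targets
# loss = criterion(model(ash_images), soft_targets)
</code></pre>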
Google Research - Identify Contrails to Reduce Global Warming
152nd solution (single model)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations, all winners.</p> <p>And thank you for holding a competition with a lot of learning. This was my first time doing a segmentation task.</p> <p>Not great results, but public notebooks helped me learn the basics.</p> <p>I made various models with <a href="https://www.kaggle.com/stgkrtua" target="_blank">@stgkrtua</a> , but I made a mistake in the ensemble code, so the result was a single model :)</p> <p>A simple solution is given below.</p> <ul> <li>model<ul> <li><code>Unet</code> (segmentation_models_pytorch)</li> <li>backbone<ul> <li><code>'timm-resnest26d'</code></li></ul></li></ul></li> <li>image size<ul> <li>512*512</li></ul></li> <li>fold<ul> <li>5</li></ul></li> <li>use validation data on training</li> <li>Ash color</li> <li>Use only one target frame</li> <li>threshold<ul> <li>0.3</li></ul></li> <li>CV 0.688, Public0.66273, Private0.66321</li> </ul> <h3>If the ensemble was successful</h3> <ul> <li>Public 0.67199, Private 0.67366(119th not medal…)</li> </ul> <table> <thead> <tr> <th>model</th> <th>backbone</th> <th>img_size</th> <th>weight</th> </tr> </thead> <tbody> <tr> <td>Unet</td> <td>timm-resnest26d</td> <td>512</td> <td>0.4</td> </tr> <tr> <td>UnetPlusPlus</td> <td>timm-efficientnet-b2</td> <td>512</td> <td>0.1</td> </tr> <tr> <td>DeepLabV3Plus</td> <td>timm-efficientnet-b4</td> <td>512</td> <td>0.15</td> </tr> <tr> <td>Unet</td> <td>tu-tf_efficientnetv2_s</td> <td>512</td> <td>0.15</td> </tr> <tr> <td>SegModel (Details below)</td> <td>tf_efficientnet_b6</td> <td>512</td> <td>0.2</td> </tr> </tbody> </table> <pre><code> (nn.Module): (): ().__init__() self.encoder = Encoder(CFG) self.model_name = CFG.model_name self.out_indices = CFG.out_indices channel_nums = self.get_channel_nums() self.decoder = Decoder(CFG, channel_nums) self.head = nn.Sequential( nn.Conv2d( channel_nums[-], CFG.out_channels, kernel_size=, stride=, padding=, ), ) (): self.model_name == : channel_nums = [, , , , ] self.model_name == : channel_nums = [, , , , ] self.model_name == : channel_nums = [, , , , ] self.model_name == : channel_nums = [, , , , ] self.model_name == : channel_nums = [, , , , ] self.model_name == : channel_nums = [, , , ] self.model_name == : channel_nums = [, , , ] self.model_name == : channel_nums = [, , , ] channel_nums = channel_nums[-(self.out_indices):] channel_nums (): skip_connection_list = self.encoder(img) emb = self.decoder(skip_connection_list) output = self.head(emb) output </code></pre> <h3>not work</h3> <ul> <li>Input a total of 9 channels including the front and rear frames</li> <li>Input 9ch as 3ch using 1x1Conv2d</li> </ul> <p>weights were determined manually in order of CV</p>
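<p>For reference, a minimal sketch of the weighted ensemble described in the table above (function and key names are assumptions; each model is assumed to produce sigmoid probability maps of shape (N, 1, H, W), and 0.3 is the threshold used for the single model):</p> <pre><code>import torch

def weighted_ensemble(probs, weights, threshold=0.3):
    """Blend per-model sigmoid probability maps (N, 1, H, W) and binarize."""
    blend = sum(w * probs[name] for name, w in weights.items())
    return (blend &gt; threshold).to(torch.uint8)

# weights from the table above; each entry of `probs` comes from one model's inference
weights = {"unet_resnest26d": 0.40, "unetpp_effb2": 0.10, "deeplab_effb4": 0.15,
           "unet_effv2s": 0.15, "segmodel_effb6": 0.20}
</code></pre>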
Google Research - Identify Contrails to Reduce Global Warming
15th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry for my late reply.<br> Yes, pseudo labels are just added to the dataset.</p> <p>Sorry for my late reply.<br> I didn't do anything special to resize image, just used <code>torch.nn.functional.interpolate</code> with <code>mode="bilinear"</code>.</p> <p>The score of each image sizes are as follows.</p> <ul> <li>512x512: 0.6885</li> <li>768x768: 0.6906 and 0.6921 (two different seeds)</li> <li>1024x1024: 0.6901</li> </ul> <p>Thank you for organizers hosting the competition. I also appreciate my team mates cooperation.</p> <h1>Summary</h1> <p>Our solution has nothing special, it's just ensemble of 4 EfficientNetV2 with different image sizes.</p> <ul> <li>2x l with 768x768 image size (different seeds)</li> <li>1x l with 1024x1024 image size</li> <li>1x s with 1536x1536 image size</li> </ul> <p>First 3 models give similar score around 0.690 ~ 0.692 for <code>validation</code> folder images and the last gives 0.685. Ensemble score is 0.6936.</p> <h1>What worked</h1> <ul> <li>Augmentations<ul> <li>HV Flip</li> <li>Rotate</li> <li>Random scale then crop</li></ul></li> <li>TTA<ul> <li>HV Flip</li> <li>Rotate</li></ul></li> <li>set <code>"drop_path_rate": 0.2</code> and <code>"drop_rate": 0.2</code></li> <li>Adding time steps 3 and 5 as pseudo labels</li> <li>Focal loss</li> </ul> <p>But I guess improvements from above things are pretty small compared with addressing label misalignment and using individual human annotated mask as new labels as mentioned other discussions (I'm plan to do experiment later).</p> <h1>What not worked</h1> <ul> <li>Other augmentations<ul> <li>ColorJitter</li> <li>HueSaturationValue</li> <li>RandomBrightnessContrast</li> <li>RandomGamma</li> <li>RandomFog</li> <li>RandomShadow</li> <li>CoarseDropout</li> <li>MixUp</li></ul></li> <li>Adding more time steps as pseudo labels</li> <li>Adding pseudo label rounds</li> <li>I'm really focused on training 2.5D and 3D models, but nothing worked at all.<ul> <li>Just 3D model (e.g. Resnet3D)</li> <li>3D encoder + 2.5D decoder</li> <li>Input other time frames predictions as additional channels</li> <li>Stack 2 Unets, first predicts segmentation masks for multiple time frames, second takes those masks as input by stacking channel dimensions</li></ul></li> </ul> <p>For the first question, I haven't trained on individual annotations. Just ground truth label is used.</p> <p>For the second, I'm not sure why they are worked but I guess</p> <ul> <li>Pseudo label mitigates label issue because pseudo label may not be misaligned.</li> <li>Also large image size may alleviates label issue because the effect of label misalignment is maybe smaller for large masks compared with small one.</li> </ul>
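<p>As a concrete illustration of the flip/rotate TTA listed under "What worked", here is a minimal sketch (the exact transform set and the averaging in probability space are assumptions):</p> <pre><code>import torch

@torch.no_grad()
def tta_predict(model, x):
    """Average sigmoid predictions over 90-degree rotations and horizontal flips.
    x: (N, C, H, W); every transform is undone before averaging."""
    preds = []
    for k in range(4):                              # 0, 90, 180, 270 degrees
        for flip in (False, True):
            xt = torch.rot90(x, k, dims=(-2, -1))
            if flip:
                xt = torch.flip(xt, dims=(-1,))
            p = torch.sigmoid(model(xt))
            if flip:
                p = torch.flip(p, dims=(-1,))
            preds.append(torch.rot90(p, -k, dims=(-2, -1)))  # undo the rotation
    return torch.stack(preds).mean(dim=0)
</code></pre>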
Google Research - Identify Contrails to Reduce Global Warming
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! I don't remember exactly now, but I think I came up with an hypothesis first that the reason for augmentation not working is the shifted labels. Thought about 1 pixel shift and then discover the 0.5 pixel with the visualization presented above. I thought there must be a special reason that rot90 and flip augmentation do not work.</p> <p>I modified the segmentation-models-pytorch U-Net code.<br> This change is very simple but it would be interesting if ChatGPT can write models. Currently it is much easier for me to code by myself than thinking about the prompt to generate the correct code.</p> <p>Thanks! Sounds like a very nice intern program. Understanding the data is important, but it is also difficult for deeplearning problems; often we only see how annotators are inconsistent. In this problem, it probably worked because I also had a hypothesis and visualize the data with a specific purpose. </p> <p>Thanks! 512 x 4 panel was better than 512 single time. I wonder if was competitive compared to other teams 2.5D and 3D model.</p> <p>Yes, I use multiple A100 GPUs in the last week to train the final models with 1024-size image.<br> Until then, I use my local machine with one RTX3090 GPU and 1TB M2 nvme SSD (3500 MB/s bandwidth). SSD is also important to load large data quickly for many times (many epochs)</p> <p>I think so, and that seems easier. Is that what TASCJ did, because he did not write about shifting back 0.5 pixel in method 2? Somehow I didn't think about that during the competition.</p> <p>You are welcome!</p> <p>Thanks! I apply torch.nn.functional.grid_sample function to y. It returns interpolated values on the given grid. I hope it will be clear when I finish with my public notebook.</p> <p>Yes! I tried that too and both works equally well. I started from 256x256 y_sym and I used 3x3 at that time, and I continue to do that way.</p> <p>I thank the organizer and Kaggle for hosting this interesting competition. I also thank many Kagglers who wrote solutions in past competitions, without which I could not compete for segmentation tasks.</p> <h2>Overview</h2> <ul> <li>U-Net with MaxViT encoder</li> <li>Input is the standard ash color image</li> <li>Target y is soft; the average of individual masks</li> <li>BCE (binary cross entropy) loss</li> <li>U-Net trained for symmetrized label, y_sym, which is the label shifted by 0.5 pixels</li> <li>Shift-scale-rotate augmentation</li> <li>Additional tiny convolution trained to map y_sym to y</li> </ul> <h2>Rotation augmentation</h2> <p>Augmentation is extremely important in this competition to suppress overfit and train longer. With rotation augmentation I could train 40-50 epochs, compared to 10-20 epochs without augmentation. The test-time augmentation (TTA) is also very effective, adding ~0.006 to the score for free.</p> <p>This is basic but does not work as usual in this competition because the label is shifted 0.5 pixels to the right and bottom with respect to the contrails. 
Random rotation augmentation would shift the label to random directions and make the model impossible to learn the right-bottom aligned ground-truth labels.</p> <h2>Discover the 0.5-pixel shift</h2> <p>I was very confused when the score dropped with flip and rot90 (multiples of 90° rotation) augmentations. In physics, what symmetry the system has is the first thing to consider, and although westerly winds or Coriolis force could make the physics asymmetric under flip or rotation, I could not believe that those affect the contrail detection. I visualized a prediction with a model trained without augmentation and applied it to a 180°-rotated input image to see why the augmentation did not work.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F954117%2Ffe56c0bee942eb6591adabafbe0daf39%2Fcontrail_label_shift.png?generation=1691669316963209&amp;alt=media" alt=""> </p> <p>The blue-green-red stripe pattern from top to bottom shows false negative, true positive, and false positive, which means that the rotated label is shifted up compared to the predicted label; that is, the original labels are shifted down compared to the contrails. I also observed the same pattern for left and right.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F954117%2Fbb92508e8a6b1d8392819ac63cb36cd6%2Fcontrail_fish.png?generation=1691669701057157&amp;alt=media" alt=""></p> <p>If the labels are shifted 0.5 pixels to top left (right panel in the figure above), then the rotated label is consistent with the original label in terms of the contrail-label offset.</p> <ul> <li>I first shift the label by 0.5 pixels and create y_sym</li> <li>y_sym has size 512×512 and is sampled from y on a shifted regular grid with bilinear interpolation</li> <li>U-Net is trained with y_sym </li> <li>Additional tiny convolution of 5x5 with stride 2 is trained to learn the mapping from y_sym to right-bottom aligned y, using 5% of the data when the augmentations are not applied randomly (note that y cannot be augmented).</li> <li>At inference time, y_sym is averaged for 8 patterns of rot90 and flip TTAs. The conv5x5 is applied after the y_sym.</li> </ul> <p>The augmentation is,</p> <pre><code>import albumentations as A A.Compose([ A.RandomRotate90(=1), A.HorizontalFlip(=0.5), A.ShiftScaleRotate(=30, =0.2) ]) </code></pre> <p>I refer to my public notebook for the details:</p> <p><a href="https://www.kaggle.com/junkoda/base-unet-model-for-the-1st-place" target="_blank">https://www.kaggle.com/junkoda/base-unet-model-for-the-1st-place</a></p> <h2>Models</h2> <p>The final prediction is the weighted mean of two models with a threshold ~0.45. I tuned the threshold and the weights using the validation set. Both models are U-Net using maxvit_tiny_tf_512.in1k as the encoder, but one uses single time t=4, and the other use four times t = 1 - 4. The input image size is 1024×1024 for both models.</p> <h2>Single-time model</h2> <ul> <li>Upscale the input image to 1024×1024</li> <li>Drop the final decoder layer of upscale and output 512×512</li> </ul> <h2>4-panel model</h2> <p>In order to use the time information, I tried 3D ResNet and ConvLSTM, but I was not able to make them work at all. 
The only thing I could do was to pack four 512×512 image at t=1,2,3,4 into one 1024×1024 image and hope that self attention in MaxViT look at different times.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F954117%2F291ed2fb89c1e4b492f0d5890ad06868%2Fimg4.png?generation=1691671228909061&amp;alt=media" alt=""></p> <p>In the U-Net, I only pass the quarter of the features (H/2, W/2) to the decoder, which corresponds to the t=4 quarter. The rest is the same as the single-time model. Concatenating images in the spacial direction appear in Kaggle once in a while. I remember <a href="https://www.kaggle.com/competitions/g2net-gravitational-wave-detection/discussion/275433" target="_blank">CPMP's solution</a> for the G2Net blackhole merger competition, in which he stacked 3 detector data horizontally rather than stacking them as 3 channels. </p> <p>Training details:</p> <ul> <li>Train 5 out of k=10 folds, using the out-of-fold to monitor the validation loss</li> <li>0.5 epochs of linear warmup to learning rate 8e-4, followed by cosine annealing</li> <li>batch size 4 × gradient accumulation 2</li> <li>AdamW with weight_decay = 0.01</li> <li>35 or 40 epochs of training</li> </ul> <pre><code>Scores CV Public Private input size Single time 512 0.697 0.707 0.712 Single time 1024 0.703 0.719 0.716 4-panel 1024 0.704 0.719 0.722 Ensemble 0.706 0.725 0.724 </code></pre> <p>All scores are for 5-fold × 8 TTA mean.</p> <h2>Things I was not able to do</h2> <ul> <li>I did not use pseudo labeling. I tried pseudo labeling at an early stage. It boosted the single model score a lot, but the benefit was unimpressive after 5-fold mean.</li> <li>I was not able to train large models. I tried MaxViT small and base, and many other models, but the improvement was unclear. I increased the input image size instead. This failure must be due to my insufficient experience. </li> <li>I used only positive data (dropping data with no positive labels) for model evaluation because it takes 1/2 time to train. However, I was not able to use it in the real prediction; it performs very bad for negative samples, and my classifier was not good enough to exclude such false negatives.</li> <li>Removing small masks is a basic technique in segmentation competition, but here, removing even 1-pixel clusters (connected components) was not a good idea. I tried to remove false positive clusters based on some cluster statistics (size, density, max density, etc), but U-Net was cleverer than my post processing.</li> </ul> <h2>Codes</h2> <ul> <li>Training: <a href="https://github.com/junkoda/kaggle_contrails_solution" target="_blank">https://github.com/junkoda/kaggle_contrails_solution</a></li> <li>Inference notebook: <a href="https://www.kaggle.com/code/junkoda/contrails-submit" target="_blank">https://www.kaggle.com/code/junkoda/contrails-submit</a></li> </ul> <p>Update 2023-08-20: Add links to the codes and minor english corrections.</p> <ol> <li><p>Yes. I thought 10% validation set is sufficiently large and additional 10% from 80% to 90% in the train might help compared to 5 fold. I didn't see a difference between 5 fold and 10 fold at least with small models though.</p></li> <li><p>The quarter is the upper-left quarter for t=4. That is what the U-Net decoder should get from skip connection to output segmentation mask at t=4.</p></li> <li><p>I assumed the upscale to 1024 is unnecessary and 512 is sufficient but I don't know. 
I didn't try it.</p></li> </ol> <p>I tried a 10 times smaller learning rate for the encoder and it improved the first 10 epochs nicely but underperformed at the end of 40 epochs. I also tried completely freezing the encoder, but that was not good at all. Training the encoder is necessary.</p> <p>There must be some learning rate for the encoder better than the single learning rate, but I could not find it and spent time on other issues.</p> <p>Thanks for the question, and I must agree that many readers have the same question. That's why I am going to write the notebook, and I hope it becomes clear with the code. I'll read my text again tomorrow and think how I can revise it, too.</p> <ul> <li>where do you put the convolution?<br> Yes, it is a head,</li> </ul> <pre><code>y_sym_pred = unet(x)
y_pred = conv5x5(y_sym_pred)
return y_sym_pred, y_pred
</code></pre> <p>but both y_sym_pred and y_pred are returned and both are compared to their labels. Maybe something strange or wrong is going on here for deep learning experts.</p> <ul> <li>when do you train?</li> </ul> <p>I always train y_sym_pred with y_sym, but I cannot train y_pred if augmentation is applied.</p> <p>loss = bce_loss(y_sym_pred, y_sym)</p> <ul> <li>what is 5%?</li> </ul> <p>I apply augmentation with probability 0.95, and do not with 0.05. 5% is the probability that the augmentation is not applied. When augmentation is not applied, I train both,</p> <p>loss = bce_loss(y_sym_pred, y_sym) + bce_loss(y_pred, y)</p> <p>(I am not accurate about the batch, I hope that is a technical detail)</p> <ul> <li>Why not just train with centered masks (y_sym) and then apply the inverse shift on the predictions?</li> </ul> <p>Yes, you can do that with 512x512 y_sym, but I was also working with 256x256 y_sym, and I used a similar method.</p> <ul> <li>Just to be sure I understood<br> Yes, that's right. conv5x5 is outside the standard TTA.</li> </ul> <p>I'm afraid I'm not the right person to ask about AutoML tools. They must be very efficient in real applications, but I've never used them. I think the effect is: if they are competitive (and probably they are in several areas), such competitions would not appear on Kaggle because there is little room to make a difference.</p> <p>To learn how to modify models, I guess we have to read code, rewrite it, and run it. Many simple code examples are available on the internet, and Kaggle also encourages publishing top solutions. If you like making something and seeing it work, you can make small models. I've created small CNN models and a mini ResNet model from public code.</p> <p>But writing your own special model is not always important; often that's too ambitious and unsuccessful. The "2.5D model" adopted by many top teams is rather an exception. There are often good public Kaggle codes/notebooks, and you can start from small improvements.</p> <p>Hi, thanks for checking it out. unet1024 is the single-time 1024x1024 model. The 4-panel model is in the vit4 directory.</p>
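<p>To make the y → y_sym resampling step more concrete, here is a minimal sketch of shifting a soft mask by half a pixel with <code>torch.nn.functional.grid_sample</code> (the shift sign, padding mode and output size are assumptions; the notebook and repository linked above are the reference implementation):</p> <pre><code>import torch
import torch.nn.functional as F

def shift_mask(y, dx=0.5, dy=0.5):
    """Bilinearly resample a (N, 1, H, W) soft mask at coordinates shifted by (dx, dy) pixels."""
    n, _, h, w = y.shape
    xs = torch.arange(w, dtype=y.dtype, device=y.device) + 0.5 + dx  # pixel-centre x coordinates
    ys = torch.arange(h, dtype=y.dtype, device=y.device) + 0.5 + dy
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack((2 * gx / w - 1, 2 * gy / h - 1), dim=-1)     # normalised for align_corners=False
    grid = grid.unsqueeze(0).expand(n, -1, -1, -1)
    return F.grid_sample(y, grid, mode="bilinear", padding_mode="zeros", align_corners=False)
</code></pre>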
Google Research - Identify Contrails to Reduce Global Warming
25th Place Solution for the Google Research Identify Contrails to Reduce Global Warming Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks. Well, actually, I didn’t conducted detailed comparison (which I should have done). However, I believe augmentation in this competition (without misalignment correction) has both positive and negative effect.</p> <p>In the negative effect, v/h flips and rotation surely introduces label noise, but after enough steps of training, the model would learn “expected contrail label” which is close to mean of flipped/rotated contrails.</p> <p>On the positive effect, the augmentation would inject model with certain amount of generalization as in the general context of machine learning. Additionally, the larger model with time frame information might even have chance to learn flipped/rotated direction, where the model could learn shifted labels correctly.</p> <p>What discussed above is just hypothesis, but considering some top solution says they conducted intense augmentation with larger models, I believe my hypothesis is somehow true.</p> <p>Well, it's nothing special. It is just as I wrote on the block diagram (3 layers of Conv3D + BN + ReLU). I think you can implement it by yourself.</p> <p><a href="https://www.kaggle.com/RB" target="_blank">@RB</a><br> Because I want to make the pixel value greater than 0.5 makes 1 (not 0.5). However, I believe it's not big difference to simply taking average, since setting proper threshold should gave us equivalent result.</p> <h2>Context</h2> <p>Business context: <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/overview" target="_blank">https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/data" target="_blank">https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/data</a></p> <h2>Overview of the Approach</h2> <p>My approach is simplified version of the 3D model found in the author's paper [1].</p> <ul> <li>extracting feature maps of each time frames in 2D backbone</li> <li>modulating feature map from neighboring time frame's feature map, and obtains modulated 2D feature maps (<strong>Temporal Feature Modulator</strong>)</li> <li>passed 2D feature maps to 2D decoder (U-Net)</li> </ul> <h2>Details of the submission</h2> <h3>Temporal Feature Modulator</h3> <p>First, I hypothesized that features on earlier layers are not contributed much to the final prediction, since contrails are drifted from frames to frames. So I come up with the idea of only applying temporal modulation in the later layers of feature maps.</p> <p>I tried some experiments with applying temporal modulation to earlier layers, and found only applying last layer gives the best performance. 
Note that the rest of the feature maps are just sliced by the current time frame (T=4).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4910466%2F99cce57c10157369fcc5c263ebfff4be%2Fcontrail.drawio.png?generation=1691723146332719&amp;alt=media" alt=""></p> <h3>Data processing &amp; Augmentations</h3> <p>In order for faster loading and saving disk space, I quantized false-color image to <code>uint8</code>.</p> <p>I did it based on the assumption that 'if humans label at <code>uint8</code> resolution, then training an AI model at that resolution would be sufficient'.</p> <p>As other participants are already discussed in publicly, any kind of geometric augmentations fails to improve models' performance (after competition, it turns ouf to be because of label misalignment though).</p> <p>So I just applied light augmentations of <code>RandomResizedCrop</code> and <code>HorizontalFlip</code>.</p> <pre><code>cfg.geometric_transform = A.Compose( [ A.RandomResizedCrop(height=, width=, scale=(, ), p=), A.HorizontalFlip(p=), ] ) </code></pre> <h3>CV strategy</h3> <p>My CV strategy is very naive. I only trained by images on <code>training</code> frames and evaluated on <code>validation</code> images.</p> <p>In the final submission, it might have been advisable to use all the images for training or to take the average across folds, but considering the remaining time until the competition's end, it was not realistic, so I gave up.</p> <h3>Ensemble &amp; TTA</h3> <p>The ensemble policy is:</p> <ul> <li>TTA: <code>hflip</code> (x2)</li> <li>seed ensemble: x4</li> <li>different backbones and input resolutions: x3</li> </ul> <p>I tested several backbones including recently published ones, and find EfficientNet and PVTv2 as ensemble seeds have good training time &amp; performance tradeoff.</p> <p>The final submission was composed of 24 models in total.</p> <table> <thead> <tr> <th>backbone</th> <th>used time frames</th> <th>input resolution</th> <th>TTA</th> <th>random seed</th> </tr> </thead> <tbody> <tr> <td>efficientnet_b3</td> <td>[1, 2, 3, 4]</td> <td>768</td> <td>hflip (x2)</td> <td>x4</td> </tr> <tr> <td>pvt_v2_b3</td> <td>[1, 2, 3, 4]</td> <td>640</td> <td>hflip (x2)</td> <td>x4</td> </tr> <tr> <td>pvt_v2_b5</td> <td>[1, 2, 3, 4]</td> <td>512</td> <td>hflip (x2)</td> <td>x4</td> </tr> </tbody> </table> <p>The CV score on validation set is <strong>0.692</strong>, and private LB score is <strong>0.700</strong>.</p> <h3>Other findings to be noted</h3> <h4>Technical tips of training with larger resolutions</h4> <p>In my moderate machine environment (RTX 3090 Ti x1), Training on higher resolution tend to get CUDA memory overflow error, or result in too small batch sizes. So I used gradient check-pointing and FP16 training to overcome this issue.</p> <h4>Geometric Distribution</h4> <p>Using metadata, I found the geometric distribution of contrails are very different between training and validation sets.</p> <p>As the authors state in paper [1], I believe there are geographical observation points that is only appeared in the training data, and not included in the validation. 
I also implemented a fold split based on geographical distribution, but I was unable to effectively utilize this information.</p> <blockquote> <p>To further boost the number of positives in the dataset, we also included some GOES-16 ABI imagery at locations in the US where Google Street View images of the sky contained contrails.</p> </blockquote> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4910466%2Fca513333ca52eca20a62c9ed0a5cc082%2Fgeometric_distributions.png?generation=1691723168388936&amp;alt=media" alt=""></p> <h3>Things that did not worked</h3> <ul> <li>PL</li> <li>randomly chose individual labels</li> <li>DeepLabv3 as a decoder</li> </ul> <h2>Acknowledgements</h2> <p>Thank you for holding this competition. In addition to tackling an interesting challenge, I was able to review and recap all the methods I used in previous CV competitions.</p> <h2>Sources</h2> <ul> <li>[1] <a href="https://arxiv.org/abs/2304.02122" target="_blank">Joe et.al., OpenContrails: Benchmarking Contrail Detection on GOES-16 ABI, 2023</a></li> <li>[2] <a href="https://github.com/opencv/opencv/issues/11784" target="_blank">warpAffine: correct coordinate system, documentation and incorrect usage</a></li> </ul> <h2>Appendix</h2> <h3>A. Effect of misalignment correction and soft labels</h3> <p>In late submission, I tested how misalignment of labels affect the score.<br> I also tested the effect of soft labels as some participants said effective.</p> <h4>Baseline model</h4> <p>To reduce training cost, I simply tested on the basic 2D U-Net architecture where only current frame is used, and no temporal feature modulator.<br> Other setups are like these:</p> <table> <thead> <tr> <th>Parameter</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>backend</td> <td>efficientnet_b3</td> </tr> <tr> <td>crop_size</td> <td>512x512</td> </tr> <tr> <td>loss</td> <td>BCE(pos_weight=8) + Dice</td> </tr> <tr> <td>augmentation</td> <td>ShiftScaleRotate(p=0.6), hflip</td> </tr> <tr> <td>ensemble</td> <td>4x seed ensemble</td> </tr> <tr> <td>TTA</td> <td>hflip</td> </tr> </tbody> </table> <h4>Misalignment correction</h4> <p>Since label was misaligned by (0.5, 0.5) pixels, I upscale the input image by 2x, and shifted (1, 1) pixels. It is simpler than correcting labels on training time and inverse transform them on inference time.<br> Correction was implemented by the below algorithm.<br> Note that additional shift coefficient (0.5, 0.5) is added to transformation matrix as a workaround of the open issue of OpenCV's <code>warpAffine</code> function[2].</p> <pre><code>M = np.array([[, , + ], [, , + ]], dtype=np.float32) img = cv2.warpAffine( img, M, ( * W, * H), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT ) </code></pre> <h4>Soft Label</h4> <p>Soft labels are generated from <code>human_individual_masks.npy</code>.<br> Since the data tab on the competition page saids:</p> <blockquote> <p>Pixels were considered a contrail when &gt;50% of the labelers annotated it as such. 
Individual annotations (<code>human_individual_masks.npy</code>) as well as the aggregated ground truth annotations (<code>human_pixel_masks.npy</code>) are included in the training data.</p> </blockquote> <p>I created the soft labels in the below processing:</p> <ul> <li>take the mean of individual annotations</li> <li>multiplied by 2</li> <li>clip by (0, 1)</li> </ul> <pre><code>human_pixel_mask = np.clip(individual_pixel_mask.mean(-) * , , ) </code></pre> <h4>Result</h4> <p>I trained three types of models and tested on validation data (CV) and leader board(Public and Private):</p> <ol> <li>baseline</li> <li>baseline with misalignment correction (MC)</li> <li>baseline with MC and soft labels (SL)</li> </ol> <p>The result shows both MC and SL has independent positive gains by 0.59% and 0.70% respectively on the private LBs.</p> <table> <thead> <tr> <th>description</th> <th>ensemble seeds</th> <th>crop_size</th> <th>CV</th> <th>Public</th> <th>Private</th> <th>gain</th> </tr> </thead> <tbody> <tr> <td>baseline</td> <td>4x</td> <td>512</td> <td>0.6672</td> <td>0.67818</td> <td>0.67669</td> <td>-</td> </tr> <tr> <td>+MC</td> <td>4x</td> <td>512</td> <td>0.6708</td> <td>0.68131</td> <td>0.68259</td> <td>0.59%</td> </tr> <tr> <td>+MC +SL</td> <td>4x</td> <td>512</td> <td>0.6766</td> <td>0.69623</td> <td>0.68956</td> <td>1.29%</td> </tr> </tbody> </table> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4910466%2F85d08acac7f886c3270dd1da47992811%2FEffect%20of%20misalignment%20correction%20%20soft%20labels.png?generation=1691897900692345&amp;alt=media" alt=""></p> <p><a href="https://www.kaggle.com/mayurimk" target="_blank">@mayurimk</a></p> <blockquote> <p>Have you considered frame t=4 as skip connection to the decoder?</p> </blockquote> <p>Well, it was once in my mind, but never tested. I believe it would stabilize training especially on deeper modulator (in my case, N=3). I will put this on stock of my arsenal. Thank you for pointing this out.</p> <p><a href="https://www.kaggle.com/tonymarkchris" target="_blank">@tonymarkchris</a> <br> Does this helps?</p> <pre><code> (): cfg = self.cfg crop_size = cfg.crop_size self.training cfg.infer_crop_size x = batch[] B, C, T, H, W = x.shape x = rearrange(x, ) (H != crop_size) (W != crop_size): x = F.interpolate(x, size=(crop_size, crop_size), mode=, align_corners=) features = self.encoder(x) i ((features[:-])): x = rearrange(features[i], , t=T) x = x[:, :, self.current_frame_relative, :, :] features[i] = x x = features[-] x = rearrange(x, , t=T) x = self.temporal_encoder(x) x = x[:, :, self.current_frame_relative - cfg.temporal_offset, :, :] features[-] = x decoder_output = self.decoder(*features) logit = self.segmentation_head(decoder_output) _, _, h, w = logit.shape (h != H) (w != W): logit = F.interpolate(logit, size=(H, W), mode=, align_corners=) output = (logit=logit) output </code></pre>
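<p>To make the appendix's misalignment correction concrete, here is a minimal sketch that follows the description above (2× upscale with a (1, 1)-pixel shift plus the extra (0.5, 0.5) <code>warpAffine</code> workaround); the matrix values are reconstructed from that description and should be treated as an assumption:</p> <pre><code>import cv2
import numpy as np

def upscale_and_shift(img):
    """2x upscale with a (1, 1)-pixel shift; the extra 0.5 compensates for
    warpAffine's half-pixel coordinate convention (see the linked OpenCV issue)."""
    h, w = img.shape[:2]
    M = np.array([[2, 0, 1 + 0.5],
                  [0, 2, 1 + 0.5]], dtype=np.float32)
    return cv2.warpAffine(img, M, (2 * w, 2 * h),
                          flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)

# soft labels as described above: mean of the individual masks, doubled, clipped to [0, 1]
# soft = np.clip(individual_pixel_mask.mean(-1) * 2, 0, 1)
</code></pre>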
Google Research - Identify Contrails to Reduce Global Warming
26th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you very much <a href="https://www.kaggle.com/thejavanka" target="_blank">@thejavanka</a></p> <p>Thank you <a href="https://www.kaggle.com/kuddai" target="_blank">@kuddai</a> <br> Yes I mean that only. I will publish my code in a few days.</p> <p>Thank you very much <a href="https://www.kaggle.com/allunia" target="_blank">@allunia</a> <br> Congratulations for 52nd Place!<br> Sure, I will take a look at the notebook. Thanks!<br> After using soft labels, the score boost was around 0.02-0.03, if I remember correctly!</p> <p>Thank you <a href="https://www.kaggle.com/echen333us" target="_blank">@echen333us</a>,<br> I used JavisLabs' A5000 GPU for all my experiments. It worked really well for me. Kaggle is also great but I find the 30 hour GPU limit very low. <br> Yes. There were many duplicates present in the training set. A satellite image from the same location after a few hours can be very similar to the ones before. So sorting, splitting and clipping like this ensured that training and validation are far from each other with respect to time.</p> <p>First of all I would like to thank the organisers and Kaggle for this great competition. It was an amazing learning experience, I got to learn a lot of things. Congratulations to all the winners!</p> <h1>Summary</h1> <ul> <li>Data: Ash Color Images and Soft Labels (Average of individual labels)</li> <li>Image Size: 512</li> <li>Cross Validation: Split the Train Set to 5 folds after sorting by time.</li> <li>Ensemble of 3 Models</li> <li>Encoders: efficientnet-b5, regnety_120, seresnextaa101d_32x8d</li> <li>Decoder: Unet</li> </ul> <h2>Data</h2> <p>I trained on false color images and used soft labels. This gave me a big boost in both cv and lb score as compared to when training on the pixel masks. I think this was one of the most important tricks for this competiton. It was very surprising for me to see that the model was able to capture the uncertainty depicted by the probabilities of the pixels. As individual labels were not available for the valid set, so I didnt include it in training. I used it as hold set.</p> <h2>Cross Validation</h2> <p>This was a bit tricky due to the presence of many duplicates in the training set. So, to build a robust local validation, I first sorted the training set by date and time which was available in train.json and then split the sorted data into 5 folds without shuffling. After this, I clipped the valid folds during training like this:</p> <pre><code> self.fold == : df = df[:-] self.fold == : df = df[:] : df = df[:-] </code></pre> <p>The cv score was not at all correlated with the leaderboard score, but still I trusted my cv in the end to select my submissions.</p> <h2>Models</h2> <p>I tried many encoders but used efficientnet-b5, regnety_120 and seresnextaa101d_32x8d in the end as they gave the best cv score. 
I only used Unet for all the encoders.</p> <h2>Training</h2> <ul> <li>Epochs: 30</li> <li>Learning Rate: Between 3e-4 to 3e-3 (Different for different Encoders)</li> <li>Optimizer: Adam</li> <li>Scheduler: CosineAnnealingLR (I saved the epoch with the best valid fold score)</li> <li>Loss: BCE Loss (As I was training with soft labels)</li> <li>Augmentations: RandomResizedCrop, ShiftScaleRotate (HFlip, VFlip and RandomRotate90 didn't work for me also, but I had no idea that this can be due to some problem in the masks)</li> </ul> <h3>Trick to reduce training time</h3> <p>To reduce training time, I tried to remove some images during training. Removing all the non-contrail images reduced the local cv score. Then I predicted the train set with one of my best model and made a column with the highest pixel probability predicted by the model for the images. Then I removed the images which had max pixel probability lower than 0.05 and had no contrail.</p> <p><code>df = df[~((df['max_prob_pred'] &lt; 0.05) &amp; (df['contrail'] == 0))]</code></p> <p>Using this I was able to remove about 4000 images from training, reduce training time a lot and it didn't effect the cv score at all.</p> <h2>Ensemble</h2> <p>I used my cv folds to get the ensemble weights for my final models.<br> Weighted ensembling: 0.3172[eb5] + 0.2928[regty120] + 0.39[sr101].</p> <p>For final prediction, first I ensembled the different models separately for each fold. Then I converted the 5 fold predictions to binary predictions using the thresholds I got using my valid sets. The thresholds were 0.49, 0.47, 0.47, 0.48, 0.46 for the folds from 0 to 4. Then I took the average of the 5 folds and used 0.3 as the final threshold (which I got using the valid hold set). </p> <p>My teammate's account was deleted just after I teamed up with him. I don't know if he did anything wrong or not. Whatever be the case, I wish him best of luck for his future.<br> I really hope I won't be removed from the leaderboard after verification. Let's see what happens ☺️</p>
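<p>A minimal sketch of the time-sorted split and boundary clipping described above (the <code>timestamp</code> column name and the margin size are assumptions; the write-up only states that rows are sorted by date/time from train.json, split into 5 contiguous folds without shuffling, and clipped near the fold boundaries):</p> <pre><code>import numpy as np
import pandas as pd

def assign_time_sorted_folds(df, n_folds=5):
    """Sort by an assumed `timestamp` column (built from train.json) and assign
    contiguous folds without shuffling, so near-duplicate scenes stay close in time."""
    df = df.sort_values("timestamp").reset_index(drop=True)
    df["fold"] = -1
    for fold, idx in enumerate(np.array_split(df.index.values, n_folds)):
        df.loc[idx, "fold"] = fold
    return df

def clip_valid_fold(valid_df, fold, n_folds=5, margin=500):
    """Drop rows at the temporal boundary between train and valid (margin is an assumption)."""
    if fold == 0:
        return valid_df.iloc[:-margin]
    if fold == n_folds - 1:
        return valid_df.iloc[margin:]
    return valid_df.iloc[margin:-margin]
</code></pre>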
Google Research - Identify Contrails to Reduce Global Warming
28th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/ynhuhu" target="_blank">@ynhuhu</a> <br> Training image size is 256.</p> <p><strong>Overview</strong></p> <ul> <li><p>Ensemble of 14 models</p></li> <li><p>Model architecture: Unet, Unet++</p></li> <li><p>Encoder: resnest200e, resnest101e, efficientnet-b7</p></li> <li><p>The region that over 1/2 of labelers considered a contrail is the GT, but I created labels for the regions that over 0, 1/4, 1/2, and 3/4 of labelers considered a contrail. I trained the model to predict these four labels simultaneously.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4472591%2Fe7023ad79299578a50b17c99fa6a4d71%2F.png?generation=1691633654216697&amp;alt=media" alt=""><br> (<a href="https://www.kaggle.com/code/welshonionman/dataset-ashcolor-4labels" target="_blank">dataset notebook</a>, <a href="https://www.kaggle.com/code/welshonionman/gr-icrgw-training-with-ashcolor-4labels" target="_blank">training notebook</a>)</p></li> <li><p>Pseudo labeling</p></li> </ul> <p><strong>Augmentation</strong></p> <ul> <li>A.RandomRotate90</li> <li>A.RandomBrightnessContrast</li> <li>A.CoarseDropout</li> <li>A.RandomGridShuffle</li> </ul> <p>I trained the model as a multi-label problem, not a multi-class problem.<br> So, the model outputs the areas that are considered contrail by more than 0, 1/4, 1/2, and 3/4 of labelers; for submission, only the areas that are considered contrail by more than 1/2 of labelers are used.</p> <p>I share the training code:<br> <a href="https://www.kaggle.com/code/welshonionman/gr-icrgw-training-with-ashcolor-4labels" target="_blank">https://www.kaggle.com/code/welshonionman/gr-icrgw-training-with-ashcolor-4labels</a></p> <p>I apply augmentation only during training; TTA isn't used.<br> In discussions, there were some who said that augmentation is not effective at all. It's curious.</p> <p>I use separate notebooks for training and inference.<br> Training time when using pseudo labeling was about 24h (50 epochs).<br> It was very long🙄</p>
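<p>A minimal sketch of building the four stacked targets described above from <code>human_individual_masks.npy</code> (the thresholds are taken from the write-up; the array layout and loading code are assumptions):</p> <pre><code>import numpy as np

def make_four_level_target(record_dir):
    """Stack binary masks for pixels marked by more than 0, 1/4, 1/2 and 3/4 of labelers.
    human_individual_masks.npy is assumed to have shape (H, W, 1, n_labelers)."""
    masks = np.load(f"{record_dir}/human_individual_masks.npy")
    frac = masks.astype(np.float32).mean(axis=-1)[..., 0]   # fraction of labelers per pixel
    levels = (0.0, 0.25, 0.5, 0.75)
    return np.stack([(frac &gt; t).astype(np.float32) for t in levels], axis=0)  # (4, H, W)

# only the &gt;1/2 channel (index 2) is used when building the submission mask
</code></pre>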
Google Research - Identify Contrails to Reduce Global Warming
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks. I updated the write-up with more details. Regarding preprocessing, I do not think that we did anything spatial in addition to what we listed in the write-up.</p> <p>Thanks, congratulations on getting gold too.</p> <p>We selected the threshold based on CV, but it is very close to 0.5, like 0.48. Lovasz part of the loss was helpful to reduce the dependence of the dice on the threshold selection.</p> <p>My early sub in this competition got 0.71614 public LB and it was a combination of Coat and NeXtViT single-frame models. It was rather a coincidence, and CV was just 0.700, and private LB is 0.70323. The next sub was Seq CoaT+NeXtViT LSTM, which got 0.71205 public LB, while CV is much higher of 0.7039, and private LB is 0.71488. All those subs are just single fold, later we switched to 5 folds. In this competition, we didn't really consider public LB, and relay only on CV on eval set. </p> <p>The origin of this unstable behavior, to my understanding, is the label noise(<br> I was using 3e-4 lr, but what was more important in my case is Over9000 optimizer with epsilon=1e-4. As I remember, this optimizer not only worked better for CoaT but also gave more stable training than AdamW.</p> <p>2x4090 in my experiments</p> <p>I didn't check it in this competition, but the negative effect of linear upscaling should be quite strong here because you need to reproduce several pixel-wide masks with pixel accuracy(</p> <p>Thanks. Yep, training with it is less stable than LSTM. With CoaT model everything was ok, but the spread of the results between folds is large. For NeXtViT, meanwhile, I could not finish the training from scratch even if tried to play with lr, etc. Probably proper init may resolve the problem. Meanwhile, in the case of using a model pre-trained on single frame setup without mixing (both encoder and decoder), training of the transformer mixing is quite more stable, and I didn't have any issues during training. <br> I used this setup for the case of PL pertaining. However, even if I pretrained the single frame model with different seeds for each fold, the fine-tuned models had less diversity (in comparison to training from scratch), and these models were not so useful for the ensemble in the end.</p> <p>It is just (1) flattening the feature map into a list of tokens for res/16 and res/12 maps; then (2) adding encoding to represent the time for each considered feature map in the sequence; (3) rearranging all sequences for a given image series into one long sequence; applying transformer with q=considered frame only seq and k,v=concatenated seq; (4) rearranging the thing back and doing standard U block upsampling.<br> Now only inference code is released, but you can check the definition of CoaT_UT model <a href="https://www.kaggle.com/datasets/iafoss/contrails-model-def1" target="_blank">here</a>.</p> <p>LSTM worked the best for me, but we kept other things for diversity. My training was 24 epochs long, so it took just ~2 hours on 512 input, but you could accelerate the things at initial experiments using smaller res. Seq models took probably 8 hours. 
The issue, though, was the results may be noisy sometimes, and 5 folds are needed to make a relatable concussion on the experiment. If you have to train 100-200 epochs, you can do initial dev on shorter training and then upscale the things for the best setup.</p> <p>The typical output of segmentation models is at x4 lower res, which then is linearly upsampled. So if u give x4 input you do not need to do upsampling of masks at the end. For x2 input, the output wd be 128x128, and additional x2 upscaling with pixel shuffle block (instead of linear upsampling) is needed.</p> <p>CoaT is a really good alternative to Segformer when a restrictive license is not acceptable. A combination of global attention and consideration of details at high-resolution feature maps (in contrast to ViT architectures) is something one doesn't see often. Meanwhile, in this particular competition, it was very important. </p> <p>I expect LSTM should perform pretty similarly to conv temporal mixing because it shares the same characteristics: mixing assuming that features are spatially aligned. 3d conv may have slightly better robustness to frame misalignment because you are looking spatially during mixing. But it may be addressed by the previous 2d convs, i.e. 2d+1d ~ 3d. LSTM may have slightly better behavior for longer seq. Unfortunately, I didn't experiment with conv mixing, and I do not remember if my teammates did anything on that.</p> <p>Regarding transformer mixing, its idea is to do intrinsic image registration for shifts larger than ~32 pixels. Conceptually it is different from LSTM/conv, and that's why it was option 2 I investigated. Unfortunately, it performed worse, which may be a results of (1) the difficulty of training such mixing model, or (2) too large misalignments are rare enough, and they are well compensated by 2d convs before and after the mixing.</p> <p>The simplistic spatial location here is not meaningful: you try to track contrails and clouds with the seq model. Cloud movement between frames is huge, and images are very far away from being properly registered to track clouds( </p> <p>Thanks. </p> <p>Thanks. Yes, exactly. I did it only at the last two, i.e. res/16 and res/32, feature maps because of too large change between input frames.</p> <p>Thanks so much. It happens for us in the second time during last several months when we drop from 1st place at public to the 2nd place at private((( Getting the first place is quite a difficult thing.</p> <p>Here are some of the results from my notes (I list public CV):<br> CoaT_U (single image): 0.6887+-0.0014, avr pred 0.6983 (th 0.47), LB 0.706<br> ExCoaT_U (single image, PL): 0.6952+-0.0008, avr pred 0.7002 (th 0.49)</p> <p>NeXtViT_U (single image): 0.6848 +- 0.0013, avr pred 0.6917 (th 0.47), LB 0.710<br> ExNeXtViT_U (single image, PL): 0.6904 +- 0.0006, avr pred 0.6941 (th 0.48)</p> <p>CoaT_ULSTM (5 image seq): 0.6960+-0.0003, avr pred 0.7039 (th 0.48), LB 0.712<br> EX Coat_ULSTM (5 image seq, PL): 0.7027+-0.0002, avr pred 0.7064 (th 0.49), LB 721</p> <p>CoaT_UT (5 image seq, transformer mixing): 0.6950+-0.0013, avr pred 0.7055 (th 0.47), LB 0.722<br> EX Coat_UT (5 image seq, transformer mixing, PL): 0.6997+-0.0008, avr pred 0.7040 (th 0.49), LB 0.716</p> <p>NeXtViT_ULSTM (5 image seq): 0.6951+-0.0004, avr pred 0.7014 (th 0.50), LB 0.713<br> Ex NeXtViT_ULSTM (5 image seq, PL): 0.6988 +- 0.0007, avr pred 0.7025 (th 0.50)</p> <p>Overall <strong>sequence model gives 0.005-0.01 boost</strong>. 
In my case the key thing was mixing at res/32 and res/16 feature maps followed with up-sampling U blocks, like in the single frame model. I used 0-5 frames, Theo was using 3-6 frames. 8 frames didn't perform well, but I didn't run full 5 fold for this setup.</p> <h1>Summary</h1> <ul> <li>Customized U-shape network with CoaT, NeXtViT, SAM-B, and tf_efficientnetv2_s backbones</li> <li>×2 (with pixel shuffle up-scaling of predictions to 256×256) and ×4 input upscaling: pixel-level accuracy of predictions is critical</li> <li>Sequence of images: LSTM, Transformer, Convolutional temporal mixing</li> <li>No flip and 90-degree rotation augmentation (nor TTA) because masks are shifted</li> <li>Train on soft labels: label = average of all annotators</li> <li>BCE + dice Lovasz loss</li> </ul> <h1>Code</h1> <p>My and <a href="https://www.kaggle.com/drhabib" target="_blank">@drhabib</a> 's <a href="https://github.com/DrHB/2nd-place-contrails" target="_blank">part</a> (includes the best performing CoaT_ULSTM model)<br> <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> 's <a href="https://github.com/TheoViel/kaggle_contrails" target="_blank">part</a><br> <a href="https://www.kaggle.com/code/theoviel/contrails-inference-comb" target="_blank">Inference</a></p> <h1>Introduction</h1> <p>Our team would like to thank the organizers and Kaggle for making this competition possible. Also, I want to express my gratitude to my outstanding teammates <a href="https://www.kaggle.com/drhabib" target="_blank">@drhabib</a> and <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> for their incredible contribution toward our final result. </p> <h1>Details</h1> <p>This competition has two main challenges: (1) noisy labels and (2) pixel-level accuracy requirements. (1) If one checks the annotation from all annotators, a major disagreement is quite apparent. This label noise challenge may be partially addressed by using soft labels (the average of all annotator labels) during training. However, since the evaluation requires hard labels, the major model failure is the prediction of contrails near the decision boundary, which cannot be avoided. (2) The thing that could really be addressed by the models is achieving pixel-level accuracy of predictions. Pixel-level accuracy is quite important since contrails are only several pixels thick, and even a mistake by one pixel in the mask boundary in the lateral direction may result in a significant decrease in the dice score, used as a metric. For such a task, a typical solution is up-sampling the input of the model or replacing the final linear up-sampling in segmentation models with transposed convolution or pixel shuffle up-sampling. This modification gave us a substantial boost in initial experiments over the direct use of models on the original resolution.</p> <h2>Data</h2> <p>Following the <a href="https://arxiv.org/pdf/2304.02122.pdf" target="_blank">organizer's publication</a>, we used “ash” false color images considering the 12 μm band, the difference between 12 and 11 μm bands, and the difference between 11 and 8 μm bands, respectively. We also tried to consider all bands or expand the “ash” color images with 8, 10, and 12 μm, but it resulted in lower performance (the pre-trained weights of the first convolution were replicated). We hypothesize that the best performance of ash color images may be a consequence of the use of the ash color images by annotators, and biased label noise. 
In our experiments, we up-sample the input with bi-cubic interpolation by ×2 for CoaT and NeXtViT models, and by ×4 for SAM model. EfficientNet model uses the original input but the stride in the first convolutional block is set to 1, which is equivalent to ×4 input up-sampling.</p> <h2>Model</h2> <p>The interesting thing about this competition is that the model is required to do both (1) tracking the global pixel dependencies because contrails are quite elongated, and (2) capable of generating predictions with pixel-level accuracy because contrails are only several pixels thick. (1) can be addressed with transformers, while (2) is addressed with convolutional networks or local attention. In the competition, we tried to address these points independently with SAM-B ViT backbone and EfficientNet v2 purely convolutional network. In addition, there is a class of networks, transformers with a hierarchical structure, which addresses both aspects of the considered problem, (1) + (2), simultaneously. This class includes CoaT, which resulted in the best single model performance in our experiments, NeXtViT, MaxViT (which we missed), and MiT transformer from Segformer (unfortunately, it cannot be used in the competition because of the license).<br> For the decoder we either considered a vanilla U-Net decoder (EfficientNet models) or a more customized set of U-blocks using pixel shuffle up-sampling and factorized FPN (CoaT, NeXtViT, SAM). In these blocks, we also replaced the BatchNorm with LayerNorm2d to have proper model convergence at the low batch size and used GELU instead of ReLU activation. <br> The models are schematically illustrated in the figure below.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2F16e90aba2cb8ee7ac94837e09455688f%2F4.png?generation=1692419847964897&amp;alt=media" alt=""></p> <h2>Temporal mixing</h2> <p>In addition to our initial runs with single-frame models, we performed experiments with image sequences. In our best setup image, multi-frame models get ~0.01 improvement.<br> Since the images are taken with a considerable temporal delay, and the displacement of clouds between frames is huge (~10-30 pixels) and is deteriorated by input up-sampling, video models based on 3D convolutions or window attention (VideoSwin) are not effective in the context of the considered problem. For the same reason, early mixing or mixing the predictions of sequential models is also not expected to work. Therefore, mixing at the intermediate feature map scales, such as res/32 and res/16, is most promising: even if clouds/contrails are misplaced, the feature maps at low resolution are reasonably aligned. In our experiments with the CoaT model, the best performance is achieved if two low-resolution feature maps (res/32 and res/16) are considered. The output of the temporal mixing modules as well as the output of the remaining feature maps are pooled at frame 4 and the decoder sees only the input corresponding to the considered frame, as illustrated in Figure 1.<br> As temporal mixing modules, we considered LSTMs (which achieved the best performance), 1D temporal convolutions, and Transformers. LSTM (1 layer) and 1D temporal convolutions perform mixing along the temporal dimension only, while the spatial dependences between features are considered in the backbone and the decoder. 
The transformer-based mixing was proposed to perform an implicit feature map registration for a large displacement of contrails/clouds because it works with temporal and spatial mixing of features simultaneously and can match them even if they are not spatially aligned. <br> This mixing is performed in the following way. To each feature map, corresponding to a given frame, we add encoding to distinguish it from others, then we flatten tokens from all frames into a single sequence and process it with 2 transformer decoder blocks. Key and value input to the transformer is represented by the concatenated sequence of all frames, while the query is a sequence of tokens from the 4-th frame only. Unfortunately, transformer-based mixing ended up with slightly lower performance, and the training with it is less stable. However, we kept both approaches in the final ensemble, benefitting from the improved diversity.<br> In our experiments with CoaT/NeXtViT we followed the <a href="https://arxiv.org/pdf/2304.02122.pdf" target="_blank">organizer's publication</a> and selected the first 5 frames as the input. Both LSTM and Transformer mixing were considered. In our experiments with EfficientNet we considered 4 frame input (2nd – 5th, 2 frames before and 1 frame after the annotated frame) with bi-directional LSTM. This frame selection is dictated by the instruction to annotators to have a contrail at least on two subsequent frames. With the SAM model, we considered 1D-convolutional mixing with 3 frames (1 frame before and 1 frame after the annotation) because of the heavy VRAM requirement for this model, which was also an issue in inference. <br> One idea we had is explicit registration of input frames to have contrails and clouds aligned between frames (temporal mixing is the most effective), for example considering optical flow. Unfortunately, the images contain not only clouds but also the Earth's surface, which does not move, or several layers of clouds moving in different directions. Therefore, we could not come up with any methods to proceed with the frame registration task. We were also considering performing registration based on the predicted contrails, but since contrails appear and disappear from frame to frame, this approach is also not feasible.</p> <h2>Pseudo-labels (PL)</h2> <p>In addition to the competition data, we have collected a dataset “Contrails GOES16 Images May” (single frame setup) [9]. We split the images into 256×256 tiles with partial overlap and generated the masks based on the ensemble of best single-frame models. This data is shared as a Kaggle dataset [10]. This data was used for pretraining the models in a single-frame setup in some experiments, followed by finetuning the models either in single-frame or multi-frame setups. The use of PL has drastically improved the performance of individual models (folds), but the diversity of the models is reduced even if each fold of the finetuned model uses an independently trained PL model. Therefore, the performance on the average prediction over all folds is comparable to one for the setup trained without PL.<br> We also have experimented with PL training on both external data + unannotated competition frames, but it did not give any benefit in finetuning a single frame model. 
Since such pretraining could affect multi-frame models, in the production experiments we used only external data at the PL step.</p> <h2>Training</h2> <p>CoaT and NeXtViT models: Training is performed for 24 epochs with the Over9000 (RAdam+LAMB+LookAhead) optimizer, a learning rate of 3.5e-4, and a weight decay of 0.01. It appeared that this optimizer has superior compatibility with CoaT. In the case of PL pretraining, it is performed for 12 epochs on the external data, and then the models are finetuned for 12 epochs for CoaT and 18 epochs for NeXtViT models.<br> EfficientNet models: Pretraining is performed in a single-frame setup for 100 epochs with additional CutMix augmentation, a learning rate of 1e-3, the AdamW optimizer, and a weight decay of 0.2. During pretraining the fraction of PL data is linearly decreased to zero. Then the model is fine-tuned in the image sequence setup with a learning rate of 3e-5 for the encoder and 1e-4 for the decoder. The addition of external data for pretraining resulted in much stronger 2D models, hence multi-frame EfficientNet models were not considered in the final ensemble. For diversity, we also pre-trained models for 200 epochs, and in a full-fit setup (training on all data, including the validation set).<br> SAM models: Our final submission includes only single-frame SAM-B ViT-based models. During training, we first train the model with a frozen encoder for 10 epochs, and then we unfreeze the model and continue for 20 more epochs. The AdamW optimizer is used with a learning rate of 4e-6 for the encoder and 4e-5 for the decoder.<br> During training, we used ShiftRotateScale, RandomGamma, RandomBrightnessContrast, MotionBlur, and GaussianBlur augmentations. Flip and 90-rotate augmentations were not used because they resulted in worse performance. The training of all production models is performed for 5-6 folds; our folds are trained on the entire train dataset with different random seeds, while the evaluation is performed on the provided validation set. This strategy is used because evaluation in a standard K-fold split of the training data overestimates CV (cross-validation), which may be caused by spatial overlaps of tiles in the training dataset.</p> <h2>Loss</h2> <p>As a loss function, we used BCE + dice Lovasz loss. The second term is a modified version of the Lovasz loss [12] where (1) ReLU is replaced with 1 + ELU, (2) a symmetric version is used, and (3) instead of IoU we use a surrogate function maximizing dice. The second term provides only a marginal boost to the individual model performance. However, this term gives a wider maximum of the dice with respect to the threshold, i.e. a weaker dependence of the performance on the threshold selection. This property is vital for avoiding a shakeup on the private leaderboard due to an improperly chosen threshold, and it also makes ensembling more effective.</p> <h2>Results</h2> <p>The performance of our final models is summarized in the table below. Ex refers to models trained in the PL setup (EfficientNet is trained with PL only). The postfix U refers to the single-frame model, UT – transformer-based temporal mixing, and ULSTM – LSTM-based temporal mixing. The threshold is selected based on the search maximizing the CV value.
For most models, it is 0.46-0.50.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2F5b5010ef692bccaca91f04bdd8ae6be3%2F5.png?generation=1692423621715751&amp;alt=media" alt=""></p> <h2>Best Single model</h2> <p>Our best single model is CoaT, which for 5 folds gets 0.7039 CV, 0.71790 private, and 0.71243 public LB. The single-fold CV is 0.6960±0.0003 evaluated on the val set (our folds are trained on the train set with different seeds because evaluation in a standard K-fold split of the train data overestimates CV due to possible spatial overlaps). <strong>We could take top-4 in the competition with this single model</strong>. </p> <h2>Ensemble</h2> <p>Our best ensemble consists of 8 models with weights selected to make approximately equal contributions from all components: CV is 0.7140 (on the eval set), 0.72574 public and 0.72304 private LB. The models and their weights are summarized in the table below.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2F2ddf4778766d0b5aa44a24a81409d460%2F6.png?generation=1692421380873655&amp;alt=media" alt=""><br> EfficientNet includes 3 model setups (100 epochs, 200 epochs, and full-fit) with a total effective weight equal to 1.</p> <h2>Model Execution Time</h2> <p>CoaT and NeXtViT models: Training time is 2 hours for single-frame and 8 hours for multi-frame models per fold on 2×RTX4090 GPUs. The inference time at Kaggle (P100) for multi-frame models is 35 minutes for 5 folds.<br> EfficientNet models: Training time is 3 hours 45 minutes for 100 epochs on 8×V100 GPUs per fold. Inference at Kaggle (P100) takes 20 minutes for 6 folds including data loading.<br> SAM models: Training time is 16 hours per fold on 2×A6000 GPUs. Inference at Kaggle (P100) takes 48 minutes for 5 folds including data loading.</p> <h2>Things we missed</h2> <p>(1) During the competition, one surprising finding for us was that flip and 90-degree rotation augmentation in model training resulted in worse performance. Unfortunately, we did not think sufficiently about the origin of this behavior and attributed it to prevailing winds and the particular shape of clouds and contrails. After the competition, it turned out that the masks were shifted by 0.5 pixels, which made any flips inapplicable. If the mask issue had been fixed in our setup, flip/90-rotation augmentation could have led to an 8-fold increase of the effective training dataset size and enabled a further performance boost at the inference stage due to test-time augmentation. So, potentially the performance of our models can be noticeably improved. <br> (2) PL single-model pretraining on the external data gives a moderate CV improvement to the single-frame model (interestingly, on the private LB this improvement is quite large). Meanwhile, this single-frame pretraining didn't give much to sequence models. So, external sequence data collection + proper reprojection could potentially serve as a better pretraining strategy for sequence models, and we may expect a boost comparable to the one for single-frame models.</p> <p>I just didn't up-sample the masks in the model, keeping 256 as the output size.</p> <p>We will share the code a bit later, but the thing I considered is mixing at low-resolution feature maps because the images are poorly aligned (the time between images is too large). Congratulations on getting gold!</p> <p>Thanks so much.
</p> <p>In my experiments, using 4 scales was worse than using 2 scales, but I didn't run this experiment over 5 folds.</p> <p>In PL training I used a kind of 2-stage approach: train a single-frame model on PL, then add temporal mixing and train on the provided data (no freezing). Even if I trained 5 distinct single-frame models, the overall diversity was quite bad, and this model was not very helpful for the ensemble.</p> <p>Regarding large movement between frames, I expect it is a bad thing because LSTM mixing does not consider spatial misalignment. If the contrail is shifted from one pixel of the res/32 feature map to another, the LSTM cannot track it properly because it is applied only along the temporal dimension. So a displacement of clouds comparable to the receptive field of the model may kill the advantage of the sequence model.</p> <p><a href="https://www.kaggle.com/socated" target="_blank">@socated</a> you are very welcome</p> <p>Thank you so much</p> <p>You are welcome</p> <p>Yep, it worked the best. Since the global dice is considered in this competition, I don't think post-processing can really work well here.</p> <p>One of the key things for this particular data (pixel accuracy of masks is needed) was to avoid using torch resizing at the end. Typical segmentation models do ×4 linear resizing of masks to get the same mask resolution as the input. So, I fed a ×2 larger input image and added a pixel-shuffle ×2 upscaling block to the model instead of the typical final convolution + resizing.</p> <p>For me it was a 2×RTX4090 machine, quite fast cards for deep learning, but VRAM is an issue, as always. <a href="https://www.kaggle.com/drhabib" target="_blank">@drhabib</a> used 2×A6000, and <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> used his V100 cluster.</p> <p>The SAM ViT encoder works with 1024x1024 input, and we just performed ×4 upscaling. For CoaT and NeXtViT the resolution is not fixed, similar to conv models. Training a transformer from scratch is quite unfavorable, and if you are asking how to use a different resolution in ViT-like models, the typical way is just linear resizing of the positional encoding to the resolution you are interested in. After a small fine-tuning the model works quite well.</p> <p>Yep. Previously I used it a lot because of the good compatibility of this optimizer with ssl_ResNeXt50 weights in fine-tuning, but later, working with transformers, I mostly switched to AdamW. It was interesting to see that CoaT seems to work quite well with this optimizer.</p> <p>You just pixel-shuffle the input from B×4C×H×W to B×C×2H×2W in such a way that 4 sequential values from the channel dimension go into 2x2 pixels. And I have conv + LayerNorm2d + GELU + conv before and after this shuffle. We will share the code a bit later.</p> <p>For tf_efficientnetv2_s you can use timm, which is super convenient. For others that are not shared through timm, I just go to the corresponding GitHub repo, grab the files with the model definition and related things, modify them a bit in some cases to eliminate unnecessary dependencies like mmseg, download the weights from the repo, and just use the model.</p>
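<p>To make the pixel-shuffle ×2 upscaling head described above more concrete, here is a minimal sketch (channel counts, names, and the exact block layout are my assumptions rather than the released code):</p> <pre><code>import torch
import torch.nn as nn


class LayerNorm2d(nn.Module):
    """Channel-wise LayerNorm for (B, C, H, W) tensors."""

    def __init__(self, ch):
        super().__init__()
        self.norm = nn.LayerNorm(ch)

    def forward(self, x):
        return self.norm(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)


class PixelShuffleHead(nn.Module):
    """x2 upscaling head: conv blocks around a channels-to-space shuffle."""

    def __init__(self, in_ch, mid_ch=64, out_ch=1):
        super().__init__()
        self.pre = nn.Sequential(
            nn.Conv2d(in_ch, 4 * mid_ch, 3, padding=1),
            LayerNorm2d(4 * mid_ch),
            nn.GELU(),
        )
        self.shuffle = nn.PixelShuffle(2)  # maps (B, 4*mid, H, W) to (B, mid, 2H, 2W)
        self.post = nn.Conv2d(mid_ch, out_ch, 3, padding=1)

    def forward(self, x):
        return self.post(self.shuffle(self.pre(x)))
</code></pre>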
Google Research - Identify Contrails to Reduce Global Warming
30th Place Solution for the Google Research - Identify Contrails to Reduce Global Warming Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we would like to thank Kaggle and the host for such a funny (and <em>nothing-works-well-for-us</em>) competition. This is not a fancy solution but we feel proud of taking the right steps to generalize well to the private test set (it was indeed, 42 positions shake up 😃).</p> <h1>1. <strong>Context</strong></h1> <ul> <li><em>Business context</em>: <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/overview" target="_blank">https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/overview</a></li> <li><em>Data context</em>: <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/data" target="_blank">https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/data</a></li> </ul> <h1>2. <strong>Overview of the Approach</strong></h1> <p>Our final solution was an ensemble of 8 models which combines different encoders and image sizes. Each model configuration was trained in turn 3 times with different random seeds (24 models in total) and always with all available data (train+validation).</p> <h3><strong>Data preprocessing</strong></h3> <p>ASH false color images, only 2D models with labeled frame and hard labels.</p> <h3><strong>Validation schema</strong></h3> <p>Provided train/validation split.</p> <h1>3. <strong>Overview of the Approach</strong></h1> <h3>3.1. <strong>Individual models and final ensemble</strong></h3> <p>All models are based on UNet architecture from SMP. We combine them using a weighted average of the probabilities dropped by single models, where weights were determined optimizing the global dice score on the validation set using Optuna. 
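<p>For readers unfamiliar with this kind of weight search, a minimal sketch with Optuna could look like the following (the array names and the dice helper are assumptions, not the team's exact script):</p> <pre><code>import numpy as np
import optuna

# probs: list of per-model probability maps on the validation set, each of shape (N, H, W)
# masks: ground-truth masks of shape (N, H, W); both names are assumptions


def global_dice(pred, gt, thr=0.5, eps=1e-6):
    p = (pred &gt;= thr).astype(np.float32)
    inter = (p * gt).sum()
    return (2 * inter + eps) / (p.sum() + gt.sum() + eps)


def objective(trial):
    w = np.array([trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(len(probs))])
    w = w / w.sum()                                   # normalize to a weighted average
    blend = sum(wi * pi for wi, pi in zip(w, probs))
    return global_dice(blend, masks)


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=200)
print(study.best_params)
</code></pre>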
The final CV score was <strong>0.6825</strong> (which may be a little biased because as we mentioned we optimized the weights directly using the validation set).</p> <table> <thead> <tr> <th>Weight</th> <th>Encoder</th> <th>Image size</th> <th>LR</th> <th>Epochs</th> <th>BS</th> <th>CV score</th> </tr> </thead> <tbody> <tr> <td>0.02</td> <td>timm-resnest101e</td> <td>384</td> <td>5e-4</td> <td>20</td> <td>48</td> <td>0.64397</td> </tr> <tr> <td>0.07</td> <td>maxvit_base_tf_384.in21k_ft_in1k</td> <td>384</td> <td>1e-4</td> <td>20</td> <td>16</td> <td>0.65699</td> </tr> <tr> <td>0.18</td> <td>maxvit_base_tf_512</td> <td>512</td> <td>1e-4</td> <td>40</td> <td>8*</td> <td>0.66414</td> </tr> <tr> <td>0.02</td> <td>tf_efficientnetv2_b3.in1k</td> <td>768</td> <td>5e-4</td> <td>35</td> <td>24*</td> <td>0.65178</td> </tr> <tr> <td>0.23</td> <td>tf_efficientnetv2_m.in21k_ft_in1k</td> <td>512</td> <td>5e-4</td> <td>35</td> <td>24*</td> <td>0.6525</td> </tr> <tr> <td>0.14</td> <td>tf_efficientnetv2_l.in21k_ft_in1k</td> <td>384</td> <td>5e-4</td> <td>35</td> <td>24*</td> <td>0.64538</td> </tr> <tr> <td>0.11</td> <td>maxvit_large_tf_224.in1k</td> <td>448</td> <td>1e-4</td> <td>40</td> <td>8*</td> <td>0.66407</td> </tr> <tr> <td>0.23</td> <td>maxvit_large_tf_384.in1k</td> <td>384</td> <td>1e-4</td> <td>40</td> <td>8*</td> <td>0.66782</td> </tr> </tbody> </table> <p>*gradient accumulation of 2.</p> <h3>3.2. <strong>Threshold selection</strong></h3> <p>The optimal threshold for the ensemble was 0.43, which was determined running a for loop varying the threshold from 0 to 1 with 0.1 increments and calculating the corresponding global dice. We observed a relatively wide maximum peaking at 0.43 which give us enough confidence to choose that threshold.</p> <h3>3.3. <strong>Training procedure</strong></h3> <p>We used CosineAnnealingLR and AdamW. Maybe it is important to note that we always tried to choose the number of epochs such that the last epoch is the best one. This generalizes better than choosing the best checkpoint during training, especially in our case, where we observed an irregular behavior of the training curves. Finally, once a model was validated, we took the same configuration and train it again using all the available data.</p> <h3>3.4. <strong>Augmentations</strong></h3> <p>We didn't notice the mask shift, so flipping and rotation didn't work for us. 
Other augmentations that worked were:</p> <pre><code># NOTE: the concrete sizes and probabilities were lost from this snippet; "..." marks them.
A.Compose([
    A.RandomResizedCrop(height=..., width=..., p=...),
    A.RandomBrightnessContrast(p=...),
])
CutMix(p=...)
MixUp(p=...)
</code></pre> <p>These are our implementations of CutMix and MixUp:</p> <pre><code>import numpy as np
import torch


def rand_bbox(size, lam):
    # sample a random box whose area is a (1 - lam) fraction of the image
    W = size[2]
    H = size[3]
    cut_rat = np.sqrt(1.0 - lam)
    cut_w = int(W * cut_rat)
    cut_h = int(H * cut_rat)
    cx = np.random.randint(W)
    cy = np.random.randint(H)
    bbx1 = np.clip(cx - cut_w // 2, 0, W)
    bby1 = np.clip(cy - cut_h // 2, 0, H)
    bbx2 = np.clip(cx + cut_w // 2, 0, W)
    bby2 = np.clip(cy + cut_h // 2, 0, H)
    return bbx1, bby1, bbx2, bby2


def cutmix(data, target, alpha):
    indices = torch.randperm(data.size(0))
    shuffled_target = target[indices]
    lam = np.random.beta(alpha, alpha)
    # the original code clipped lam to a fixed range here; the bounds were lost, e.g.:
    # lam = np.clip(lam, lam_low, lam_high)
    bbx1, bby1, bbx2, bby2 = rand_bbox(data.size(), lam)
    new_data = data.clone()
    new_data[:, :, bby1:bby2, bbx1:bbx2] = data[indices, :, bby1:bby2, bbx1:bbx2]
    # adjust lam to the exact area ratio of the pasted box
    lam = 1 - ((bbx2 - bbx1) * (bby2 - bby1) / (data.size()[-1] * data.size()[-2]))
    return new_data, target, shuffled_target, lam


def mixup(x, y, alpha):
    assert alpha &gt; 0, "alpha should be larger than 0"
    assert x.size(0) &gt; 1, "need more than one sample to apply mixup"
    lam = np.random.beta(alpha, alpha)
    rand_index = torch.randperm(x.size()[0])
    mixed_x = lam * x + (1 - lam) * x[rand_index, :]
    target_a, target_b = y, y[rand_index]
    return mixed_x, target_a, target_b, lam
</code></pre> <p>And losses are calculated as:</p> <pre><code># the if/elif keywords and numeric literals below are reconstructed from a garbled snippet
if self.cfg.mixup and torch.rand(1)[0] &lt; self.cfg.mixup_p:
    mix_images, target_a, target_b, lam = mixup(images, masks, alpha=self.cfg.mixup_alpha)
    logits = self(mix_images)
    loss = self.loss_fn(logits, target_a) * lam + (1 - lam) * self.loss_fn(logits, target_b)
elif self.cfg.cutmix and torch.rand(1)[0] &lt; self.cfg.cutmix_p:
    mix_images, target_a, target_b, lam = cutmix(images, masks, alpha=self.cfg.cutmix_alpha)
    logits = self(mix_images)
    loss = self.loss_fn(logits, target_a) * lam + (1 - lam) * self.loss_fn(logits, target_b)
</code></pre> <h3>3.5. <strong>What didn't work</strong></h3> <p>A ton of things:</p> <ul> <li>Using other bands/other data schemas.</li> <li>TTA.</li> <li>We tried to combine the former and the latter frames with the labeled one at each stage of the UNet, without success (we only tried concatenation).</li> <li>At the end of the competition we discovered that pseudolabels give a big boost for some models (not the strongest ones), but we didn't have enough time to properly experiment with them.</li> <li>…</li> </ul> <h1><strong>References</strong></h1> <ul> <li><a href="https://www.kaggle.com/code/inversion/visualizing-contrails" target="_blank">ASH color scheme</a></li> <li><a href="https://arxiv.org/abs/2304.02122" target="_blank">OpenContrails: Benchmarking Contrail Detection on GOES-16 ABI</a></li> <li><a href="https://www.kaggle.com/egortrushin" target="_blank">@egortrushin</a> <a href="https://www.kaggle.com/code/egortrushin/gr-icrgw-training-with-4-folds" target="_blank">public notebook</a> was our starting point.</li> </ul> <h1><strong>Acknowledgments</strong></h1> <p>To my wonderful teammates <a href="https://www.kaggle.com/maxdiazbattan" target="_blank">@maxdiazbattan</a> <a href="https://www.kaggle.com/edomingo" target="_blank">@edomingo</a> <a href="https://www.kaggle.com/juansensio" target="_blank">@juansensio</a> (if I forgot something please feel free to write it in the comments).</p>
Google Research - Identify Contrails to Reduce Global Warming
346 Rank on leader board
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sure, I will give a try to that also. Thanks for the suggestion.</p> <h1>My Approach and Achievements</h1> <h2>Leveraging Image Channels and False Color</h2> <p>I tackled the challenge of detecting contrails using an innovative strategy. The provided images were captured in 8 different wavelengths, and annotators used a false color image for labeling contrails. I decided to train my model using this false color representation, as it seemed to improve the visibility of contrails and enhance their detectability.</p> <h2>Effective Interpolation for Boundary Annotations</h2> <p>A significant hurdle I encountered was accurately labeling contrail boundaries. Since contrails suddenly appear or enter from the sides of the image, capturing their true extent was crucial. To address this, I employed an interpolation technique. I first upscaled the image to 384 x 384 and then downscaled it to 256 x 256 using bilinear interpolation. This innovative approach proved successful in obtaining precise boundary labels.</p> <h2>Iterative Model Refinement</h2> <p>I pursued an iterative model refinement strategy that gradually improved my results. I initially trained a U-Net model from scratch using PyTorch. While this yielded a decent dice score of around 0.53, I knew I could do better. I then turned to the <code>segmentation_models_pytorch</code> library and selected a U-Net architecture with a pretrained <code>resnet26</code> encoder. This choice pushed my dice score up to 0.59 on the Kaggle Public leaderboard, showcasing noticeable progress. My determination to enhance performance led me to the final step: training a U-Net model with a pretrained <code>resnet50</code> encoder. This decision paid off with a remarkable dice score of 0.64985 on the Kaggle Public leaderboard and 0.64455 on the Private leaderboard.</p> <h2>Harnessing PyTorch Lightning for Efficiency</h2> <p>To accelerate my training process and streamline metric tracking and hyperparameter management, I adopted the PyTorch Lightning framework. This choice enabled me to harness the power of multiple GPUs efficiently and focus more on experimenting with various strategies rather than dealing with technical intricacies.</p> <h1>Things that Didn't Work</h1> <h2>Time Dependency and LSTMConv Layer</h2> <p>Understanding that contrails must either suddenly appear or be visible in at least 2 images, I attempted to incorporate the temporal aspect by using an LSTMConv layer in my model. Unfortunately, this approach didn't yield the anticipated results. It appears that the temporal aspect might not play a significant role in this problem, or perhaps the architecture I selected wasn't suited for effectively utilizing this information.</p> <h2>Exploring Additional Convolutional Layer</h2> <p>In an effort to enhance the model's input capabilities, I added an extra convolutional layer before the input entered the encoder. This adjustment was aimed at allowing the pretrained model to take a 3-channel input while utilizing data from all 8 wavelengths. However, this change didn't lead to significant performance improvements. 
It appeared that the resnet50 encoder could already effectively extract relevant features from the multi-channel data. As a result, further alterations at this stage in the architecture didn't seem to provide the anticipated benefits.</p> <h1>Opportunities for Improvement</h1> <h2>Soft Labels for More Accurate Annotations</h2> <p>In hindsight, I realized that using the mean of the annotations from the four annotators might have provided a more accurate representation of contrail regions. This could have potentially reduced noise caused by differing interpretations and improved overall model performance.</p> <h2>Hybrid Loss Functions</h2> <p>Initially starting with Binary Cross-Entropy and then shifting to Dice Loss to align with evaluation metrics, I now recognize the value of potentially combining both loss functions. This hybrid approach could strike a better balance between precision and recall during training, leading to improved overall results.</p> <p>My journey in tackling this complex contrail detection problem has taught me a great deal about <code>model architecture</code>, <code>data preprocessing</code>, and the importance of learning from both successes and failures. Armed with these insights, I am excited to continue refining my strategies and exploring new avenues for even better performance.</p> <p>Thank you. 😊</p>
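<p>As a rough illustration of the "extra convolutional layer before the encoder" idea discussed above, a learnable 1x1 convolution can map the 8 bands to the 3 channels a pretrained encoder expects (a minimal sketch with assumed names, not the exact code from this write-up):</p> <pre><code>import torch.nn as nn
import segmentation_models_pytorch as smp


class EightBandUnet(nn.Module):
    """Map 8 input bands to 3 channels, then feed a pretrained U-Net."""

    def __init__(self):
        super().__init__()
        self.adapter = nn.Conv2d(8, 3, kernel_size=1)  # learnable band mixing
        self.unet = smp.Unet(encoder_name="resnet50", encoder_weights="imagenet",
                             in_channels=3, classes=1)

    def forward(self, x):  # x: (B, 8, 256, 256)
        return self.unet(self.adapter(x))
</code></pre>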
Google Research - Identify Contrails to Reduce Global Warming
3rd Place Solution: 2.5D U-Net
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!<br> Choosing the optimal threshold value directly on the validation data was fine, but we couldn't do that because we also trained several models on the validation data. That's why we identified the best percentile using models that didn't use validation data for training.</p> <p>We published our source code!</p> <p><a href="https://github.com/knshnb/kaggle-contrails-3rd-place" target="_blank">https://github.com/knshnb/kaggle-contrails-3rd-place</a><br> <a href="https://github.com/yoichi-yamakawa/kaggle-contrail-3rd-place-solution" target="_blank">https://github.com/yoichi-yamakawa/kaggle-contrail-3rd-place-solution</a><br> <a href="https://github.com/tyamaguchi17/contrails_charm_public" target="_blank">https://github.com/tyamaguchi17/contrails_charm_public</a></p> <p>We just manually tried several parameters. There might still be some room for tuning, but we thought it was more worthwhile to spend our time and computational resources on other experiments.</p> <p>Thanks! If you are using segmentation_models.pytorch, you can set gradient checkpointing for most timm models by code along the following lines:</p> <pre><code># sketch reconstructed from a garbled snippet: for timm-based encoders in
# segmentation_models.pytorch, gradient checkpointing can usually be enabled on the
# underlying timm model (the exact attribute path may differ between encoders)
backbone = smp.Unet(...)
backbone.encoder.model.set_grad_checkpointing()
</code></pre> <p>Thank you!!!</p> <p>I didn't use any hyperparameter optimization tool for this competition. I used Optuna for the previous one: <a href="https://www.kaggle.com/competitions/happy-whale-and-dolphin/discussion/320192" target="_blank">https://www.kaggle.com/competitions/happy-whale-and-dolphin/discussion/320192</a>.</p> <p>Thanks to the host and Kaggle staff for holding the competition, and congratulations to the winners! I also want to thank my teammates ( <a href="https://www.kaggle.com/charmq" target="_blank">@charmq</a> and <a href="https://www.kaggle.com/yoichi7yamakawa" target="_blank">@yoichi7yamakawa</a>) a lot.</p> <h2>Overview</h2> <p>Our solution is an ensemble of three pipelines, one from each member. Here, let me mainly explain my pipeline, which served as the main solution.<br> My solution is based on a simple 2.5D U-Net (described below). I used 512x512 (256x256 for experimental phases) ash color images from the <a href="https://www.kaggle.com/code/inversion/visualizing-contrails" target="_blank">official notebook</a> as input and predicted the mean value of the human individual masks. I trained each model for 25 epochs with AdamW and a cosine annealing scheduler with warmup. The loss function I used was <code>(-dice_coefficient + binary_cross_entropy) / 2</code>.</p> <h2>Validation</h2> <p>At the early stage of the competition, I was doing 5-fold cross-validation. The trends of the out-of-fold scores and validation scores differed a little, possibly because of the difference in positive-pixel ratios.<br> After the training got computationally heavy, I decided to check only the validation score of a single training run on the entire training data.
The validation score seemed correlated with the public LB, with a small amount of noise.</p> <h2>Architecture</h2> <p>Our main approach was so-called 2.5D: making the 3D input 2D by stacking frames into the batch dimension and feeding them to 2D backbones.<br> I first tried a 2.5D U-Net architecture with 3D convolutions after the whole U-Net and got a small gain compared to 2D models (around +0.01 in validation).<br> Then I tried to move the 3D convolutions to the middle of the U-Net's skip connection layers, which carry richer information about each downsampled feature. We used frames 2, 3, and 4 (0-indexed). In the 3D convolutions, we reduced the frame dimension from 3 to 1 by stacking two convolutions with kernel_size=2 and padding=0 along the frame axis. With this, we got a big gain (around +0.02 in validation).<br> The pseudo-code is as follows (it depends heavily on the segmentation_models.pytorch library):</p> <pre><code>import torch
import segmentation_models_pytorch as smp

# NOTE: class names, the spatial kernel sizes/paddings, the padding_mode, and the feature
# indices below are reconstructed from a garbled snippet and the description above;
# they may differ slightly from the released code.


class Conv3dBlock(torch.nn.Sequential):
    def __init__(self, in_channels, out_channels, kernel_size, padding):
        super().__init__(
            torch.nn.Conv3d(in_channels, out_channels, kernel_size,
                            padding=padding, padding_mode="replicate"),
            torch.nn.BatchNorm3d(out_channels),
            torch.nn.LeakyReLU(),
        )


class Unet2_5d(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.n_frames = 3
        self.backbone = smp.Unet(...)
        # two stacked 3D convs per skip connection reduce the frame axis from 3 to 1
        conv3ds = [
            torch.nn.Sequential(
                Conv3dBlock(ch, ch, (2, 3, 3), (0, 1, 1)),
                Conv3dBlock(ch, ch, (2, 3, 3), (0, 1, 1)),
            )
            for ch in self.backbone.encoder.out_channels[1:]
        ]
        self.conv3ds = torch.nn.ModuleList(conv3ds)

    def _to2d(self, conv3d, feature) -&gt; torch.Tensor:
        total_batch, ch, H, W = feature.shape
        feat_3d = feature.reshape(
            total_batch // self.n_frames, self.n_frames, ch, H, W
        ).transpose(1, 2)
        return conv3d(feat_3d).squeeze(2)

    def forward(self, x) -&gt; torch.Tensor:
        n_batch, in_ch, n_frame, H, W = x.shape
        x = x.transpose(1, 2).reshape(n_batch * n_frame, in_ch, H, W)
        self.backbone.check_input_shape(x)
        features = self.backbone.encoder(x)
        features[1:] = [
            self._to2d(conv3d, feature)
            for conv3d, feature in zip(self.conv3ds, features[1:])
        ]
        decoder_output = self.backbone.decoder(*features)
        masks = self.backbone.segmentation_head(decoder_output)
        return masks
</code></pre> <h2>Data augmentation</h2> <p>While full flip and rotation augmentation did not work because of the pixel-shift issues pointed out by the <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/discussion/430618" target="_blank">1st place solution</a> and the <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/discussion/430479" target="_blank">9th place solution</a>, applying them with a small probability enhanced the performance a little. The whole augmentation I used was as follows:</p> <pre><code>import albumentations as A

# NOTE: the concrete probabilities and limits were lost from this snippet; "..." marks them.
# The text above notes that these transforms were applied with small probabilities.
augments = [
    A.HorizontalFlip(p=...),
    A.VerticalFlip(p=...),
    A.RandomRotate90(p=...),
    A.ShiftScaleRotate(shift_limit=..., scale_limit=..., rotate_limit=..., p=...),
    A.RandomResizedCrop(..., ..., scale=(..., ...), ratio=(..., ...), p=...),
]
</code></pre> <h2>Pseudo label</h2> <p>We discretized the models' predictions for frames 2, 3, 5, 6, and 7 in steps of 0.25 and used them as pseudo labels. Since there may be a distribution shift from the original training data, we used the pseudo labels for pretraining and the original training data for finetuning.</p> <h2>Threshold</h2> <p>Since we used validation data for training some models, optimizing the threshold on validation data was not easy. We adopted a percentile threshold. We confirmed on validation data that the optimal percentile was almost equal to the ratio of positive pixels (=0.18%). Therefore, we identified the percentile in the test data corresponding to the best threshold of some models on validation data by LB probing using submission time.
It was about 0.16% and I hope it is a correct ratio.</p> <h2>Other tips</h2> <ul> <li>Setting grad_checkpointing saved more than half of memory usage.</li> <li>Increasing the number of decoder channels of U-Net slightly enhanced the performance.</li> <li>The mean of the batch-wise dice coefficient does not correspond to the global dice coefficient and is also unstable with small batch sizes. To mitigate this, I heuristically added 700000 to the numerator and 1000000 to the denominator.</li> </ul> <h2>Final submission</h2> <p>Our best single model was 0.706/0.71770/0.71629 (validation/private/public) by maxvit_large. This could still win 3rd place!<br> By ensembling 18 2.5D models with different backbones (maxvit_large_tf_512, tf_efficientnet_l2, resnest269e, maxvit_base_tf_512, maxvit_xlarge_tf_512, tf_efficientnetv2_xl) and slightly different setups (use pseudo label, include validation data for training, finetune lr, etc), we achieved 0.72233 of private LB. Also, adding <a href="https://www.kaggle.com/yoichi7yamakawa" target="_blank">@yoichi7yamakawa</a> and <a href="https://www.kaggle.com/charmq" target="_blank">@charmq</a>'s models increased the private score to 0.72305, which could have won 2nd place by 0.00001!<br> Unfortunately, we couldn't select that submission. On the last day, we added a model trained with full flip and rotation augmentation and test-time augmentation, which didn't change the validation score a lot. This lowered the generalization to private data probably because of the pixel-shift issue (or just a random fluctuation).<br> Anyway, being unable to find out the pixel-shift issue was the reason for the loss, and I learned a lot.</p> <h2>What did not work</h2> <ul> <li>Double U-Net architecture (training was unstable…)</li> <li>Increasing image size by finetune</li> <li>Adding conv2d layers between conv3d layers</li> </ul> <h2>Yiemon773 Part</h2> <p>Results of my 2d models' ensemble<br> Private LB: 0.709+ w/ pseudo labels<br> Private LB: 0.705+ w/o pseudo labels</p> <h3>Preprocess</h3> <p>I changed the normalization process from <code>normalize_range</code> to <code>normalize_mean_std</code>. The mean and std of each band are calculated using the training data.</p> <h3>backbone</h3> <p>resnest269e, maxvit_base, maxvit_large, efficientnetv2_l</p> <h3>Loss function</h3> <p>Weighted mean of following losses: Hard-label Dice, Hard-label BCE, Soft-label Dice, Soft-label BCE</p> <h3>What did not work</h3> <ul> <li>Many augmentations <ul> <li>Mixup</li> <li>frame shuffle</li> <li>channel shuffle</li> <li>…</li></ul></li> <li>Using bands other than 11, 13, 14, and 15 </li> </ul> <h2>charmq Part</h2> <h3>Architecture</h3> <p>2.5d models which are mentioned above and 2d models. 2.5d models performed better than 2d models.</p> <h3>backbone</h3> <p>resnest269e, resnetrs420, maxvit_base, maxvit_large, efficientnetv2_l</p> <h3>image size</h3> <p>2.5d models and 2d maxvit models were trained with (512, 512). 
(1024, 1024) was better than (512, 512) with the 2D resnest269 but didn't work with other models.</p> <h3>Loss function</h3> <p>Weighted mean of the following losses: Hard-label (pixel mask) Dice, Hard-label (1 - pixel mask) Dice, Soft-label (mean of individual masks) Dice, and Hard-label (min and max of individual masks) Dice</p> <h3>What did not work</h3> <ul> <li>augmentation cropping the area around the positive pixels</li> <li>UPerNet with convnext and swin transformer (mmsegmentation)</li> </ul> <h3>Findings</h3> <ul> <li>large architectures such as resnest269e, resnetrs420 and maxvit work well</li> <li>the training of maxvit with amp is unstable<ul> <li>we turned off amp with maxvit</li> <li>gradient checkpointing contributes to reducing memory usage</li></ul></li> <li>efficientnet_l2 works well but sometimes outputs NaN<ul> <li>we simply replaced the NaN outputs with 0</li></ul></li> <li>ensembling boosts the score<ul> <li>e.g. 0.68 model + 0.68 model -&gt; 0.695</li> <li>even an ensemble of 5-fold models boosts the public score</li></ul></li> </ul> <h2>Acknowledgment</h2> <p>We would like to thank Preferred Networks, Inc. for allowing us to use their computational resources.</p> <p>Of course, it depends on the situation, but I think hyperparameter optimization of DL is too computationally heavy in many cases.</p>
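<p>Returning to the batch-dice stabilization tip above, a minimal sketch of the smoothed dice could look like this (exactly where the two constants enter the formula is my interpretation of the write-up):</p> <pre><code>def smoothed_batch_dice(probs, targets, num_add=700_000, den_add=1_000_000):
    # Batch-level dice with large additive constants so that averaging over small
    # batches behaves more like the global dice (constants taken from the write-up).
    intersection = (probs * targets).sum()
    denominator = probs.sum() + targets.sum()
    return (2 * intersection + num_add) / (denominator + den_add)
</code></pre>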
Google Research - Identify Contrails to Reduce Global Warming
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was a fantastic competition - thanks to organizers and my team. Past 3 months have not been easy, but I’m glad that in the end it was worth it.</p> <h3>Overview</h3> <ul> <li>U-Net models with a diverse set of encoders (both convolution-based, and transformer-based).</li> <li>Creating pseudo-labels for the unlabelled part of the data (7/8 of all the data) using folds and ensembles.</li> <li>Composite loss (CE, Dice, Focal).</li> <li>EMA + SWA.</li> <li>High resolution (512 + 768).</li> <li>4TTA (hflip, rot90, rot270).</li> </ul> <h3>Base models</h3> <p>Quite early we realized that heavy models and long training work well here, so we mostly ran experiments with backbones like effnet_v2_large and heavier. The final solution includes:</p> <ul> <li>effnet_v2_l.</li> <li>effnet_v2_xl.</li> <li>effnet_l2.</li> <li>maxvit.</li> </ul> <p>It takes about a week to train with 4xA6000 (most of the training time goes into creation of good pseudo-labels).</p> <h3>Validation</h3> <p>After we figured out that some geometric augmentations don’t really work (for example, vertical flips), we removed them from the augmentation pipeline. Additionally, the 4TTA validation and regular one didn’t have a perfect correlation, so the validation during training was performed with 4TTA, which gave a significant boost early in the competition. </p> <p>In terms of split - the splitting proposed by organizers was used. </p> <h3>Pseudo labels</h3> <p>The original dataset (of 20519 records, each of 8 images) is already quite big for U-Net like models, however we wanted to expand it further. That’s why we created pseudo-labels for the whole dataset by training 3 models with 4 folds for 2 rounds. The following scheme illustrates the idea:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2755695%2F0bf39e12f3004d2d843bea9aa2944dee%2F2023-08-20%2000.42.48.jpg?generation=1692481385008583&amp;alt=media" alt=""></p> <p>Then we train final models with a sampler: 50% original data, 50% soft pseudo labels.</p> <h3>Additional tricks</h3> <p>Not a lot of things worked well here, so we added some simple tricks to improve the pipeline a bit. Those things include:</p> <ul> <li>Weighted loss between CE, Dice, and Focal. </li> <li>EMA during training.</li> <li>SWA on checkpoints after training.</li> <li>Remove BN from the decoder in U-Net. </li> <li>Add different resolutions in final ensemble (512 + 768).</li> </ul> <h3>Things that didn’t work</h3> <ul> <li>Twersky loss (with focus on either FN or FP).</li> <li>Post-processing to remove FP. </li> <li>3D models (including conv-lstm).</li> <li>Training on individual annotations (shame on us for not trying to train with mean of annotations).</li> <li>Figuring-out why geometric augmentations don’t work (our guess was annotators’ bias. Turns out, it was conversion bias that other top teams found).</li> <li>Heavy augmentations.</li> <li>Adding classifier.</li> <li>Predicting additional frames or using additional channels during training.</li> <li>Validation based on geography. 
</li> <li>Training only with positive data.</li> </ul> <h3>Links</h3> <ul> <li>Inference kernel: <a href="https://www.kaggle.com/code/selimsef/kdl-unet-768-inference-contrails" target="_blank">link</a>.</li> <li>GitHub repo with training code: <a href="https://github.com/selimsef/kaggle-identify-contrails-4th/" target="_blank">link</a>.</li> </ul> <p>Sure. The main idea that you should understand before diving into any transformers (whether tailored for Computer Vision, NLP, or anything else) is the attention mechanism, which was proposed (or at least popularized) in <a href="https://arxiv.org/abs/1706.03762" target="_blank">this paper</a>.</p> <p>I heavily recommend <a href="http://jalammar.github.io/illustrated-transformer/" target="_blank">this article</a>, which explains it best. </p> <p>After you're done with the fundamentals, just google "TRANSFORMER_NAME explained medium" to understand any particular architecture. For implementations, look at either <a href="https://github.com/huggingface/pytorch-image-models" target="_blank">timm</a> or <a href="http://huggingface.co/" target="_blank">HuggingFace</a>. </p> <p>To give you pointers, for NLP check out BERT, T5 and GPT models. For CV - ViT and Swin. </p>
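<p>As a tiny, self-contained illustration of the scaled dot-product attention introduced in that paper (a sketch, not tied to any of the libraries above):</p> <pre><code>import math
import torch


def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d) tensors
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)  # pairwise similarities
    weights = scores.softmax(dim=-1)                 # attention weights per query
    return weights @ v                               # weighted sum of the values


q = k = v = torch.randn(1, 5, 16)
out = scaled_dot_product_attention(q, k, v)          # shape (1, 5, 16)
</code></pre>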
Google Research - Identify Contrails to Reduce Global Warming
58th Place Solution (My first write-up!)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>Overview</h3> <p>First, thanks to Google Research for hosting an interesting and well-organized competition. I am always happy to try and build solutions that help reduce our environmental impact. Second, thanks to everyone who participated and shared ideas throughout this competition. I always learn a lot from the code and discussion tabs.</p> <p>My solution is an ensemble of Unet models with Timm backbones. The optimal threshold for each model was tuned using an out-of-fold validation set. No augmentation was applied to the images during training.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5570735%2F98f1fa7711e8116a4b40933f13209bbd%2Ficrgw-v1.drawio.png?generation=1691625764325229&amp;alt=media" alt="Ensemble Image"></p> <p>In order of highest to lowest validation dice scores, the backbones I used were maxxvitv2_rmlp_base_rw_384, resnest269, maxvit_base_tf_512, and mit_b4. I also used stochastic weight averaging on the last 5 epochs of training for the maxvit models, and no SWA for the other models. All of my experiments were done on small models and image sizes so that I could experiment quickly. I only scaled up the image sizes and model parameters in the last week of the competition for the final submission.</p> <p>Please leave any criticisms/suggestions/questions. The more the better :)</p> <h3>Other Attempts</h3> <p>Each item on this list either did not improve performance or was too computationally expensive for me to pursue.</p> <ul> <li>Losses: Tversky, LogCoshDice, BCE</li> <li>Downsampling/Upsampling Interpolation Methods</li> <li>Removing Islands </li> <li>Openmmlab (struggled to get this working)</li> <li>Deep supervision</li> <li>MANet, Unet++, Unet+++, and others..</li> </ul> <h3>Frameworks</h3> <p>Models: <a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">segmentation_models.pytorch</a>,<a href="https://github.com/huggingface/pytorch-image-models/tree/main/timm" target="_blank">Timm</a><br> Training: <a href="https://github.com/Lightning-AI/lightning" target="_blank">Lightning-AI</a><br> Logging: <a href="https://wandb.ai/" target="_blank">WandB</a></p> <hr> <p>Finally, congrats to <a href="https://www.kaggle.com/tascj0" target="_blank">@tascj0</a> for 1 submission gold. Very impressive!</p>
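<p>For reference, the "SWA on the last few epochs" trick mentioned above can be sketched with PyTorch's built-in utilities roughly as follows (the model, optimizer, loader, training helper, and epoch counts are assumed placeholders, not the author's code):</p> <pre><code>import torch
from torch.optim.swa_utils import AveragedModel, update_bn

# model, optimizer, train_loader, train_one_epoch are assumed to be defined elsewhere
swa_model = AveragedModel(model)
n_epochs, swa_start = 30, 25                  # average only the last 5 epochs

for epoch in range(n_epochs):
    train_one_epoch(model, optimizer, train_loader)
    if epoch &gt;= swa_start:
        swa_model.update_parameters(model)

# recompute BatchNorm statistics for the averaged weights
update_bn(train_loader, swa_model)
torch.save(swa_model.module.state_dict(), "unet_swa.pth")
</code></pre>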
Google Research - Identify Contrails to Reduce Global Warming
5th place solution (best single model, private LB 0.71443)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, cropping can reduce training time. I trained the base model with 512x512, and it takes 702S per epoch. When fine-tuning the model, I cropped the images to 480x480, and it takes 597S per epoch. The accuracy was better than the base model. But I'm not sure if it's because of the cropping, because I also changed the augmentation parameters and the learning rate.</p> <p>Multi frames model 3: EfficientNetV2L + customized 3D UNet 3, 512x512, crop to 480x480 for training, use 5 frames only.<br> (Best single model, late submission, private LB: 0.71443, public LB 0.70803)</p> <p>Thank you. It came from a grid search.</p> <ul> <li>Train a small and simple model without any augmentation.</li> <li>Inference the flipped (LR) images with the model.</li> <li>Shift the prediction by 2x to get the maximum score. (grid search)</li> </ul> <p>By using this method I got x = 0.408 and y = 0.453</p> <p>The LB score (late submission) of the best single model is private LB: 0.71443, public LB 0.70803.</p> <blockquote> <p>did you train on full train set and eval on the validation.csv split from the hosts?</p> </blockquote> <p>Yes, I used validation folder as OOF.</p> <blockquote> <p>May I ask what is the CV of your best single model and your ensemble please?</p> </blockquote> <p>The best single model score of OOF is 0.6959, and the ensemble score of OOF is 0.6993.</p> <p>Thanks to the organizers for the great competition. Also thanks to my teammates: <a href="https://www.kaggle.com/imakarov" target="_blank">Ilya Makarov</a>. I had a lot of fun and learned a lot.<br> I'd like to share our solution and what I learned from this competition. I hope it will be helpful for you.</p> <h2>Solution overview of my part</h2> <h4>Models</h4> <p>I use 5 models as follows:</p> <ul> <li>Single frame model 1: EfficientNetV2L + UNet, 512x512, crop to 480x480 for training.</li> <li>Single frame model 2: EfficientNetV2L + UNet, 768x768, crop to 512x512 for training.</li> <li>Multi frames model 1: EfficientNetV2L + customized 3D UNet 1, 256x256, use all 8 frames.</li> <li>Multi frames model 2: EfficientNetV2L + customized 3D UNet 2, 512x512, crop to 480x480 for training, use 5 frames only.</li> <li>Multi frames model 3: EfficientNetV2L + customized 3D UNet 3, 512x512, crop to 480x480 for training, use 5 frames only.<br> <strong>(<a href="https://www.kaggle.com/wuliaokaola/icrgw-submission-single-model-lb-0-714-5th-place" target="_blank">Best single model, late submission, private LB: 0.71443, public LB 0.70803</a>)</strong></li> </ul> <p>About the customized 3D UNet<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2006644%2F2130a20eb0d042ac9d48d1ec2d7d88f9%2Funet3d_.png?generation=1691656552129914&amp;alt=media" alt=""></p> <ul> <li>Apply the same encoder(backbone) to each frame</li> <li>Combine the encoder outputs by Conv3D at each level. 
ConvLSTM2D also works, but it's slower.</li> <li>Use UNet as the decoder to get the final output.</li> </ul> <h4>Augmentation</h4> <ul> <li>Flip LR, Flip UD, Rotate 90 (for TTA8)</li> <li>Noise by channel, noise by pixel</li> <li>Random frame dropout (multi-frame models only)</li> <li>Random crop</li> </ul> <p>At an early stage of the competition, I noticed that if I didn't use flip and rot90, the model would overfit quickly. But if I used flip and rot90, the results became worse. I was confused for a long time. Finally, I found the reason. As mentioned in <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/discussion/430479#2382723" target="_blank">other discussions</a>, the image and the mask are not aligned. The mask is a little bit shifted to the bottom right. So if we use flip or rot90, we need to align them. The magic numbers for me are: x = 0.408, y = 0.453. </p> <p>Tried but did not work:</p> <ul> <li>Random rotate</li> <li>CoarseDropout</li> <li>Dropout by channel</li> </ul> <h4>Target and loss function</h4> <p>I used 2 targets:</p> <ul> <li>Target 1: Sigmoid to 1 channel. Weight: 75% (human_pixel_masks.npy)</li> <li>Target 2: Softmax over 5 channels. Weight: 25% (human_individual_masks.npy: 0 for no contrail, 1 for 25% of labelers, 2 for 50% of labelers, 3 for 75% of labelers, 4 for 100% of labelers) </li> </ul> <p>The loss function is weighted binary cross-entropy + dice loss. <br> And Lion is used as the optimizer. (Lion: <a href="https://github.com/keras-team/keras/blob/v2.13.1/keras/optimizers/lion.py" target="_blank">https://github.com/keras-team/keras/blob/v2.13.1/keras/optimizers/lion.py</a>)</p> <h4>Prediction</h4> <ul> <li>Ensemble of 5 models with TTA8 (flip LR, flip UD, rotate 90)<br> The TTA8 is very important. It can improve the score by 0.005~0.01.</li> </ul> <p>The best result is public LB: 0.71243, private LB: 0.71756. But unfortunately, we didn't choose it as our final submission. </p> <h4>Other experiments</h4> <ul> <li>I tried to use all bands, which can give an improvement of 0.000x. But it made the submission too complex and took too much time, so I didn't use it in the final submission.</li> <li>UNet + Transformer (256, 128: UNet; 64, 32: Transformer). Using the transformer to get the information from other frames. It works, but not better than UNet. Maybe because the transformer is not pretrained.</li> </ul> <h4>Update</h4> <ul> <li>20230811 Added the late submission score of the best single model.</li> </ul> <p>It's nothing but just a for loop. </p>
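<p>For instance, a minimal version of such a loop could look like the sketch below (the helper names, shift range, and step size are assumptions; the write-up only reports the final values x = 0.408 and y = 0.453):</p> <pre><code>import numpy as np
from scipy.ndimage import shift as nd_shift


def dice(pred, mask, eps=1e-6):
    inter = (pred * mask).sum()
    return (2 * inter + eps) / (pred.sum() + mask.sum() + eps)


def find_flip_shift(preds_flipped, masks, step=0.05):
    """Grid-search the (dx, dy) shift that best re-aligns flipped predictions with the labels.

    preds_flipped: iterable of HxW arrays - predictions made on flipped inputs, flipped back.
    masks: iterable of HxW ground-truth arrays.
    """
    best_xy, best_score = (0.0, 0.0), -1.0
    for dx in np.arange(-1.0, 1.0 + 1e-9, step):
        for dy in np.arange(-1.0, 1.0 + 1e-9, step):
            score = np.mean([dice(nd_shift(p, (dy, dx), order=1), m)
                             for p, m in zip(preds_flipped, masks)])
            if score &gt; best_score:
                best_xy, best_score = (dx, dy), score
    return best_xy, best_score
</code></pre>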
Google Research - Identify Contrails to Reduce Global Warming
6th place solution (CV: 0.7069 / Public LB: 0.706 / Private LB: 0.713)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><blockquote> <p>If I am correct, the purpose of classification models is to determine whether the predicted mask is empty or not, and they are only applied to segmentation models trained on all data. However, what is the role of the "top 100 logits mean per image"?</p> </blockquote> <p>"top 100 logits mean per image" is a variation of the classification model using the segmentation model.<br> A similar approach was used in a previous competition, and again the top 100 logits mean was more accurate for binary classification than the classification model.</p> <blockquote> <p>Could you elaborate a bit more about the steps to find that?</p> </blockquote> <p>In binary classification, we have both submissions with a percentile threshold and sub with the percentile converted to a fixed value and used as the threshold. Comparing the execution times of the two submissions, we can see that the percentage of empty mask images differs when there is a change. (since the size of the test set and the validation set are the same)</p> <p><strong>Update</strong><br> We have released the training code(2.5D model part) and the inference notebook for the best submission.</p> <ul> <li>training code (2.5D model part): <a href="https://github.com/tattaka/google-research-identify-contrails-reduce-global-warming" target="_blank">https://github.com/tattaka/google-research-identify-contrails-reduce-global-warming</a></li> <li>inference code: <a href="https://www.kaggle.com/code/tattaka/contrail-submission-ensemble?scriptVersionId=139455030" target="_blank">https://www.kaggle.com/code/tattaka/contrail-submission-ensemble?scriptVersionId=139455030</a></li> </ul> <p>First of all, we would like to thank the organizers of the competition and the Kaggle team for the competition hosting and all the participants who have shared their knowledge so generously. </p> <h1>Summary</h1> <ul> <li>2 stage pipeline: classification and segmentation</li> <li>ensemble of 2.5D model using 5 or 7 frame and 11ch input 2D model</li> <li>soft label using individual label</li> <li>percentile threshold</li> </ul> <h1>Pipeline</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1745801%2Fdeb33f0f5a0349334ed541ec39c2e353%2F2023-08-10%2016.57.28.png?generation=1691664174212060&amp;alt=media" alt=""></p> <p>This pipeline has been used frequently in past semantic segmentation competitions. <br> The advantage of this pipeline is that it creates two models, one trained on pos_only and the other on all data, thereby increasing the ensemble effect. <br> It also saves inference time by allowing more models to be assigned to images for which screening has determined that a mask is present. </p> <h1>Models Detail</h1> <h2>Common Settings</h2> <p>Both 2.5D and 2D models were trained for both hard and soft labels. 
<br> The soft label is created as follows:</p> <pre><code># Reconstructed from a garbled snippet; the path arguments, the scaling factor, and the
# clipping bounds are assumptions.
scale = ...  # scaling factor, lost from the original snippet
label = self.np_load(
    os.path.join(record_dir, "human_individual_masks.npy")
).astype(np.float32)
# fraction of annotators marking each pixel, scaled and clipped to [0, 1]
label = np.clip((label * scale).sum(-1) / label.shape[-1], 0, 1)
</code></pre> <p>All models were trained on all the data in the train folder and validated on the validation folder. </p> <h2>2.5D Model</h2> <p>We used a modified version of the <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/417255" target="_blank">2.5D model used in the Vesuvius Challenge</a>.<br> The main changes are as follows:</p> <ul> <li>Apply the 3D CNN to feature maps of all resolutions</li> <li>For the output of the 3D CNN, the feature maps corresponding to the target frame are extracted and given to the UNet </li> <li>In the binary classification model, the 3D CNN is applied to the smallest-resolution feature map and input to the FC layer in the same way. </li> </ul> <p>The backbones used were resnetrs101, resnest101e, swin_base_patch4_window12, swinv2_base_window16, convnext_base, convnext_large, etc. <br> The input is set to 5 or 7 frames, with the target frame in the middle. <br> Input resolutions of 384 or 512 are used for the model. <br> It was trained using <code>BCE+Dice*0.25</code> for the pos-only model and <code>BCE</code> for the all-data model. <br> Also, different augmentations are used for the classification model, the all-data seg model, and the pos-only seg model (possibly not optimized and therefore not necessarily appropriate): </p> <pre><code># NOTE: the concrete parameters were lost from this snippet ("..." marks them), and so were
# the boundaries between the per-model augmentation groups.
albu.Flip(p=...),
albu.RandomRotate90(p=...),
albu.ShiftScaleRotate(p=..., scale_limit=..., shift_limit=...),
albu.GridDistortion(num_steps=..., distort_limit=..., p=...),
albu.CoarseDropout(max_height=..., max_width=..., fill_value=..., mask_fill_value=..., p=...),
albu.ShiftScaleRotate(p=..., scale_limit=..., shift_limit=...),
albu.Flip(p=...),
albu.RandomRotate90(p=...),
albu.ShiftScaleRotate(p=..., scale_limit=..., shift_limit=..., rotate_limit=...),
albu.Rotate(limit=..., p=...),
</code></pre> <ul> <li>EMA(decay=0.998)</li> </ul> <h2>2D Model</h2> <p>For the 2D models, we simply used the Unet implemented in Segmentation Models Pytorch. The backbones used were res2net50d, regnetz_d8, regnetz_d32, and regnetz_e8.</p> <p>Characteristically, the 2D models were trained on 11 channels, i.e. band15 - band14, band14 - band11, and all 9 bands. Training with this input does not perform well with short schedules (e.g. 20~30 epochs), but with long schedules (specifically 100 epochs) it obtained better validation scores than the false color input.</p> <p>The 11ch models also had one curious property: they had a poor public score but a good private score. I believe this is what pushed us into the prize zone.</p> <p>Other training settings are as follows (shared by the all-data seg model and the pos-only seg one):</p> <ul> <li>Input<ul> <li>size: 384x384</li> <li>each channel is standardized by subtracting the global mean and dividing by the global variance of the channel</li></ul></li> <li>Loss: <code>0.9*BCE + 0.1*Dice</code></li> <li>Data Augmentation</li> </ul> <pre><code># NOTE: the concrete parameters were lost from this snippet; "..." marks them.
albu.HorizontalFlip(p=...),
albu.VerticalFlip(p=...),
albu.ShiftScaleRotate(p=..., rotate_limit=...),
albu.RandomResizedCrop(p=..., scale=[..., ...], height=..., width=...),
</code></pre> <ul> <li>EMA(decay=0.9999)</li> </ul> <h1>Postprocessing</h1> <p>From the time taken for inference, the percentage of empty masks in the entire test set was found to be about the same as in the validation set (about 30%). <br> Therefore, we used the validation set to optimize the percentile threshold for both the classification part and the segmentation part. <br> As a result, the percentile threshold is better than the optimized fixed threshold.
</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1745801%2F48a3c5b9e9ed108faaed1514dac63485%2F2023-08-10%2017.20.42.png?generation=1691664304846451&amp;alt=media" alt=""></p> <h1>Not Works</h1> <ul> <li>pseudo label・mean teacher</li> <li>2D pretrain -&gt; 2.5D finetuning<ul> <li>unstable</li></ul></li> <li>more augmentations</li> <li>mixup・cutmix・label smoothing</li> <li>TTA</li> <li>efficientnet</li> </ul> <h1>Training Code and Inference Notebook</h1> <p>We have released the training code and the inference notebook for the best submission.</p> <ul> <li>training code (2.5D model part): <a href="https://github.com/tattaka/google-research-identify-contrails-reduce-global-warming" target="_blank">https://github.com/tattaka/google-research-identify-contrails-reduce-global-warming</a></li> <li>training code (2D model part): <a href="https://github.com/tawatawara/kaggle-google-research-identify-contrails-reduce-global-warming" target="_blank">https://github.com/tawatawara/kaggle-google-research-identify-contrails-reduce-global-warming</a></li> <li>inference code: <a href="https://www.kaggle.com/code/tattaka/contrail-submission-ensemble?scriptVersionId=139455030" target="_blank">https://www.kaggle.com/code/tattaka/contrail-submission-ensemble?scriptVersionId=139455030</a></li> </ul> <p>We did not add a 2D model to the binary classification model; use it as a variation of the segmentation model. The classification accuracy is about 0.95.</p> <p>Yes, we plan to release the code for both learning and inference.<br> We are in the process of organizing it, so please be patient.</p> <p>Thank you and congratulations to you too on your solo win. I will read your solution carefully.</p> <p>This may be due to the limited number of suitable backbones for training in my experiment. Our 2.5D UNet with resnetrs101 resulted in a gain of 0.01 compared to the 2D UNet.</p>
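<p>To illustrate the percentile-threshold post-processing described above, a minimal sketch could look like this (the array names and the candidate grid are assumptions; predictions are assumed to be stacked into single arrays):</p> <pre><code>import numpy as np


def percentile_threshold(val_probs, val_masks, test_probs,
                         candidates=np.linspace(0.0005, 0.005, 19)):
    """Pick the positive-pixel fraction on validation, then apply it as a quantile to the test set."""

    def global_dice(pred, gt, eps=1e-6):
        inter = (pred * gt).sum()
        return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

    scores = []
    for frac in candidates:
        thr = np.quantile(val_probs, 1.0 - frac)
        scores.append(global_dice((val_probs &gt;= thr).astype(np.float32), val_masks))
    best_frac = candidates[int(np.argmax(scores))]

    # apply the same *fraction* (not the fixed threshold value) to the test predictions
    test_thr = np.quantile(test_probs, 1.0 - best_frac)
    return (test_probs &gt;= test_thr).astype(np.uint8), best_frac
</code></pre>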
Google Research - Identify Contrails to Reduce Global Warming
7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Summary:</h1> <ul> <li>512x512 input resolution, single-frame models <strong>(no 2.5d segmentation, no output sequence fusion)</strong>;</li> <li>Non-binary targets (average of human_individual_masks.npy files);</li> <li>Augmentations: Rotate90 + Flip + ShiftScaleRotate;</li> <li><strong>Orthogonalized 9-channels input color space instead of ASH-RGB;</strong></li> <li>Weighted ensemble of 5 UNet+EfficientNet models, diversified by loss and encoder;</li> <li>TTA x8 mean.</li> </ul> <h3>Input preprocessing:</h3> <p>There was a lot of evidence that higher resolution helps, and after some experiments were conducted I decided that 512x512 input resolution seemed a sweet spot between quality and resources consumption.</p> <p>I haven't managed to detect target pixel shift that other participants are reporting about, but noticed, that something is wrong with seemingly "rotation-agnostic" data domain, because any augmentations were making smaller models (especially if trained on 256x256 resolution) perform worse.<br> But the same augmentations in combination with larger models/resolutions were quite helpful: even adding slight ShiftScaleRotate to already applied [rot90 + flip] increased generalization in terms of both the difference between train-sample/out-of-sample performance and validation score alone. So, I stopped on the combination mentioned above.</p> <p>Grid distortions/CutMix/Heavy affine transforms etc. seemed not appropriate considering the nature of labels and strict guidance for labelers, so I didn't even experiment on that.</p> <h3>Color Space:</h3> <p>The most interesing finding is that ASH-RGB color space actually is not the best one (well, at least in my pipeline), despite the fact that it was used to annotate the data, which is still unclear to me and quite confusing.<br> Firstly I decided that using different color spaces can help me to make final ensemble more diversified and robust, so I tried several combinations, including the original 9-channels band_{08-16} spectrum (which gives ~the same quality as ASH-RGB or even worse) and found that by orthogonalizing (and then std. scaling) this spectrum (e.g. 
using singular value decomposition to previously gathered stats for pixel values from the train data), one can squeeze about 0.007-0.008 global dice score "for free":</p> <pre><code>def transform(image, components=list(range())):
    """
    image is the mean-std scaled input image in the original spectrum, shape (H, W, C=)
    components is the list of singular values to preserve, all of them by default
    """
    # U, S, V^T = scipy.linalg.svd((features - features.mean(axis=)) / features.std(axis=), full_matrices=False)
    # where the features variable is a numpy matrix of shape (H * W * #number_of_train_frames, )
    # representing a train sample of -channels pixel values
    # H = W = / , since [, ] spatial slices of images were used to save RAM
    V = np.array([[ , , -, - , -, , -, , -],
                  [ , , , , , - , , -, ],
                  [ , , , , , , -, , -],
                  [ , -, , -, , , , -, -],
                  [ , -, -, , , -, -, , ],
                  [ , -, , -, , , , , ],
                  [ , -, , -, -, - , -, , -],
                  [ , -, , -, -, -, - , -, ],
                  [ , -, , , - , , , -, -]]).astype(np.float32)
    # to normalize final model input:
    std = np.array([ , , , , , , , , ]).astype(np.float32)
    output = (np.dot(image, V[:, components]) / std[components]).astype(np.float32)
    return output
</code></pre> <p>Since I discovered this, I've been using only this color space for all my models.</p> <h3>Models:</h3> <p>The more complexity the decoder had (UNet++/DeepLabV3/UperNet etc.), the worse results I obtained, so only UNet models were used in the final ensemble. EfficientNet encoders performed significantly better than ResNeXt/ResNeSt/ConvNeXt (the last one especially: after a series of experiments with different encoders/custom dilations for the first layers I came up with a hypothesis that lower resolution of the first feature map before the first skip connection to the decoder leads to lower out-of-sample scores, which sounds convincing given the requirement for high pixel-level accuracy).</p> <h3>Losses:</h3> <p>Due to the lack of time and resources, I haven't investigated this matter properly, especially since the results for lighter 256x256 experiments don't match the results for heavier 512x512 ones. But overall, BCE alone performed better than any superposition of BCE+Dice or BCE+Focal. 
Nevertheless, a small additive infusion of Dice loss into BCE can be helpful: although it hurts the model's own global dice score, it makes the ensemble with BCE-trained models better.<br> Increasing pos_weight and giving higher BCE weights to pixels closer to the label borders makes models converge faster, but to ~the same score (pixel weighting mask example below):<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F938698%2F43526e6712c6845e3ff10ad11a322f62%2FScreenshot_20230810_180025.png?generation=1691690381689822&amp;alt=media" alt=""></p> <h3>Optimization:</h3> <p>AdamW, lr schedule: 1 epoch warmup from 2e-4 to 5e-4, CosineAnnealing till the end.<br> All final models were trained for 50 or 64 epochs with pytorch-lightning using bf16 mixed precision.</p> <h3>Final submission:</h3> <p>I didn't have much time to train models for building a proper ensemble at the end of the competition, so I had to improvise and include the most successful models trained during earlier experiments (whose checkpoints were not lost), which means that the final mix is a bit awkward and all "survived" models were trained on the default train/val split (no checkpoints trained on n-fold CV splits):</p> <table> <thead> <tr> <th>weight*</th> <th>encoder</th> <th>loss</th> <th>augs</th> <th>validation folder only</th> <th>LB (private)</th> </tr> </thead> <tbody> <tr> <td>0.45</td> <td>efficientnet-b7 (smp)</td> <td>BCE**</td> <td>Rot90 + Flip + ShiftScaleRotate</td> <td>0.70030</td> <td>0.70755</td> </tr> <tr> <td>0.27</td> <td>efficientnet-b7 (smp)</td> <td>0.98 BCE + 0.02 DICE</td> <td>Rot90 + Flip + ShiftScaleRotate</td> <td>0.69794</td> <td>-</td> </tr> <tr> <td>0.12</td> <td>efficientnet-b7 (smp)</td> <td>BCE**</td> <td>Rot90 + Flip</td> <td>0.69524</td> <td>0.69823</td> </tr> <tr> <td>0.08</td> <td>tf_efficientnet_b7 (timm)</td> <td>BCE**</td> <td>Rot90 + Flip + ShiftScaleRotate</td> <td>0.67948</td> <td>0.67705</td> </tr> <tr> <td>0.08</td> <td>tf_efficientnet_b8 (timm)</td> <td>BCE**</td> <td>Rot90 + Flip + ShiftScaleRotate</td> <td>0.68611</td> <td>0.69415</td> </tr> <tr> <td></td> <td><strong>ensemble</strong></td> <td></td> <td></td> <td><strong>0.70749</strong></td> <td><strong>0.71337</strong></td> </tr> </tbody> </table> <p>All thresholds were tuned on the validation folder in logit space rather than in probability space; the weighted sum of predictions was also calculated in logit space before transforming to probabilities.</p> <ul> <li>* Tuned manually on the validation folder;</li> <li>** All these experiments have slightly different pixel weighting parameters in the BCE loss.</li> </ul> <p>Congratulations to you too!<br> I'm still not sure about this channels trick: maybe the selected optimization scheme is just not good for ash data, or the mentioned pixel shift is somehow related to the decomposed components of the spectrum.<br> Dunno about private scores, I tried not to submit much to avoid overfitting to public: the best (on private) submission is 0.007 worse on public than the second one that I selected (a mix of only two models from the set), and they both scored ~the same on local validation, so I didn't expect the gold medal at all</p>
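<p>A minimal sketch of the logit-space blending described in the final-submission section (illustrative only; the weights follow the table above, everything else is a placeholder):</p> <pre><code>import torch

def blend_in_logit_space(logits_list, weights, logit_threshold):
    """Weighted sum of raw model outputs (logits), thresholded before any sigmoid.
    `logit_threshold` would be tuned on the validation folder."""
    weights = torch.tensor(weights, dtype=torch.float32)
    weights = weights / weights.sum()
    fused = sum(w * l for w, l in zip(weights, logits_list))
    return (fused &gt; logit_threshold).long()

# example weights from the table: [0.45, 0.27, 0.12, 0.08, 0.08]
</code></pre>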
Google Research - Identify Contrails to Reduce Global Warming
8th place solution (OneFormer works)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1><strong>Summary</strong></h1> <ul> <li>Models:&nbsp;<strong>OneFormer</strong>, effnet, MaxViT, resnetrs, nfnetf5</li> <li>Train on <strong>soft labels and pseudo labels</strong></li> <li>Optimizing Ensemble Weights</li> <li>Loss: BCE</li> <li>Only random image cropping, without TTA and without mask shift</li> </ul> <h1><strong>Introduction</strong></h1> <p>Our team would like to thank the organizers and Kaggle for making this competition possible. Also, I want to express my gratitude to my outstanding teammates&nbsp;<a href="https://www.kaggle.com/cnumber" target="_blank">@cnumber</a> and&nbsp;<a href="https://www.kaggle.com/yukiokumura1" target="_blank">@yukiokumura1</a> for their incredible contribution toward our final result.</p> <h1><strong>Details</strong></h1> <p>Our main strategy is the adoption and optimization of various models and the use of soft and pseudo labels.</p> <ul> <li><strong>Models:</strong></li> </ul> <p>At the beginning of the competition, we mainly experimented with a combination of smp and timm, but from the middle of the competition, we also started using OneFormer.</p> <p>OneFormer demonstrated the best results. <br> <strong>private lb score of a single oneformer (dinat-l): 0.70204</strong><br> <strong>CV score (holdout dice): effnet b7: 0.671, oneformer (dinat-l): 0.693</strong>.</p> <p>Model resolutions (inference settings) are below:&nbsp;</p> <table> <thead> <tr> <th>OneFormer</th> <th>effnetb7, b8</th> <th>MaxViT-Tiny, Base</th> <th>resnetrs</th> <th>nfnetf5</th> </tr> </thead> <tbody> <tr> <td>1024</td> <td>640</td> <td>512</td> <td>512</td> <td>640</td> </tr> </tbody> </table> <ul> <li><strong>Data: Soft and Pseudo Labels:</strong></li> </ul> <p>We used soft labels (the average of all annotator labels) during training. We also introduced pseudo labels for some models. When both are used together, training is performed randomly on images from the 0th through 7th time frame, with the soft label used only for the 4th time frame and pseudo labels for all other cases.</p> <ul> <li><strong>Optimizing Ensemble Weights:</strong></li> </ul> <p>To maximize the performance of ensembling, we incorporated optimization using Optuna within the CV environment. This facilitated the efficient verification of the best model combinations.</p> <h1>Open points for consideration</h1> <ul> <li><strong>Interplay Between a Strong Backbone and Pseudo Labels:</strong></li> </ul> <p>The impact of using pseudo labels: <strong>when trained with pseudo labels*, effnet b7: 0.685, oneformer (dinat-l): 0.695</strong>. *The average of the models trained on the holdout is used as the pseudo label. </p> <p>However, the enhancements were relatively modest with OneFormer.<br> One hypothesis suggests that pseudo labels might bring about a knowledge distillation effect. The impact of this might reach a limit when the backbone is of a certain strength.</p> <p>Thank you for commenting!</p> <blockquote> <ol> <li>Optuna ensemble optimization? 
How did you perform this optimization (what was your objective function, what search method did you use)?</li> </ol> </blockquote> <p>We conducted optimization calculations by adjusting the weights and thresholds for each model to maximize the global dice.</p> <blockquote> <ol> <li>OneFormer training process - which repo did you use for training? How many epochs did you train it? Which optimizer, lr scheduler? It is not often (for me) to see OneFormer fine tuning process during Kaggle competition (certainly I could be wrong).</li> </ol> </blockquote> <p>We referred to the official repository and made some custom modifications to it. By the way, my teammate, c-number, pulled an all-nighter to make this happen.</p>
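<p>For a concrete picture of the Optuna weight/threshold search discussed above, here is a minimal sketch whose objective is the global dice on held-out predictions (names, ranges, and trial counts are illustrative, not the team's exact code):</p> <pre><code>import numpy as np
import optuna

def global_dice(pred, gt, eps=1e-7):
    inter = (pred * gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def objective(trial, probs_per_model, gt):
    # probs_per_model: list of (N, H, W) holdout probabilities, one entry per model
    w = np.array([trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(len(probs_per_model))])
    w = w / (w.sum() + 1e-7)
    thr = trial.suggest_float("threshold", 0.1, 0.9)
    blend = sum(wi * p for wi, p in zip(w, probs_per_model))
    return global_dice((blend &gt; thr).astype(np.float32), gt)

# study = optuna.create_study(direction="maximize")
# study.optimize(lambda t: objective(t, probs_per_model, gt), n_trials=200)
</code></pre>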
Google Research - Identify Contrails to Reduce Global Warming
92nd Place Solution for the Google Research - Identify Contrails to Reduce Global Warming Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Welcome! Congratulations to everyone on finishing this great competition. It was an honor for us to compete with you all. It makes me happy to see that AI solutions can be beneficial for the environment and nature. </p> <p>Our team ( <a href="https://www.kaggle.com/slavabarkov" target="_blank">@slavabarkov</a> and me) would like to present our approach. We didn't get a SOTA result, but I believe that we gained some insights. </p> <h1>Context section</h1> <ul> <li>Business context: <a href="http://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming" target="_blank">www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/data" target="_blank">https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/data</a></li> </ul> <h1>Overview of the Approach</h1> <p>Our best solution was an ensemble of several models with equal weights: efficientnet-b5, b6, b7, b8 and eca_nfnet_l2 with the Unet decoder. All models were trained on 512x512 images (Ash Color images, shout-out to <a href="https://www.kaggle.com/shashwatraman" target="_blank">@shashwatraman</a>) and DICE loss. We used threshold=0.3 for the final pixel predictions. It got 0.685 on the public lb and 0.681 on the private. </p> <p>Our validation score correlated rather nicely with the public lb. We used StratifiedKFold validation (4 folds), stratified by the mask size. I'll describe our approach more precisely in the next paragraph.</p> <h1>Details of the submission</h1> <h3>What was special about the submission</h3> <p>We had a straightforward, robust approach that has nice performance without using SOTA models. We developed a thoughtful validation strategy that can be used in future competitions. </p> <p>We also had ideas that we did not manage to get working (we joined the competition in the last week). However, according to the winning solutions, we moved in the right direction.</p> <h3>Validation strategy</h3> <p>I didn't like the idea of random folds, since folds might contain images of different complexity levels. Some folds might contain too many empty masks or too-big masks. I thought that models might struggle with predicting empty masks, for example. Therefore, it was a nice approach to divide images equally across folds based on their mask size. One just needs to sum the total number of mask pixels. Then you create bins to label ranges of mask sizes. For empty masks we created a separate bin. Then you just stratify the data based on these labels. 
</p> <p>Code example:</p> <pre><code>skf = StratifiedKFold(
    n_splits=config[][],
    shuffle=,
    random_state=config[][],
)

df[] = pd.qcut(
    df[df[] &gt; ][],
    q=config[][] - ,
    labels=,
)
df.loc[df[] == , ] = -

for fold_number, (train_index, val_index) in enumerate(
    skf.split(df, df[])
):
    df.loc[val_index, ] = (fold_number)

df.drop(columns=[], inplace=)
</code></pre> <p>(A cleaned-up sketch of this fold assignment is shown at the end of this write-up.)</p> <h3>Other details about the models</h3> <p>For segmentation models we used the smp torch library. We tried different backbones from the timm library. However, we found that the efficientnet family was the best; resnet was not successful in our case. We also liked the nfnet models: they were faster and could be trained on bigger image sizes. However, 512x512 worked the best. We also found that model hyperparameters were not that important: the lr and scheduler did not matter too much. </p> <p>The Unet decoder was also the best one; others were worse or showed almost the same performance. </p> <h3>Ideas that didn't work in our case (but they can)</h3> <h3>Pretraining on the synthetic dataset</h3> <p>It was a really cool idea to generate similar background images with generative networks and draw masks on them to pretrain the models. I think it is a very promising idea for future work. In our case it didn't work well: trained models showed nice performance on these images, but this didn't improve the quality on the competition dataset. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2590337%2Fc04e6ac0c033d17bac52b6405ea828ee%2Fphoto_2023-08-05_00-49-53.jpg?generation=1692294981919634&amp;alt=media" alt=""></p> <h3>Training on the soft labels</h3> <p>Instead of using the ground truth masks, we can average the individual annotators' masks. We preprocessed the data but then just didn't have the time to test it. But this can work nicely, since there are controversial images in the dataset.</p> <h3>Pseudo labeling other frames</h3> <p>That's a great approach to make the data bigger. We did this for the 2, 3, 5, 6 frames and trained models with the DICE loss. It was a total mistake. We trained on binary masks, but we should have used BCE loss and soft labels in this case. Another mistake was that we used the whole ensemble to label this data, so we could not really validate the models on previous folds. Our validation score improved, but the lb did not. I believe it is due to the hard labels and knowledge (data) leakage from the models trained on other folds. </p> <h3>Training the additional classifier</h3> <p>I wanted to train a classifier model just to predict whether the image has an empty mask or not. I was really concerned about the empty masks, since wrong predictions on these images incur a high error. I tried different backbones and got about 0.9 accuracy, but it didn't work well in the pipeline. We didn't have time to improve this step.</p> <h3>Training code</h3> <p>We created a python project with loggers (wandb), which runs from the command line with a YAML config. 
You can find it on the github: <a href="https://github.com/25icecreamflavors/contrails" target="_blank">https://github.com/25icecreamflavors/contrails</a></p> <h1>Sources</h1> <ul> <li><a href="https://www.kaggle.com/code/shashwatraman/contrails-dataset-ash-color" target="_blank">https://www.kaggle.com/code/shashwatraman/contrails-dataset-ash-color</a></li> <li><a href="https://www.kaggle.com/competitions/understanding_cloud_organization/discussion/118080" target="_blank">https://www.kaggle.com/competitions/understanding_cloud_organization/discussion/118080</a></li> <li><a href="https://github.com/25icecreamflavors/contrails" target="_blank">https://github.com/25icecreamflavors/contrails</a></li> </ul>
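<p>As referenced earlier, here is a cleaned-up, hedged sketch of the mask-size stratification (the bin count, column names, and seed are illustrative, not the exact configuration used):</p> <pre><code>import pandas as pd
from sklearn.model_selection import StratifiedKFold

def add_folds(df, n_splits=4, n_bins=10, seed=42):
    """Stratify images by total mask size; empty masks get their own bin."""
    df = df.copy()
    df["bin"] = -1  # separate bin for empty masks
    nonzero = df["mask_pixels"] &gt; 0
    df.loc[nonzero, "bin"] = pd.qcut(
        df.loc[nonzero, "mask_pixels"], q=n_bins, labels=False, duplicates="drop"
    )
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    df["fold"] = -1
    for fold, (_, val_idx) in enumerate(skf.split(df, df["bin"])):
        df.loc[df.index[val_idx], "fold"] = fold
    return df.drop(columns=["bin"])
</code></pre>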
Google Research - Identify Contrails to Reduce Global Warming
9th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's about the OpenCV coordinate system.</p> <p>In short, the rule is <code>dst+0.5 = M(src+0.5)</code>. After calibration, the <code>1</code> becomes <code>1.5</code>.</p> <p>Thank you for the nice picture. I think it is either a pixel missing on top&amp;left or an extra pixel on bottom&amp;right. The idea is the same anyway.</p> <blockquote> <p>Can you share your validation strategy in this competition please? </p> </blockquote> <p>I use the train/validation folders as the train/val sets.</p> <blockquote> <p>did you used it in final ensemble for some models</p> </blockquote> <p>I tried pseudo-labeling on smaller models at 256 input. Not used in the final submission. The unannotated data is 7x the annotated data, and I did not have time to do pseudo-labeling using larger models at 512 input.</p> <blockquote> <p>training on the individual annotation</p> </blockquote> <p>Use <code>human_individual_masks</code> instead of <code>human_pixel_masks</code>.</p> <p>Thanks to the organizers for hosting the competition and congrats to all the winners.</p> <h1>The key finding</h1> <p>Many people have likely noticed that flip/rot90 augmentation doesn't work well on this dataset.</p> <p>My guess is that during the conversion of polygon annotations to binary masks, the leftmost point is not included while the rightmost point is, causing a misalignment between the image and the mask.</p> <ul> <li>original<ul> <li>img: [0, 128, 128, 3, 4, 5, 6]</li> <li>mask: [0, 255a, 255b, 255pad, 0, 0, 0]</li></ul></li> <li>normal flip that does not work<ul> <li>img: [6, 5, 4, 3, 128, 128, 0]</li> <li>mask: [0, 0, 0, 255pad, 255b, 255a, 0]</li></ul></li> <li>consistent flip should be<ul> <li>img: [6, 5, 4, 3, 128, 128, 0]</li> <li>mask: [0, 0, 0, 0, 255b, 255a, 255pad]</li></ul></li> </ul> <p>I tried two solutions to address this issue.</p> <h3>solution 1, train with misaligned (img, mask)</h3> <ul> <li>training<ul> <li>img: normal flip</li> <li>mask: consistent flip</li></ul></li> <li>inference<ul> <li>flip tta: normal img flip -&gt; predict -&gt; consistent mask flip</li></ul></li> </ul> <p>Training/test time augmentation is a bit tricky in this case (a sketch of such a consistent flip is shown at the end of this post).</p> <h3>solution 2, train with aligned (img, mask)</h3> <ul> <li>training<ul> <li>shift img by +0.5 pixel</li></ul></li> <li>inference<ul> <li>shift img by +0.5 pixel</li></ul></li> </ul> <p>Both solutions worked. For simplicity, I chose solution 2. I use the following code to apply resize512&amp;shift1:</p> <pre><code>import cv2
import numpy as np

img_affine_matrix = np.array([[2.0, 0.0, 1.5], [0.0, 2.0, 1.5]], dtype=np.float64)
img = cv2.warpAffine(
    img,
    img_affine_matrix,
    (512, 512),
    flags=cv2.INTER_LINEAR,
    borderMode=cv2.BORDER_CONSTANT,
    borderValue=0,
)
</code></pre> <p>Note that you need to calibrate M for warpAffine. So it's 1.5 in the final affine matrix.</p> <pre><code>def calibrate(M):
    # dst + 0.5 = M (src + 0.5)  =&gt;  fold the half-pixel offsets into the translation column
    M[:, 2] += M[:, 0] * 0.5 + M[:, 1] * 0.5 - 0.5
    return M
</code></pre> <h1>Models</h1> <p>My submission notebook is public <a href="https://www.kaggle.com/code/tascj0/contrail-submit?scriptVersionId=139432132" target="_blank">here</a>.</p> <p>I joined the competition quite late and only briefly explored other bands and 2.5D before giving up on them. In the end, I only used false_color images and UNet models.</p> <ul> <li>Strong backbones were very helpful.</li> <li>Training with all individual annotations is helpful.</li> </ul> <p>I had some success with pseudo-labeling on a small model, but unfortunately, I didn't have time to apply it to larger models.</p> <p>Congratulations on achieving 2nd place! Training a strong model without flip/rot90 augmentation is challenging, and it's incredible that you managed to achieve it.</p> <p>Yes, the model can learn such a pattern as long as you don't introduce inconsistency by adding flip/rot90. In fact, you can perform flip/rot90 augmentation while keeping the pattern (my solution 1).</p> <blockquote> <p>Why it's not "the leftmost point is included while the rightmost point isn't"?</p> </blockquote> <p>It could be. It could also be that both are included but the rightmost coordinates are +1.<br> I tried shifting the img by -0.5 and 0.5 and it turned out 0.5 is correct.</p> <blockquote> <p>How did you detect this?</p> </blockquote> <p>What do we know?</p> <ol> <li>Annotators draw polygons.</li> <li>The competition provides binary masks.</li> <li>Flip augmentation doesn't work.</li> </ol> <p>It's easy to infer that there might be issues with the implementation of the polygon-to-mask conversion. It's also possible that there were some problems with the annotation software in handling the coordinates.</p> <ol> <li>When collating a batch, stack all individual masks and keep the corresponding img batch indices.</li> </ol> <pre><code>def (samples):
    out = ()
    out[] = torch.([s[] for s in samples])
    masks = [s[] for s in samples]
    batch_inds = []
    for idx, mask in enumerate(masks):
        for _ in (mask.()):
            batch_inds.(idx)
    out[] = torch.(masks, dim=).()
    out[] = torch.(batch_inds)
    return out
</code></pre> <ol> <li>In the training loop</li> </ol> <pre><code>img =
mask =
batch_inds =
out = model(img)
out = out[batch_inds]
loss = loss_fn(out, mask)
</code></pre> <blockquote> <p>1. If the mask is shifted by 0.5 pixels toward the bottom-right, should the mask then be shifted by 1 pixel to the right and downward when doing horizontal and vertical flips, respectively?</p> </blockquote> <p>I believe the misalignment is not caused by some 0.5 pixel shift; it is caused by the one extra pixel on the bottom-right. A shift of 0.5 pixels can bring back symmetry.</p> <p>Without symmetry, a naive implementation of flip/rot90 augmentation would introduce inconsistency, which confuses the model in training.</p> <p>As illustrated in the main post, there are two solutions:</p> <ol> <li>Keep the misalignment (keep the asymmetry), and address the inconsistency.</li> <li>Fix the misalignment (bring back symmetry)</li> </ol> <blockquote> <p>2. In your experiments, how much did this 0.5-pixel-shift finding improve the score?</p> </blockquote> <p>I believe the shift itself does not help. The gain is mainly from flip/rot90 augmentation and TTA. I suggest trying it yourself.</p> <blockquote> <p>3. How did you decide on the 5 backbones in your notebook?</p> </blockquote> <p>I joined the competition late (6 days before the deadline). In the final stage I simply picked several heavy backbones that could finish training before the deadline. I did not explore much here.</p> <p>I rarely adjust training parameters other than the learning rate. I don't have much experience with automated parameter search. 
The benefits of tuning parameters might not be significant after ensembling; in my opinion, the cost-effectiveness might not be high.</p>
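<p>As referenced under solution 1 above, here is a minimal sketch of what a "consistent" horizontal flip can look like (an illustration of the idea, not the exact code; np.roll wraps at the border, which a zero-padded shift would avoid):</p> <pre><code>import numpy as np

def consistent_hflip(img, mask):
    """Flip the image normally, but shift the flipped mask by one pixel so the
    one-extra-pixel bias stays on the same (right) side of the annotation."""
    img = img[:, ::-1].copy()
    mask = np.roll(mask[:, ::-1], 1, axis=1)
    return img, mask

# Toy example from the post (as binary masks):
# mask [0,1,1,1,0,0,0] -&gt; normal flip [0,0,0,1,1,1,0] -&gt; consistent flip [0,0,0,0,1,1,1]
</code></pre>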
Google Research - Identify Contrails to Reduce Global Warming
Kaggle solution write-up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><pre><code>import pandas as pd
import numpy as np

df1 = pd.read_csv("/kaggle/input/google-research-identify-contrails-reduce-global-warming/sample_submission.csv")
df1

df1.to_csv("submission.csv", index=False)
</code></pre>
Google Research Football with Manchester City F.C.
12th Place Solution(ML approach)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research Football with Manchester City F.C. <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I will represent my team <strong>Ken+digimagi</strong> in this post to publish some information on our solution. </p> <p>Before I go deeper, I wanted to first thank the hosts, Kaggle, Google Research, and Manchester City and the community as a whole. I spent 132 hours on this competition. I am glad I did this competition because not only I learned a lot, but also I had a very exciting time.</p> <p>Credit also to these two notebooks where a lot of our inspiration came from:<br> my teammate <strong>Ken Miller</strong> <a href="https://www.kaggle.com/mlconsult" target="_blank">@mlconsult</a> <a href="https://www.kaggle.com/mlconsult/1149-ish-bot-rl-approximation" target="_blank">https://www.kaggle.com/mlconsult/1149-ish-bot-rl-approximation</a><br> <strong>Tom Kelly</strong> <a href="https://www.kaggle.com/tomkelly0" target="_blank">@tomkelly0</a> <a href="https://www.kaggle.com/tomkelly0/lightgbm-10mins-940" target="_blank">https://www.kaggle.com/tomkelly0/lightgbm-10mins-940</a></p> <p>We would also like to thank <strong>Wekick</strong> and <strong>SaltyFish</strong> for their continued lead in this competition and their influence on our solution.</p> <p>Also, after the submissions were closed, there was a fierce battle going on with <strong>kangaroo</strong>, <strong>xin &amp; willson &amp; pigzz</strong> for the last one gold medal. It disturbed my restful sleep, but at the same time, it made me the most excited of the year.</p> <h1>Solution</h1> <p>Our solution itself is not too innovative and is simple. This competition can be approached in a variety of ways: rule-based, machine learning, and reinforcement learning. When I decided to tackle this competition in earnest, there were two teams Wekick and SaltyFish that dominated the other teams. Our starting point was the question of how strong an agent could we make if we could mimic the strong agents of these teams. We reduced it to a multi-class classification machine learning problem of predicting the actions of these strong agents.</p> <p>Here(<a href="https://www.kaggle.com/digimagi/12th-place-solution-ml-approach-lb-1395" target="_blank">https://www.kaggle.com/digimagi/12th-place-solution-ml-approach-lb-1395</a>) is our end-to-end notebook(except collecting episodes) which learned from WeKick:18239675, scores LB about 1395.</p> <h1>Algorithm</h1> <p>Gradient boosting trees (LightGBM), one model per <code>ball_owned_player</code>.<br> Performed hyperparameter tuning with Optuna.<br> The average ensemble, with the hyperparameter changed, raised the score somewhat.<br> We had been working on K-fold model for a while, but it took a long time to learn and we were able to get a certain level of accuracy without using K-fold, so we stopped using it halfway through.</p> <p>Here are the score trends for our model.</p> <table> <thead> <tr> <th>date</th> <th>score</th> <th>accuracy</th> <th>number of features</th> </tr> </thead> <tbody> <tr> <td>Nov. 16</td> <td>600</td> <td>0.65(with a few episodes)</td> <td>147</td> </tr> <tr> <td>Nov. 22</td> <td>1070~1150</td> <td>0.60</td> <td>482</td> </tr> <tr> <td>Nov. 
25</td> <td>1300~1320</td> <td>0.75~0.77</td> <td>744</td> </tr> <tr> <td>Nov. 29</td> <td>1380~1400</td> <td>0.79~0.81</td> <td>1154</td> </tr> </tbody> </table> <h1>Data</h1> <p>The following submissionId agent episodes were used as candidates.</p> <ul> <li>SaltyFish:18010220</li> <li>SaltyFish:18211783</li> <li>WeKick:18220693</li> <li>WeKick:18127517</li> <li>WeKick:18239675</li> </ul> <p>In the beginning, we used episodes from SaltyFish:18010220, which has a large number of matches, but we were not able to achieve high accuracy. However, we were able to achieve relatively high accuracy with the WeKick:18127517 or WeKick:18239675, so we focused on these agent's episodes. We tried combining them, excluding losing games.</p> <h1>Features</h1> <p>We generated 1154 features from the observations.<br> Some of them are directly from raw observations, but we used the following paper (in Japanese) as a reference to generate features based on the football domain.<br> <a href="https://www.i-juse.co.jp/statistics/xdata/sportscompe06-sem_special_award.pdf" target="_blank">https://www.i-juse.co.jp/statistics/xdata/sportscompe06-sem_special_award.pdf</a><br> <a href="https://www.ism.ac.jp/editsec/toukei/pdf/65-2-287.pdf" target="_blank">https://www.ism.ac.jp/editsec/toukei/pdf/65-2-287.pdf</a><br> <a href="https://sites.google.com/site/takumanarizuka/home/publications?authuser=0" target="_blank">https://sites.google.com/site/takumanarizuka/home/publications?authuser=0</a></p> <p>The features we created are mainly as follows:</p> <ul> <li>Distance and direction from the active player or ball to each player (polar coordinates)</li> <li>Distance and direction of each player from the main location (center position (0,0), goal line (1,0),(-1,0)) (polar coordinates)</li> <li>Distance and direction of each player to the center of gravity as a team (polar coordinates)</li> <li>Maximum y-coordinate, minimum y-coordinate and their widths for the team excluding the goalkeeper</li> <li>Maximum x-coordinate, maximum y-coordinate and width of the team</li> <li>The x-coordinate of the offside line, and whether or not an offside position has occurred</li> <li>The angle between the two goal posts and the ball</li> <li>The number of opponent/allied players in front of/behind the ball</li> <li>The number of opponent/allied players within a radius of <em>n</em> meters of the active player/ball</li> </ul> <p>We also generated features for players/ball velocity and acceleration, but since the accuracy did not improve that much, we excluded them in the process to simplify the implementation.</p> <h1>Heuristics</h1> <p>Because of the bias in the frequency of occurrence of actions, we weighted the data according to the logarithm of the frequency of occurrence.</p> <p>Also, many of the actions were the same as they were one step before, but we thought that the actions that were different from one step before (e.g., shooting, passing) were more important, so we gave them more weight.</p> <h1>Implementation ingenuity</h1> <p>We created the features without pandas because creating features with pandas seemed to be trapped by the 0.5 second per step limit, and there were times when we could not train with kaggle notebook due to lack of memory.</p> <p>Since the agent has a 100MB capacity limit, the learning rate of LightGBM cannot be very small, so we mainly used 0.1.</p> <p>Since the accuracy was in constant proportion to the score, I focused on improving the accuracy without having it play against other bots.</p> <h1>What we 
could not do due to time constraints</h1> <p>Consideration of actual court and xy coordinates being distorted.<a href="https://www.kaggle.com/c/google-football/discussion/190676" target="_blank">https://www.kaggle.com/c/google-football/discussion/190676</a><br> Proper handling of features when a player is ejected due to a red card.<br> Consideration of action weights just before a good play step like a goal being scored.<br> Ensemble with other models such as neural networks.</p> <p>I welcome any type of questions or comments. Thank you for all.</p>
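<p>To make the weighting heuristics described earlier concrete, here is a small, hedged sketch; reading "weighted according to the logarithm of the frequency" as inverse-log-frequency weighting is an assumption on my part, and the boost value for changed actions is purely illustrative:</p> <pre><code>import numpy as np
import pandas as pd

def make_sample_weights(actions, prev_actions, change_boost=3.0):
    """Down-weight very frequent actions via 1/log(1+count) and up-weight steps
    where the action differs from the previous step (e.g. shots and passes)."""
    actions = np.asarray(actions)
    prev_actions = np.asarray(prev_actions)
    counts = pd.Series(actions).value_counts()
    w = 1.0 / np.log1p(counts.loc[actions].to_numpy())
    w = w * np.where(actions != prev_actions, change_boost, 1.0)
    return w

# lgb.Dataset(X, label=actions, weight=make_sample_weights(actions, prev_actions))
</code></pre>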
Google Research Football with Manchester City F.C.
16th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research Football with Manchester City F.C. <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Update: </p> <ul> <li>I've added a docker image and instructions to reproduce the results in <a href="https://github.com/ceshine/google-football-2020" target="_blank">the Github repo</a>.</li> <li>For those who found this post, please refer to <a href="https://blog.ceshine.net/post/16th-place-google-research-football/" target="_blank">my blog post</a> for the latest information. This thread will not be updated further.</li> </ul> <p>Thanks to Kaggle, Manchester City F.C., and Google Research for this fantastic competition. Working on this competition was the most fun I've had for a while.</p> <p><strong>The tl;dr version of my solution is that I used an MLP model to stochastically imitate WeKick's agents, with some rules to help it navigate in unfamiliar waters.</strong></p> <p>After I got the GCP coupon, I looked at the competition timeline and thought that there is no way I can train a competitive RL agent from scratch in less than two weeks. I had to find some way to cut the training time shorter.</p> <p>Then I found Ken Miller's <a href="https://www.kaggle.com/mlconsult/1149-ish-bot-rl-approximation" target="_blank">RL approximation notebook</a> and learned that the imitation strategy works reasonably well. So I decided to use a similar approach to (pre)train NN models to bootstrap my policy network, with an episode scraper based on Felipe Bivort Haiek's <a href="https://www.kaggle.com/felipebihaiek/google-football-episode-scraper-quick-fix" target="_blank">Google Football Episode Scraper quick fix</a>. Huge thanks to both of you! <a href="https://www.kaggle.com/mlconsult" target="_blank">@mlconsult</a> <a href="https://www.kaggle.com/felipebihaiek" target="_blank">@felipebihaiek</a> </p> <p>After a while, I found a lot of low-hanging fruits by tuning the features and assisting rules. As the model training is restricted to experience replays, no exploration was possible. The trained agent will not know how to act in a vast part of the state space, so setting some rules to guide them back to familiar territories can be quite helpful. I decided that tuning the imitator agents would be a better use of my time than an RL moonshot. Taking the safer bet paid off and gave me two agents with ~1,300 scores on the final day.</p> <p>Some details (after some quick browsing, there seem to be a few teams using similar approaches to mine. I haven't read through them yet, so I apologize if I did not reference already-shared contents properly) :</p> <ul> <li>Episodes: Only episodes in which both agents were high-ranking were used. It was most likely a mistake. It probably made my agents perform poorly when playing against low-ranking agents. A better approach might be a submission-based filter. 
</li> <li>Features: Active-player-centered coordinates; coordinates and velocities of the goalkeepers are kept separately from the rest; distance-sorted coordinates, distances, angles, and velocities for the rest; Sprint and dribble sticky states; ball features; an offside feature(doesn't seem to work as expected).</li> <li>Stochastic actions: I reduced the temperature of the softmax function (by doubling the logits). This seems to work better than deterministic/greedy actions.</li> <li>Rules:<ul> <li>Using direction sticky states in features leads to overfitting and erratic behaviors, so I removed them. The downside is having the player high-passing the ball back to the goalie in free-kicks and resulting in own-goals. So I instructed the player to simply go right in free-kicks.</li> <li>The model controls only the normal and throw-ins game modes. Training the model to play other modes seem to be a waste of time.</li> <li>No dribbling.</li> <li>Sprinting is entirely rule-based. I had found this to work slightly better in the final days as it doesn't waste precious time giving the sprint command in critical moments.</li> <li>Turn the player back when they are taking the ball straight out of the field.</li> <li>The defense is the most tricky part. In early experiments, the models would give absurd commands when the ball controlled by other players is too far from the active player (possibly due to the lack of exploration during training). Therefore, I kept the agents on a tight leash when defending. The models were only allowed to take over when the ball is near enough or in high-passes; the running-towards-the-ball strategy is used otherwise. This approach gives away many easy shot opportunities to the other teams. Only on the last day had I realized that the leash was too tight for the latest models. Loosen the leash gave better results in local tests. Unfortunately, I only had two submissions left at that point, so I could not properly tune this hyper-parameter.</li></ul></li> <li>Local testing: At first, I let my agent play against the easy built-in AI and see if they can consistently beat the AI for more than five goals. Later I've found that the larger goal margin in easy mode doesn't translate to better performance against high-ranking agents, so I switch to the hard built-in AI.</li> </ul> <p>My code is public on Github(<a href="https://github.com/ceshine/google-football-2020" target="_blank">ceshine/google-football-2020</a>). Currently, it lacks proper documentation and environment reproducibility. I'll try to make it better in the coming days. </p>
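<p>As a side note on the stochastic-action trick described above: doubling the logits is equivalent to sampling at temperature 0.5. A minimal sketch (the function is illustrative, not the repository's exact code):</p> <pre><code>import numpy as np

def sample_action(logits, temperature=0.5, rng=np.random.default_rng(0)):
    """Sharpened stochastic policy: dividing logits by T&lt;1 (here 0.5, i.e. doubling
    them) concentrates probability on the top actions but keeps some randomness."""
    z = np.asarray(logits, dtype=np.float64) / temperature
    z -= z.max()  # numerical stability
    p = np.exp(z)
    p /= p.sum()
    return int(rng.choice(len(p), p=p))
</code></pre>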
Google Research Football with Manchester City F.C.
My Human Learning bot (rank 31st)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research Football with Manchester City F.C. <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank Kaggle, Google, and Manchester City for creating this competition. It has been a fun learning experience.</p> <p>My approach is rule-based and strictly human learning. :) I started with the excellent notebook by Yegor Biryukov's <a href="https://www.kaggle.com/yegorbiryukov/gfootball-with-memory-patterns" target="_blank">notebook</a> with the deterministic script from <a href="https://www.kaggle.com/raffaelemorganti/rule-based-way" target="_blank">Raffaele Morganti</a>. My sincere gratitude to both of them for making my participation in this competition much easier. Thanks also should go to all the top teams. They lead the way and inspire all of us to do better.</p> <p>I made these improvements from Yegor's notebook based on what I glean from the replay.</p> <ul> <li>Defensive issues:</li> </ul> <ol> <li>Limit blind Action.Shot up field to goalie only. The reason for this is I found that trying to boot the ball up field without looking at whether opponent players might be blocking is dangerous so I limit it to goalie who mostly has enough room in front of him to clear.</li> <li>For other players, I try to evade the chasing opponents before trying to pass while avoiding going into my own penalty box.</li> <li>Sliding is essential but doing it wildly can get your player sent off so I chose to slide only when the player is near both the ball and the opponent player for two steps. This is all trial-and-error.</li> <li>High ball overhead beyond our last defender leads to many goals so I handle this situation as a special case. The last defender does not chase the ball but chase the highest opponent player instead. This works reasonably well but not foolproof for the case of two-on-two and the ball is going to the second-highest attacker instead of the highest one. I try to fix it only to make it worse so I gave up.</li> </ol> <ul> <li>Offensive issues:</li> </ul> <ol> <li>I notice that quick passing and shooting without taking possession of the ball is the way top players score so I mimic it by calculating whether the active player is going to be the first to the ball. If so, I issue the pass or shot command before the ball arrives. This is not without downside in that if the player turns out not to be first to the ball, he can stray too far from the opponent to prevent him going for goal. Another sticky point is the fact that passing command is delayed. If the player turns the wrong direction after taking possession of the ball, i.e. facing toward our own goal, it can result in an own goal. So, I made quick passing and shooting sticky by repeating it. This is risky so I try to delay making this quick pass/shoot decision as late as possible before the ball reaches the player.</li> <li>When trying to dribble past defenders, I adjust the constant so we don't look at defenders that is a little bit far behind us. This improves the score quite a bit.</li> </ol> <p>Overall, I'm happy with the result. I think I achieve rank 31st only because I had many submissions with different tuning parameters. 
What I thought was a good change turned out not to be that good in the end. The randomness of the game combined with the slowly converging LB score makes tuning very difficult. I wish I had implemented a local LB, but it might not have helped much because of the lack of variety in agents.</p> <p>Here is my <a href="https://www.kaggle.com/huiunited/hui-gfootball-with-memory-patterns-from-yegor" target="_blank">notebook</a>.</p> <p>Thank you, and I hope to see you guys again in the next competition. I will definitely learn and use ML by then. :)</p>
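<p>As an illustration of the cautious sliding rule described earlier (slide only when the player has been near both the ball and an opponent for two steps), here is a hedged sketch; the radii and the observation handling are illustrative, not the notebook's exact values:</p> <pre><code>import numpy as np
from collections import deque

close_history = deque(maxlen=2)  # remembers the last two steps

def dist(a, b):
    return float(np.hypot(a[0] - b[0], a[1] - b[1]))

def should_slide(obs, ball_radius=0.02, opponent_radius=0.04):
    """Slide only if the active player has been close to both the ball and an
    opponent for two consecutive steps."""
    me = obs["left_team"][obs["active"]]
    near = dist(me, obs["ball"][:2]) &lt; ball_radius and any(
        dist(me, opp) &lt; opponent_radius for opp in obs["right_team"]
    )
    close_history.append(near)
    return len(close_history) == 2 and all(close_history)
</code></pre>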
Google Research Football with Manchester City F.C.
SaltyFish: Current 2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research Football with Manchester City F.C. <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It has been a great pleasure to join this football competition on Kaggle! We thank the <strong>Kaggle team</strong>, <strong>Manchester City</strong>, and <strong>Google research</strong> for the effort of making this happen! It seems that the gfootball team launched this competition in early 2020 and was intended to hold the final on ICML. Unfortunately, it is a tough year…… and we hope everybody is okay. Back to the competition, we also wanna thank all the participants, and it is you guys that make this competition wonderful and exciting! Special thanks to <strong>Wekick</strong>, your agents are really impressive and we also learned a lot from you.</p> <h3>Summary</h3> <p>Generally speaking, we train our agents through <strong>reinforcement learning</strong> by integrating <strong>PVE</strong> and <strong>Selfplay</strong>. Under <strong>limited computation resources</strong>, it is hard for the agents to learn from scratch (selfplay) in such a difficult environment (long-time horizon and sparse reward). By first training against the easy bot (difficulty_level = 0.05) and then the hard bot (difficulty_level = 1.0) until convergence, our agents get some clue on offense (e.g., dribbling the ball from backcourt to frontcourt, shooting skill). The selfplay phase then enables the agents to learn defense skills and further enhances the offense skill by learning passing skills. Let's get into some details.</p> <h3>Algorithm</h3> <p>We use distributed IMPALA implemented in our framework <strong>RLEase</strong>. RLEase is a highly efficient reinforcement learning based library developed by <a href="https://fuxi.163.com/en/" target="_blank">Netease Fuxi Lab</a>, which both incorporates state-of-the-art algorithms (such as IMPALA, APEX-rainbow, PPO, VDN, QMIX, etc.) and supports distributed training, models' auto testing, imitation learning and offers an easy API to incorporate new game environments. The framework looks like this:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F939404%2Fa38228598d92a6254426204c2f03b026%2FRLease_struc.png?generation=1607673598249995&amp;alt=media" alt=""></p> <h3>Feature</h3> <p>We first tried the simple115 wrapper that is mostly based on absolute values, which turned out to be lacking generalization capabilities. Then we extended it to the multi-head vector input by adding relative information and grouping related features into one head. Besides, <strong>surgery</strong> is also used to add extra features since training from the very beginning is both costly and not feasible under limited resources, just like what <a href="https://openai.com/projects/five/" target="_blank">OpenAI Five</a> did. 
The figure below shows what the multi-head vector inputs look like, where the blue columns represent the original input and the red columns and texts represent the surgery part.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F939404%2Ffacb024666627b89576cfb62618efe2d%2Fmulti_head_vec.png?generation=1607668154777146&amp;alt=media" alt=""></p> <h3>Network</h3> <p>We use several fully-connected layers. See Figure below:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F939404%2F892538fb2739d7d96dd4059fc8e4be87%2Fnetwork.png?generation=1607667587785648&amp;alt=media" alt=""></p> <h3>Action</h3> <p><br>We use different action sets for the algorithm in different 'ball_owned_team' modes (i.e., 0, 1, and -1). The main purpose is to ignore the unnecessary actions and reduce the exploration space. For example, when one player of our team has the ball, there is no need to choose the SLIDING action. For each 'ball_owned_team' mode, we simply generate a corresponding legal/illegal mask vector to indicate which of the 19 actions can be or cannot be taken. The output of the policy network (i.e., the probability of taking each action) will be updated according to the mask vector. We list the illegal actions in each 'ball_owned_team' mode as follows:<br> &nbsp; &nbsp; <em>ball_owned_team</em> = 0: SLIDING<br> &nbsp; &nbsp; <em>ball_owned_team</em> = 1: LONG_PASS, HIGH_PASS, SHORT_PASS, SHOT, DRIBBLE, STOP_DRIBBLE<br> &nbsp; &nbsp; <em>ball_owned_team</em> = -1: LONG_PASS, HIGH_PASS, SHORT_PASS, SHOT, DRIBBLE, STOP_DRIBBLE, SLIDING<br> <br>Although such a trick can accelerate learning by avoiding unnecessary actions, it also makes our algorithm unable to learn some strong policies. For example, in the last few weeks of this competition, we found that one entry of WeKick had learned the skill of passing the ball between players without stopping the ball. When the ball was flying from the passer to the receiver, the receiver had already chosen the most appropriate position and facing direction for performing the next pass. Once the ball got very close to the receiver (we found that the `ball_owned_team' at that time was still -1), the ball would be passed to the next receiver. This kind of ball-passing tactic really impressed us and we realized that our model cannot pass the ball in the same way. Therefore, in the last week, we trained another model without legal action masks. This no-legal-action model actually has the highest score (1597.5) among all our submitted models at the end of the extra evaluation week. Interestingly, the highest score achieved by our legal-action models is 1597.0. The no-legal-action model successfully learned the ball-passing skill, though not as skillful as WeKick. We expect it will perform better if there are another extra week and more computation resources for training…</p> <h3>Reward</h3> <p>For the PVE training, we use the scoring reward and checkpoint reward. The latter is also provided by the gfootball environment (please refer to the gfootball paper). We found that using the checkpoint rewards is really helpful in the early training stage because such rewards are the most straightforward for encouraging the player to take the ball from the backcourt to the frontcourt and make a goal. At the end of the PVE training phase, our model can beat the hard bot (difficulty_level = 1.0) with a goal difference higher than 10. 
However, the disadvantage of the checkpoint rewards is that the behavior of the player output by the model is always dribbling the ball (maybe with sprint). In fact, the PVE-trained model is not a strong model and the learned policy is very easy to exploit. The only purpose of using the checkpoint rewards is to let our agent get some clue on the offense.<br><br> For selfplay training, we use the scoring reward and abandon checkpoint reward to let our model learn more sophisticated behaviors. Note that the model is based on the PVE-trained model, which is good at dribbling, shooting, and scoring. Therefore, the reward signals are not that sparse in selfplay.<br><br> We also tried different shaping rewards in order to learn models with different behavior styles, such as using ball possession rewards (e.g., 0.1 for getting the ball and -0.1 for losing the ball possession) to encourage tighter defense and using ball passing rewards (e.g., 0.01 for a successful pass and -0.01 for an unsuccessful one) to get a pass-favored policy. The positive result is that with different shaping rewards, the behavior styles learned by the models are quite different. For example, with the ball passing rewards, the number of passes executed and the number of passes completed of the corresponding model in a match is at least two times higher than those of the other models. However, none of the shaping rewards adopted were able to train very strong models. The reasons are two-fold. Firstly, setting an appropriate value for each type of reward is a difficult problem and needs tedious tuning. Secondly, our shaping rewards actually are not directly related to the goal of scoring, which is also pointed out by the <strong>Raw Beast</strong> team.</p> <h3>Resource</h3> <p>During the first several months before the Kaggle competition, we only have 80 CPU cores and it takes ages to get the model to work (around 190k episodes per week). The training scale is similar to that of <a href="https://www.kaggle.com/c/google-football/discussion/201376" target="_blank">liveinparis</a>) and after training for two weeks, the model can reach up to 1400+ on the previous leaderboard before Kaggle. For the Kaggle competition, we use 1000 CPU cores for each experiment (20k+ fps in our framework) and a maximum of 2500 CPU cores in the last two weeks (1000 for non-legal action masks training, 1000 for adding pre-16 steps action surgery training, and 500 for curriculum training and imitation learning). In addition, GPUs are not necessary since we use vectors as inputs.</p> <h3>Other methods we tried:</h3> <ol> <li>ELO-based Opponent Selection: We tried to select the opponent by using the Elo rank during selfplay, which turns out to be similar to the performance of selecting the newest opponent with a certain ratio (70%) in this football environment.</li> <li>Curriculum Learning and Imitation Learning:We also used curriculum learning to help our agents to learn useful skills such as shooting from the middle of the field during freekick (see the gif bellow). Besides, we also cloned the behavior of Wekick and put the model in our opponent pools.<br> <img src="https://pic1.zhimg.com/50/v2-ead94061921f49028929cc30cdf10afc_r.gif" alt=""></li> <li>Action Semantics Network: We tried the action semantics network (ASN), which is proposed in an ICLR2020 paper (<a href="https://openreview.net/forum?id=ryg48p4tPH)" target="_blank">https://openreview.net/forum?id=ryg48p4tPH)</a>. 
The motivation of using ASN is that in this football scenario, different actions of a player may require differnt observations. For example, for applying the SLIDING action, a player should pay more attention to the ball and the player controlling the ball. And for applying the SHOT action, the player should take the ball, the opponent goalkeeper, maybe the opponent players nearby as well into consideration. However, our computation resources are limited and this model is still in the training process.</li> <li>Behavior Tree as Macro Action: We extended the action space size from 19 to 20 by adding a new action 'behavior tree' and trained a new model. This new action means to get the behavior of the player controlled according to a behavior-tree AI. You can simply treat a behavior-tree AI as a rule-based AI. Most importantly, such a behavior-tree AI is a python version of the <strong>eliza_controller</strong> in the internal C++ code of the gfootball_engine, namely the internal bot provided by the environment. The motivation for utilizing the internal bot is that such a bot is good at ball passing and ball controlling. We believe that a stronger model can be obtained if it has the ball passing and ball controlling skills of the internal bot. However, this model is still in the training process under limited resources.</li> <li>Graph Neural Networks: We also tried the graph neural networks (such as GCN and GAT) to extract features. The features of the players are designed as a fully connected graph, where we represent the players' information with the graph's nodes and relations between players with edges. However, this model is still being trained under limited resources.</li> </ol> <h3>Some interesting stuff:</h3> <ol> <li>We found that the goalkeeper may keep dribbling from its own goal to the opponent's goal in one of the early models we trained, LOL…<br> <img src="https://pic4.zhimg.com/50/v2-ecf0fe0cde3aaf27035fa4d31dd42373_r.gif" alt=""></li> <li>The game version changing from 2.6 (left) to 2.8 (right) replaces the guards with the fronts during the kickoff, which makes one of our models' kicking-off strategy (see the gif below) useless since the guards tend to run back while the fronts tend to kick forward.<br> <img src="https://pic4.zhimg.com/50/v2-7626e3f78b67489adc87965e9d13976d_r.jpg" alt=""><br> <img src="https://pic3.zhimg.com/50/v2-63591ea9d7262b08f18b6cbdaa67a66a_r.gif" alt=""></li> </ol> <p>Yes, we both switch the opponent when a certain threshold of win-rate is reached, say 80% or when a certain number of episodes is finished, like every 20 episodes.</p> <p>Generally speaking, adding more essential features will contribute to the performance and this is what surgery does. Unfortunately, the ablation study between multi-head style and single-head style has not been carried out due to the limitation of computation resources and time……😑</p>
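<p>To make the legal/illegal action masking from the Action section concrete, here is a hedged sketch (the action indices assume gfootball's default 19-action set; this is an illustration, not the team's code):</p> <pre><code>import numpy as np

# Assumed indices in gfootball's default 19-action set:
# 9 long_pass, 10 high_pass, 11 short_pass, 12 shot, 16 sliding, 17 dribble, 18 release_dribble
BALL_HANDLING = [9, 10, 11, 12, 17, 18]
SLIDING = [16]

def legal_action_mask(ball_owned_team, n_actions=19):
    mask = np.ones(n_actions, dtype=np.float32)
    if ball_owned_team == 0:   # we own the ball: only sliding is illegal
        mask[SLIDING] = 0.0
    else:                      # opponent owns it (1) or nobody does (-1)
        mask[BALL_HANDLING] = 0.0
        if ball_owned_team == -1:
            mask[SLIDING] = 0.0
    return mask

def masked_policy(logits, mask):
    logits = np.where(mask &gt; 0, logits, -1e9)  # suppress illegal actions
    p = np.exp(logits - logits.max())
    return p / p.sum()
</code></pre>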
Google Research Football with Manchester City F.C.
TamakEri: 5th place solution ⚽️
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research Football with Manchester City F.C. <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for asking!</p> <p>1) We simply added a KL loss between the policy of a past RL model and the policy of the model being trained when training the new model.<br> Without this, we needed to train the new model from scratch, which was very slow.</p> <p>2) It can’t completely ensure that, but an agent with a zero-temperature (greedy) softmax policy and no action skipping is usually a bit stronger.</p> <blockquote> <p>TamakEri (“球蹴”) means “play with ball” in Japanese.</p> </blockquote> <p>Congratulations to the winners! 🎉<br> Thanks to all participants, the Kaggle team, Manchester City and Google Research for the exciting games! ⚽️</p> <p>Our team participated in this competition with about three weeks left before the deadline. We worked towards winning by using reinforcement learning (RL). Fortunately, we got 5th place and we are happy to share our solution and <a href="https://github.com/YuriCat/TamakEriFever" target="_blank">code</a> with the Kaggle community.</p> <p>Special thanks to my teammate <a href="https://www.kaggle.com/yuricat" target="_blank">@yuricat</a> !!</p> <h1>Overview</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F263142%2F9e44f206976c657267718188cd130bfb%2Foverview.png?generation=1608013227102716&amp;alt=media" alt=""></p> <p>We mainly trained our agents with (deep) reinforcement learning on a learner-actor architecture like <a href="https://arxiv.org/abs/1802.01561" target="_blank">IMPALA</a>. Our neural network model was trained through self-play combined with single-play games against various opponent agents (rule-based bots, built-in AI, etc.).</p> <p>In the early steps of learning, we simultaneously used supervised learning (SL) with the top players' replays to help with the start of RL. The effect of supervised learning decayed as the training progressed, and the model ended up being trained only with reinforcement learning.</p> <p>Our model consists of multi-head self attention, CNN, GRU, and MLP networks. The detailed model and input features are described later.</p> <p>Our agents were mainly evaluated against a hard-level built-in AI (i.e. team difficulty = 1.0) that is implemented in the football game engine. On the last day of the competition, the win-rate of the latest model reached ~90% against the built-in AI and was still getting higher!<br> (However, we found out just before the deadline that we couldn't submit the next model while scoring the previous model, so we couldn’t submit the latest (ensemble) model 😭😭😭 . We will make use of this experience at the next BIG match!! 
⚽️)</p> <p><a href="https://github.com/YuriCat/TamakEriFever" target="_blank">Our implementation script</a> is based on the OSS RL library <a href="https://github.com/DeNA/HandyRL" target="_blank">HandyRL</a>.</p> <h1>Algorithm and Training</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F263142%2F5ebb310566f08e2fee190c4fba6c1e24%2F2020-12-13%2013.15.36.png?generation=1608013364797563&amp;alt=media" alt=""></p> <h2>Main algorithm</h2> <ul> <li>Off-policy distributed reinforcement learning</li> <li>TD(λ) + importance sampling (clipped variant)</li> </ul> <p>We used off-policy distributed reinforcement learning to efficiently update the agent’s policy from the results of a large number of games. The learning algorithm is based on a policy gradient method. The model learns a policy, a value, and a return. Our value head estimates the final game outcome, and our return head estimates the discounted score reward. Our policy then maximizes both of them. In other words, the policy learns to maximize the expectations of scoring the next goal and of winning the game. In our training, when one of the teams scores, the calculation of the return is reset to break the episode down into the segments before and after each goal.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F263142%2F1d347faa39aa3b6cc75ceae55e65f4c1%2F2020-12-11%2017.08.35.png?generation=1608013435775391&amp;alt=media" alt=""></p> <h2>Losses</h2> <h4>RL loss</h4> <ul> <li>Policy loss<ul> <li>importance sampling (1-step Retrace) * advantage * log(policy)</li> <li>advantage = (value advantage + return advantage) / 2</li> <li>value advantage = UPGO(λ)</li> <li>return advantage = TD(λ)</li></ul></li> <li>Value loss<ul> <li>MSE</li> <li>target = λ-value (TD(λ))</li></ul></li> <li>Return loss<ul> <li>Huber loss (SmoothL1Loss)</li> <li>target = λ-return (TD(λ))</li></ul></li> <li>Entropy loss</li> </ul> <h4>SL loss</h4> <ul> <li>Policy loss <ul> <li>NLL (Negative Log-Likelihood) loss</li> <li>target = replays’ action</li></ul></li> <li>Value loss<ul> <li>MSE</li> <li>target = outcome</li></ul></li> </ul> <p>The advantage, which is used in the RL policy loss, is an average of the value and return advantages. The value advantage is calculated based on UPGO(λ), which combines the upgoing policy update (UPGO) with the λ-value. The return advantage is calculated based on the λ-return. The policy loss is calibrated by using a clipped importance sampling weight between the behavior policy and the target policy (1-step Retrace). The value target is computed by applying tanh to the score difference, to take the margin of the win/loss into account. That means when the score is 4-1, the outcomes of the two players are tanh(4-1)=0.995 and tanh(1-4)=-0.995. When the score is 2-1, the outcomes are 0.76 and -0.76. At the beginning of the competition, we trained the model only with the outcome, so the above idea was used to encourage the agent to go for more goals. Later, towards the end of the competition, we introduced the score as an immediate reward. Now, we expect that the naive outcome -1 (lose), 0 (tie), 1 (win) with immediate score rewards would also work fine.</p> <p>The score reward (+1/-1) is used to calculate the return target with a discount factor. Note that we also tried a checkpoint reward similar to the one described in <a href="https://arxiv.org/pdf/1907.11180.pdf" target="_blank">the original paper</a>.</p> <p>The percentage of supervised data in the experience buffer started at 10%.</p>
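<p>As an illustration of this mixed buffer (a sketch under our own assumptions - the class and method names below are hypothetical and not HandyRL's API), one way to keep roughly 10% supervised replay data in each batch and to drop it later is:</p> <pre><code>import random

class MixedBuffer:
    """Holds self-play (RL) episodes and top-player replay (SL) episodes."""

    def __init__(self, sl_fraction=0.10):
        self.rl_episodes = []
        self.sl_episodes = []
        self.sl_fraction = sl_fraction  # manually lowered towards 0.0 later

    def add(self, episode, from_replay=False):
        (self.sl_episodes if from_replay else self.rl_episodes).append(episode)

    def sample(self, batch_size):
        n_sl = min(int(round(batch_size * self.sl_fraction)), len(self.sl_episodes))
        batch = random.sample(self.sl_episodes, n_sl)
        batch += random.sample(self.rl_episodes, batch_size - n_sl)
        return batch

    def drop_supervised_data(self):
        """Call once the SL action accuracy saturates: pure RL from then on."""
        self.sl_episodes.clear()
        self.sl_fraction = 0.0
</code></pre>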
<p>We manually reduced the ratio of supervised learning during training by removing the supervised data from the buffer. When the accuracy of the predicted action in supervised learning saturated, the ratio was set to 0%. From that point on, the model was trained only with reinforcement learning.</p> <h2>Training settings (see <a href="https://github.com/YuriCat/TamakEriFever/blob/master/config.yaml" target="_blank">the code</a> for details)</h2> <ul> <li>learning rate with decay (default=3e-8 x batch size)</li> <li>optimizer: Adam</li> <li>batch size: 156 ~ 400</li> <li>buffer size: 3000 (min) - 30000 (max)</li> <li>action repetitions: 4 (if an agent chooses the action “→” at time t, “→” is repeated for the four time steps t, t+1, t+2, t+3)</li> <li>SL loss coefficient: 0.01</li> </ul> <h2>Computational resources</h2> <ul> <li>small experiment<ul> <li>worker: 96-core CPU</li> <li>learner/trainer: 24-core CPU + 1 GPU (NVIDIA T4) </li></ul></li> <li>large experiment<ul> <li>worker: 96-core CPU x8</li> <li>learner/trainer: 96-core CPU + 4 GPU (NVIDIA T4) </li></ul></li> </ul> <p>We used the small setting per experiment in the early phase. In the last three days, we leveraged large amounts of resources in parallel to accelerate multiple experiments (~4000 CPUs and ~20 GPUs in total). The final agent played a total of ~700k games during training.</p> <h1>Features and Model</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F263142%2Fc9f3326262984ded006e46effe668f0e%2Ffeatures_model2.png?generation=1608013747792952&amp;alt=media" alt=""></p> <p>Our model consists of several input features and modules.</p> <ul> <li>Attention between players (encoder, multi-head self attention, control module)<ul> <li>The encoder module embeds the indices of all players and the game mode, and then encodes the embeddings and some features into a fixed tensor</li> <li>The tensor is fed to the multi-head self attention modules to get features that consider the attention between all players</li> <li>The control module mainly extracts (decodes) the feature of the active player using an encoder-decoder-style attention mechanism over the output of the previous modules</li> <li>We also tried an RNN as the control module to manage a long-horizon memory, but it didn't improve the performance much</li></ul></li> <li>Action history (GRU)<ul> <li>The latest action history (sequence length = 8) is fed to a GRU to acquire the context of actions</li> <li>The GRU outputs the latent features of the sequence</li></ul></li> <li>Left vs right player tensor (CNN)<ul> <li>This CNN is expected to learn the relationships between left- and right-side players using 1x1 convolutions and pooling (not for spatial structure)</li> <li>This approach is based on the <a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119400" target="_blank">1st place solution</a> of the NFL Big Data Bowl competition</li></ul></li> <li>Head modules (fully-connected networks)<ul> <li>These concatenate the features of the previous stages and feed them into the policy, value, and return networks</li></ul></li> </ul> <h1>Agents</h1> <p>Our neural network was trained through self-play combined with single-play games against various opponent agents. To keep the winning percentage around 50%, the sampling weights of the opponent agents were dynamically changed depending on the training progress, as sketched below.</p>
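<p>A minimal sketch of such dynamic opponent weighting (our own interpretation - the write-up does not give the exact scheme) keeps a running win rate per opponent and samples harder opponents more often:</p> <pre><code>import random

class OpponentPool:
    """Sample opponents so the learner's overall win rate stays near 50%."""

    def __init__(self, opponents):
        # Exponential moving average of the win rate against each opponent.
        self.win_rate = {name: 0.5 for name in opponents}

    def record_result(self, name, won, alpha=0.01):
        self.win_rate[name] += alpha * (float(won) - self.win_rate[name])

    def sample(self):
        # Opponents we already beat often get small weights, hard ones get larger weights.
        names = list(self.win_rate.keys())
        weights = [max(1.0 - self.win_rate[n], 0.05) for n in names]
        return random.choices(names, weights=weights, k=1)[0]

pool = OpponentPool(["self_play", "builtin_hard", "rule_based_bot"])
</code></pre>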
<p>The ratio of strong opponents increases as the training agent becomes stronger.</p> <ul> <li>Self-play agent<ul> <li>The latest model is always used for self-play</li> <li>Training started with 10% self-play and finally ended up at 80%</li></ul></li> <li>Opponent agents when not self-play<ul> <li>Simple agents<ul> <li>Idle (do nothing), random, run right</li></ul></li> <li>Rule-based agents (Great kernel agents. Thanks to the authors!)<ul> <li><a href="https://www.kaggle.com/yegorbiryukov/gfootball-with-memory-patterns" target="_blank">https://www.kaggle.com/yegorbiryukov/gfootball-with-memory-patterns</a></li> <li><a href="https://www.kaggle.com/eugenkeil/simple-baseline-bot" target="_blank">https://www.kaggle.com/eugenkeil/simple-baseline-bot</a></li> <li><a href="https://www.kaggle.com/mlconsult/1149-ish-bot-rl-approximation" target="_blank">https://www.kaggle.com/mlconsult/1149-ish-bot-rl-approximation</a></li> <li><a href="https://www.kaggle.com/mlconsult/best-open-rules-bot-score-1020-7" target="_blank">https://www.kaggle.com/mlconsult/best-open-rules-bot-score-1020-7</a></li> <li><a href="https://www.kaggle.com/shotaro/optimizing-tunable-bot-with-optuna" target="_blank">https://www.kaggle.com/shotaro/optimizing-tunable-bot-with-optuna</a></li></ul></li> <li>Built-in AI<ul> <li>Hard-level AI (right_team_difficulty = 1)</li></ul></li> <li>Past trained agents<ul> <li>Agents created by multiple experiments</li></ul></li> <li>Mixed agents<ul> <li>e.g., built-in AI + RL agent, built-in AI + rule-based agent</li></ul></li></ul></li> </ul> <h3>Ensemble agent</h3> <p>By the end of the competition, multiple agents had been created. To further improve the accuracy of the policy, we tried an ensemble agent. Possible ensemble methods include, for example, (weighted) voting across agents and (weighted) policy averaging. We adopted weighted policy averaging for the ensemble agent.</p> <pre><code>policy(agentA) = [0.1, 0.8, 0.1]
policy(agentB) = [0.2, 0.5, 0.3]
policy(agentC) = [0.3, 0.6, 0.1]
weight = {A: 3, B: 1, C: 2}
ensemble policy = (policy(agentA) * 3 + policy(agentB) * 1 + policy(agentC) * 2) / 6
                = [0.183, 0.683, 0.133]
</code></pre> <p>The ensemble agent achieved the highest win-rate against the built-in AI in our local environment.</p> <h1>Evaluation</h1> <h2>vs. various agents (1000-step game)</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F263142%2F431834d169b7ea12429fdb28579cb707%2FFigure_1-15.png?generation=1608014159413305&amp;alt=media" alt=""></p> <p>Some workers were assigned to evaluation to measure the win rate during training. Some of the opponent agents are rule-based agents and the others are trained agents. The figure above shows the win-rate curves of one model, evaluated each time the model was saved. This graph shows our model was still getting stronger even at the deadline.</p> <h2>vs. built-in AI (full 3000-step game)</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F263142%2Ffe90aaf906feea39281bf57c8c47214a%2Fvs_builtin.png?generation=1608014208575876&amp;alt=media" alt=""></p> <p>The win rate against the built-in AI was used as the main metric to evaluate the performance of the agent. We evaluated 1,000-2,000 games to improve the reliability of the win-rate for the submission agents. We found that the leaderboard rating is highly correlated with the win-rate against the built-in AI. 
The higher the win-rate against the built-in AI is, the higher the rating of the submission agent is. The win-rate of our final model reached ~90% against the built-in AI.</p> <h1>Additional features</h1> <h2>New action set</h2> <p>In football, the direction of passing and shooting is important. We analyzed some matches and found that the kick and pass actions appear to be affected by the sticky direction. So, we created a new action set that combines the two kinds of actions (Pass/Shot + Direction). With this extension, the quality of passes and shots was enhanced qualitatively, for example crossing from wide areas, shot direction, free kicks, and so on.</p> <p>The new action set looks like the following.</p> <pre><code>20 = LongPass + Left
21 = LongPass + TopLeft
...
28 = HighPass + Left
...
36 = ShortPass + Left
...
44 = Shot + Left
…
51 = Shot + BottomLeft
</code></pre> <h2>Skipping initial steps</h2> <p>In a certain percentage of games during training, the initial frames were skipped by using the rule-based agents (e.g. 30% skipped games and 70% full games). This is expected to increase the variety of states for exploration and the robustness of the agent. The number of skipped steps was determined randomly at the start of a match. After skipping, training started from the resulting mid-game state.</p> <h2>Model distillation</h2> <p>We came up with a lot of ideas, but we didn't have much time left to create a new model from scratch. RL from scratch would take too much time! Therefore, we used model distillation (a.k.a. knowledge distillation) to transfer the skills from a past model to the new one. At the start of training, the output of a pre-trained model was used as a teacher alongside reinforcement learning. As a result, the agent was able to beat the rule-based and other agents in a few hours. Note that it’s only applied in the early stages of training.</p> <h2>Deterministic opponent in self-play</h2> <p>We also used a deterministic policy for the opponents in self-play, without frame skip (action repetition). As a result, the trained agent could always play against opponents that are stronger than itself, and this could be balanced with weaker rule-based agents and past models to achieve a 50% winning rate.</p> <h1>Code</h1> <p><a href="https://github.com/YuriCat/TamakEriFever" target="_blank">https://github.com/YuriCat/TamakEriFever</a></p> <p>The implementation code is based on <a href="https://github.com/DeNA/HandyRL" target="_blank">HandyRL</a>, which is an open-source library for distributed reinforcement learning. The library can be applied to many reinforcement learning problems. Because of its good parallelism, neural network training and episode data generation scale easily. The simple implementation makes it easy to incorporate your game environments and algorithms into the library. We recommend this RL library for future simulation competitions, if you like it ❤️</p> <hr> <p>P.S. The early agent prefers Marie Curie and Rosalind Franklin to quickly get behind a defender and score a goal. </p> <div> <img src="https://raw.githubusercontent.com/YuriCat/TamakEriFever/master/curie.gif" alt="cuire"> </div> <p>Thank you <a href="https://www.kaggle.com/puzuwe" target="_blank">@puzuwe</a>. Feel free to contact us if you have any questions! 😄</p>
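<p>As a footnote to the "New action set" section above: the write-up does not show how the combined ids are decoded back into engine actions, so the following is only a plausible sketch. It assumes the default 19-action gfootball ordering (ids 1-8 are the eight movement directions starting at Left, ids 9-12 are LongPass/HighPass/ShortPass/Shot) and emits the direction one step before the kick.</p> <pre><code>DIRECTIONS = [1, 2, 3, 4, 5, 6, 7, 8]   # Left, TopLeft, ..., BottomLeft
KICKS = [9, 10, 11, 12]                 # LongPass, HighPass, ShortPass, Shot

def decode_action(action_id):
    """Map an extended action id (0-51) to the engine actions to emit."""
    if action_id &lt; 20:
        return [action_id]              # original action set, passed through
    combo = action_id - 20
    kick = KICKS[combo // 8]
    direction = DIRECTIONS[combo % 8]
    # Set the sticky direction first, then pass/shoot on the next step.
    return [direction, kick]

assert decode_action(21) == [2, 9]      # 21 = LongPass + TopLeft
assert decode_action(44) == [1, 12]     # 44 = Shot + Left
</code></pre>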
Google Smartphone Decimeter Challenge
10 days bronze (63 place solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>tl;dr: New baseline from raw data &gt; Recalculate all pipelines on the best ensemble &gt; Snap SJC to ground truth</strong></p> <p>Hello everyone, I wanted to share the steps I took to get the bronze medal. I only started working on this competition 10 days before the deadline and mostly used work from the community to get my final score, so I'll also leave my special thanks to the ones who shared the knowledge I used.</p> <p><a href="https://www.kaggle.com/jeongyoonlee" target="_blank">@jeongyoonlee</a> for this great baseline<br> <a href="https://www.kaggle.com/jeongyoonlee/google-smartphone-decimeter-eda-keras-tpu" target="_blank">https://www.kaggle.com/jeongyoonlee/google-smartphone-decimeter-eda-keras-tpu</a></p> <p><a href="https://www.kaggle.com/bpetrb" target="_blank">@bpetrb</a> for this amazing notebook<br> <a href="https://www.kaggle.com/bpetrb/adaptive-gauss-phone-mean/output" target="_blank">https://www.kaggle.com/bpetrb/adaptive-gauss-phone-mean/output</a></p> <p><a href="https://www.kaggle.com/taroz1461" target="_blank">@taroz1461</a> for his clustering technique<br> <a href="https://www.kaggle.com/c/google-smartphone-decimeter-challenge/discussion/245160" target="_blank">https://www.kaggle.com/c/google-smartphone-decimeter-challenge/discussion/245160</a></p> <p><a href="https://www.kaggle.com/t88take" target="_blank">@t88take</a> for this great notebook<br> <a href="https://www.kaggle.com/t88take/gsdc-phones-mean-prediction/output" target="_blank">https://www.kaggle.com/t88take/gsdc-phones-mean-prediction/output</a></p> <p><a href="https://www.kaggle.com/tensorchoko" target="_blank">@tensorchoko</a> for her multioutput regressor; I believe she removed the notebook since I can't find it anymore</p> <p><a href="https://www.kaggle.com/somayyehgholami" target="_blank">@somayyehgholami</a> for this amazing ensemble notebook<br> <a href="https://www.kaggle.com/somayyehgholami/gsdc-smart-ensembling" target="_blank">https://www.kaggle.com/somayyehgholami/gsdc-smart-ensembling</a></p> <p>The combined output from those notebooks plus a little original code granted me the medal. This is how it was done:</p> <h1>1 - Get a better baseline</h1> <p>On the first day of my research, when I was trying to understand the data, I could see people saying that the <strong>base_locations_test.csv</strong> file was a good starting point and applying many post-processing techniques to it. So if I could find a better baseline, I could just apply the same public techniques people were using and would probably end up with a better result. I spent some time trying to find someone who had calculated the baseline again, ended up finding a fork of <a href="https://www.kaggle.com/jeongyoonlee" target="_blank">@jeongyoonlee</a>'s code and, after a few tweaks, I ended up with a <strong>slightly better LB score</strong> than the raw file, so in theory that could give me a little edge.</p> <h1>2 - Finding a tweakable pipeline</h1> <p><a href="https://www.kaggle.com/bpetrb" target="_blank">@bpetrb</a> provided us with a pipeline that was strongly tied to some hyperparameters (I actually got a fork of 
it). After a night cycling through a few, I found out that the author did a good job setting the parameters: <strong>I couldn't improve his score</strong> by changing the parameters alone.</p> <h1>3 - Clustering the data and setting hyperparameters</h1> <p>After that failed attempt I found <a href="https://www.kaggle.com/taroz1461" target="_blank">@taroz1461</a>'s comment explaining how he <strong>clustered the datasets</strong>. I used his logic 'as-is' and tried new combinations of hyperparameters for each cluster; after a while I found a <strong>better set of parameters</strong> for the highway cluster, and since that one was one of the biggest, the improvement was substantial.</p> <h1>4 - Finding a good ensembling</h1> <p>At the time, <a href="https://www.kaggle.com/somayyehgholami" target="_blank">@somayyehgholami</a> had a very good ensemble pipeline that was using the same post-processing that I improved before <em>(step 3)</em> as one of its inputs, so I figured that if I already had a better baseline, maybe I could <strong>recalculate the same inputs</strong> that she was using and combine them with the best set of hyperparameters that I had.</p> <h1>5 - Checking the outputs and snapping to grid</h1> <p>During all steps I was constantly checking the outputs on the map. The SJC paths were always very messy and off the road, so, reading the forums, I found that I could implement a <strong>snap to grid</strong> process <em>(something that I saw before in the indoor competition)</em>, and people were also talking about it here, but I couldn't find any public notebook that could do this the way I wanted to, so <strong>I had to code this part</strong>. <br> This is basically the only original code in my solution. Since I was out of time <em>(2 days left)</em>, I decided to implement this <strong>without interpolation and only on SJC</strong>, using the provided ground truth as grid points (they are dense on SJC). It ended up improving my score by a good amount; the table below shows the change it made, so I was comfortable submitting that as my final submission.</p> <p>Now, looking at the private dataset, this little change ended up giving me a much needed <strong>0.1 on the LB</strong>, which is almost the exact difference between my position and 100th place.</p> <table> <thead> <tr> <th>Without snap</th> <th>With snap</th> </tr> </thead> <tbody> <tr> <td><img src="https://64.media.tumblr.com/9076c2d39c15a6484eaaea07d726debe/357c1ccef231f98b-59/s1280x1920/7363d07e30fc4f92e379c9f46f0180dd80b3ddce.png" alt=""></td> <td><img src="https://64.media.tumblr.com/bd202b8827e0e531e75c4b909b37e055/ff22d3ad0772e7db-77/s1280x1920/95b14d336214fa6d1d8a48cd9fcd04bebd94e1a9.png" alt=""></td> </tr> </tbody> </table> <p>So that was my final submission. I tried to interpolate the ground truth points to create more snapping targets, but I just ran out of time. Luckily the public and private LB ended up improving together, so my best private score was indeed my best public one.</p> <p>Thank you everyone for reading until the end. I'm sorry if I didn't give credit to someone whose work was forked before the people I forked from; if so, please comment and I'll gladly extend my thanks to you too!</p>
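<p>For readers who want to try the same idea, here is a minimal sketch of this kind of snap-to-grid step (an illustration, not the exact code used in the solution): build a haversine BallTree over the SJC ground-truth points and replace a prediction with its nearest grid point whenever it lies within a threshold; the 15 m value below is only an example.</p> <pre><code>import numpy as np
from sklearn.neighbors import BallTree

EARTH_RADIUS_M = 6_371_000

def snap_to_grid(pred_latlng, grid_latlng, max_dist_m=15.0):
    """Snap predictions to the nearest grid point when it is close enough.

    pred_latlng, grid_latlng: numpy arrays of shape (n, 2) in degrees.
    """
    tree = BallTree(np.radians(grid_latlng), metric="haversine")
    dist_rad, idx = tree.query(np.radians(pred_latlng), k=1)
    dist_m = dist_rad[:, 0] * EARTH_RADIUS_M
    snapped = pred_latlng.copy()
    close = dist_m &lt;= max_dist_m
    snapped[close] = grid_latlng[idx[close, 0]]
    return snapped
</code></pre>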
Google Smartphone Decimeter Challenge
10th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and the organizers for hosting a nice competition.</p> <h2>Overview</h2> <ul> <li>Predicting the noise, <code>Noise = Ground Truth - Baseline</code>, like denoising in computer vision</li> <li>Using the speed <code>(latDeg(t + dt) - latDeg(t))/dt</code> as input instead of the absolute position, to prevent the model from overfitting on the train dataset.</li> <li>Making 2D image inputs with the Short Time Fourier Transform (STFT), and then using ImageNet-pretrained convolutional neural networks</li> </ul> <p><img src="https://user-images.githubusercontent.com/61892693/128517005-bad904b0-46e0-4af5-ae53-993e85b97de9.png" alt="pipeline"><br> <img src="https://user-images.githubusercontent.com/61892693/128673932-a1fb3b34-df4b-42f4-8485-890f9b35b8d4.png" alt="best_vs_host_baseline"></p> <h2>STFT and Conv Network Part</h2> <ul> <li>Input: Using <a href="https://librosa.org/doc/latest/index.html" target="_blank">librosa</a>, generating the STFT for both the latDeg &amp; lngDeg speeds.<ul> <li>Each phone sequence is split into 256-second sequences, then the STFT is applied with <code>n_fft=256</code>, <code>hop_length=1</code> and <code>win_length=16</code>, resulting in a (256, 127, 2) feature for each degree. The following 2D images are generated from the 1D sequences.</li></ul></li> </ul> <p><img src="https://user-images.githubusercontent.com/61892693/128517031-3e2343f7-7a57-4ec9-bb3e-89c09c6542ba.png" alt="stft_images"></p> <ul> <li><p>Model: Regression and Segmentation</p> <ul> <li><p>Regression: EfficientNet B3, predicts the latDeg &amp; lngDeg noise</p></li> <li><p>Segmentation: Unet++ with an EfficientNet encoder (<a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">segmentation_models.pytorch</a>), predicts the STFT noise</p> <ul> <li><p>segmentation prediction + input STFT -&gt; inverse STFT -&gt; prediction of latDeg &amp; lngDeg speeds</p></li> <li><p>this speed prediction was used for:</p> <ol> <li>Low-speed mask: the points in low-speed areas are replaced with their median.</li> <li>Speed disagreement mask: if the speed from the position prediction and this speed prediction differ a lot, remove such points and interpolate.</li></ol></li> <li><p>prediction examples for the segmentation. <br> <img src="https://user-images.githubusercontent.com/61892693/128625486-cf8abdfe-675f-475a-8c7f-375752cc81c5.png" alt="conv_segmentation_result"><br> <img src="https://user-images.githubusercontent.com/61892693/128625496-a0fd458f-16b3-445b-88d6-0be71378b588.png" alt="conv_segmentation_result_2"></p></li></ul></li></ul></li> </ul> <h2>LightGBM Part</h2> <ul> <li>Input: IMU data excluding the magnetic field features<ul> <li>also excluding y acceleration and z gyro because of the phone mounting conditions</li> <li>adding moving averages as additional features, <code>window_size=5, 15, 45</code></li></ul></li> <li>Predict the latDeg &amp; lngDeg noise</li> </ul> <h2>Public Post Process Part</h2> <p>There are nice and effective post-processing (PP) methods in public notebooks. I used the following notebooks. 
Thanks to the all authors.</p> <ul> <li><a href="https://www.kaggle.com/t88take/gsdc-phones-mean-prediction" target="_blank">phone mean</a></li> <li><a href="https://www.kaggle.com/dehokanta/baseline-post-processing-by-outlier-correction" target="_blank">filtering outlier</a></li> <li><a href="https://www.kaggle.com/emaerthin/demonstration-of-the-kalman-filter" target="_blank">kalman filter</a></li> <li><a href="https://www.kaggle.com/bpetrb/adaptive-gauss-phone-mean" target="_blank">gauss smoothing&amp;phone mean</a></li> </ul> <h2>KNN at downtown Part</h2> <p>similar to <a href="https://www.kaggle.com/robikscube/indoor-navigation-snap-to-grid-post-processing" target="_blank">Snap to Grid</a>, but using both global and local feature. Local re-ranking comes from the host baseline of <a href="https://www.kaggle.com/c/landmark-retrieval-2020" target="_blank">GLR2021</a></p> <ul> <li>Use train ground truth as database</li> <li>Global search: query(latDeg&amp;lngDeg) -&gt; find 10 candidates</li> <li>Local re-ranking: query(latDeg&amp;lngDeg speeds and its moving averages) -&gt; find 3 candidates -&gt; taking mean over candidates</li> </ul> <h2>scores</h2> <ul> <li>Check each idea with late submissions.</li> <li>actually conv position pred part implemented near deadline, before that I used only the segmentation model for STFT image.</li> </ul> <table> <thead> <tr> <th>status</th> <th>Host baseline + Public PP</th> <th>conv position pred</th> <th>gbm</th> <th>speed mask</th> <th>knn global</th> <th>knn local</th> <th>Private Board Score</th> </tr> </thead> <tbody> <tr> <td>my best submission</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>2.61693</td> </tr> <tr> <td>late sub</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>5.423</td> </tr> <tr> <td>late sub</td> <td>✓</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>3.61910</td> </tr> <tr> <td>late sub</td> <td>✓</td> <td>✓</td> <td></td> <td></td> <td></td> <td></td> <td>3.28516</td> </tr> <tr> <td>late sub</td> <td>✓</td> <td>✓</td> <td>✓</td> <td></td> <td></td> <td></td> <td>3.19016</td> </tr> <tr> <td>late sub</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> <td></td> <td></td> <td>2.81074</td> </tr> <tr> <td>late sub</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> <td></td> <td>2.66377</td> </tr> </tbody> </table> <h2>code</h2> <ul> <li>My code is available <a href="https://github.com/Fkaneko/kaggle_Google_Smartphone_Decimeter_Challenge" target="_blank">here</a></li> <li>My all pretrained weights have been uploaded at <a href="https://www.kaggle.com/sai11fkaneko/google-smartphone-decimeter-challenge-weights" target="_blank">kaggle dataset</a></li> <li>You can reproduce my result with the <a href="https://github.com/Fkaneko/kaggle_Google_Smartphone_Decimeter_Challenge#how-to-run" target="_blank">instruction</a></li> </ul>
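<p>To illustrate the STFT input described in the overview, here is a rough sketch of how such a 2D feature can be built from a 1D degree sequence with librosa. The parameter values follow the write-up, but the chunking and exact output shapes are handled differently in the actual code, so treat this only as an illustration.</p> <pre><code>import numpy as np
import librosa

def stft_image(deg, n_fft=256, hop_length=1, win_length=16):
    """Turn a 1D latDeg or lngDeg chunk into real/imag STFT channels."""
    speed = np.diff(deg, prepend=deg[0])          # per-second degree "speed"
    spec = librosa.stft(speed.astype(np.float32), n_fft=n_fft,
                        hop_length=hop_length, win_length=win_length)
    return np.stack([spec.real, spec.imag], axis=-1)   # (freq, time, 2)

# One training example stacks the lat and lng images along the channel axis:
# image = np.concatenate([stft_image(lat_chunk), stft_image(lng_chunk)], axis=-1)
</code></pre>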
Google Smartphone Decimeter Challenge
18th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners, and thanks so much for hosting such an interesting competition! <br> This competition was very tough for me; I'll share our solution.</p> <h2>Overview</h2> <p><img src="https://user-images.githubusercontent.com/43205304/128286772-0406d578-e9e0-4ea9-96df-3e0d30f2433c.png" alt="GSDC-solution"></p> <h2>1. Reproduce Baseline</h2> <p>In this competition, a baseline using WLS was shared (result file only). In order to reproduce it, we recomputed the baseline using the derived files. It was difficult for us to reproduce exactly the same results, but we were able to create baseline-equivalent scores and decided to blend them with the original baseline file.</p> <p>We referred to this notebook.<br> <a href="https://www.kaggle.com/hyperc/gsdc-reproducing-baseline-wls-on-one-measurement" target="_blank">https://www.kaggle.com/hyperc/gsdc-reproducing-baseline-wls-on-one-measurement</a></p> <p>Here are some of the things we did to create the baseline:</p> <ul> <li>Use satellite (GPS only) positions calculated using OSR data (but the effect was small)</li> <li>Use only GPS/GALILEO/QZS data (the other satellites were not good)</li> <li>The WLS weights are hand-tuned (x**2 + 3x)</li> </ul> <p>After creating two new baseline locations using the above method, we blended them with the original baseline by taking a weighted average.</p> <h2>2. Area Classification</h2> <p>We automatically classified the collections into three categories as in the public version, and also defined a difficult area.<br> The difficult area was defined as follows.</p> <ol> <li>Apply pre-processing to the train data</li> <li>Extract the points whose error is more than 5 m</li> <li>Convert the points to polygons by applying a buffer to each point</li> <li>Define these polygon areas as the difficult area</li> </ol> <p>The defined difficult areas look like this.<br> <img src="https://user-images.githubusercontent.com/43205304/128278785-0cb03ad5-45a9-4ed9-9203-2b2131cb7cef.png" alt="screenshot 2021-08-05 9 36 09"></p> <p>This was used for snap to grid.</p> <h2>3. 
Pre/Post Processing</h2> <p>We used some shared notebooks. Thank you to the contributors.</p> <ul> <li><p>Outlier Correction<br> <a href="https://www.kaggle.com/dehokanta/baseline-post-processing-by-outlier-correction" target="_blank">https://www.kaggle.com/dehokanta/baseline-post-processing-by-outlier-correction</a></p></li> <li><p>Kalman Smoothing<br> <a href="https://www.kaggle.com/emaerthin/demonstration-of-the-kalman-filter" target="_blank">https://www.kaggle.com/emaerthin/demonstration-of-the-kalman-filter</a><br> We apply linear interpolation to keep the epoch width constant</p></li> <li><p>Phone Mean<br> <a href="https://www.kaggle.com/t88take/gsdc-phones-mean-prediction" target="_blank">https://www.kaggle.com/t88take/gsdc-phones-mean-prediction</a><br> <a href="https://www.kaggle.com/bpetrb/adaptive-gauss-phone-mean" target="_blank">https://www.kaggle.com/bpetrb/adaptive-gauss-phone-mean</a></p></li> <li><p>Remove Phone<br> <a href="https://www.kaggle.com/columbia2131/device-eda-interpolate-by-removing-device-en-ja" target="_blank">https://www.kaggle.com/columbia2131/device-eda-interpolate-by-removing-device-en-ja</a></p></li> <li><p>Snap to Grid<br> We apply snap to grid only in the downtown area and the difficult area.</p></li> <li><p>Stop Mean<br> I tried to take the average of the stopping points.<br> The procedure is as follows.</p></li> </ul> <p>(1) Predict the car speed with LightGBM</p> <ul> <li>target: speedMps in ground_truth.csv</li> <li>features<ul> <li>lag features with shift range -30 ~ 30 (location, time, speed etc…)</li> <li>aggregate features</li></ul></li> </ul> <p>(2) If the predicted speed is less than 0.95 m/s for more than 2 consecutive seconds, make a group.<br> (3) Take the average for each group</p> <h2>4. Position estimation by IMU data</h2> <p>Relative position correction was performed using IMU data, for the downtown area only.<br> I referred to this notebook. Thanks.<br> <a href="https://www.kaggle.com/alvinai9603/predict-next-point-with-the-imu-data" target="_blank">https://www.kaggle.com/alvinai9603/predict-next-point-with-the-imu-data</a></p> <ul> <li>model: LightGBM</li> <li>GroupKFold(group="phone")</li> <li>features<ul> <li>lag features (shift range -30 ~ 30)</li> <li>aggregate features (mean, std, max, min, median, skew, kurtosis)</li></ul></li> </ul>
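<p>A small sketch of the "difficult area" construction from section 2 (our illustration - the error computation and buffer radius are assumptions): buffer the high-error training points and union them into polygons with shapely.</p> <pre><code>import numpy as np
from shapely.geometry import Point
from shapely.ops import unary_union

def build_difficult_area(pred_latlng, gt_latlng, err_thresh_m=5.0, buffer_deg=3e-4):
    """Union of small buffers around train points whose error exceeds ~5 m."""
    # Crude local metric conversion (adequate over such short distances).
    d_lat_m = (pred_latlng[:, 0] - gt_latlng[:, 0]) * 111_000
    d_lng_m = (pred_latlng[:, 1] - gt_latlng[:, 1]) * 111_000 * np.cos(np.radians(gt_latlng[:, 0]))
    err_m = np.hypot(d_lat_m, d_lng_m)
    bad_points = gt_latlng[err_m &gt; err_thresh_m]
    circles = [Point(lng, lat).buffer(buffer_deg) for lat, lng in bad_points]
    return unary_union(circles)

# Usage: snap a test point only if difficult_area.contains(Point(lng, lat)).
</code></pre>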
Google Smartphone Decimeter Challenge
19 place solution (shake TOKYO)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Team solution</h2> <p>First of all, thank you very much for hosting this competition. Here we describe team TOKYO's solution. </p> <p>Unfortunately, our solution went through a shake-down (Public 8th, Private 19th). We guess one of the reasons is that we targeted the downtown &amp; tree areas too much, while highway areas constitute the major part of the private dataset.</p> <p>Our solution consists of several algorithms, some of them introduced by other competitors and made public, the others developed by ourselves.</p> <p><img src="https://i.imgur.com/pAhVzTP.png" alt="Imgur"></p> <h4>1. GNSS-based localization (<a href="https://www.kaggle.com/minomonter/gnss-only-estimation?scriptVersionId=70161014" target="_blank">link</a>)</h4> <p>The first module is a position estimation based on the GNSS data provided in *_derived.csv. With some simple tricks as follows, we achieved LB 6.665 in public.</p> <ul> <li>Using a Cauchy loss in the least-squares optimization</li> <li>Introducing an elevation mask</li> </ul> <h4>2. Delta prediction (<a href="https://www.kaggle.com/dehokanta/shake-tokyo-delta-predicition?scriptVersionId=70516831" target="_blank">link</a>)</h4> <p>Here we trained a model which estimates the difference between the estimated position \((lat^i_{est}, lng^i_{est})\) and the corresponding ground truth position \((lat^i_{gt}, lng^i_{gt})\), based on the provided train data. We introduced several features and used LightGBM for the estimation.</p> <h4>3. Outlier rejection &amp; Kalman smoothing</h4> <p>Based on the idea introduced <a href="https://www.kaggle.com/dehokanta/baseline-post-processing-by-outlier-correction" target="_blank">here</a> by <a href="https://www.kaggle.com/dehokanta" target="_blank">@dehokanta</a>, we introduced a rule-based outlier detection algorithm. Through several experiments, we found out that the estimation accuracy improved significantly by rejecting the following points:</p> <ul> <li>A point where the estimated velocity (= distance between the neighboring points) is higher than \(45 [\rm{m/s}]\)</li> <li>A point where the estimated acceleration (= difference of velocity between the two neighboring points) is higher than \(1.8 \times 9.81 [\rm{m/s^2}]\)<br> After this, we applied the Kalman smoothing introduced <a href="https://www.kaggle.com/emaerthin/demonstration-of-the-kalman-filter" target="_blank">here</a> by <a href="https://www.kaggle.com/emaerthin" target="_blank">@emaerthin</a>.</li> </ul> <h4>4. Phones mean</h4> <p>We used several types of phone-mean methods. Extending the idea introduced <a href="https://www.kaggle.com/t88take/gsdc-phones-mean-prediction" target="_blank">here</a>, we used phone means such as:</p> <ul> <li>Align all the points from different smartphones onto one line</li> <li>Align all the points from different smartphones by introducing a weight based on how "zigzag" the estimated trajectories are</li> </ul> <h4>5. 
Stop detection &amp; filtering</h4> <p>As discussed in several discussions &amp; notebooks, the noise during the vehicle's stops was apparently one of the major noise sources in the baseline estimation.<br> We developed a stop-detection algorithm using inertial measurements in a rule-based fashion.</p> <p>For some collections that do not include sensor data, we estimated the stop state only from the position data.</p> <h4>6. Snap-to-grid</h4> <p>We used snap-to-grid for the tree areas. For the downtown area, we applied a different snap-to-grid method described below.</p> <h4>7. Dynamic Time Warping based snap-to-grid (<a href="https://www.kaggle.com/minomonter/dynamic-time-warping-snap-to-grid?scriptVersionId=70166586" target="_blank">link</a>)</h4> <p>To overcome some severe problems of simple snap-to-grid, we combined the algorithm with Dynamic Time Warping (DTW). This algorithm worked remarkably well, especially for the downtown area.</p> <h4>8. Remove Samsung</h4> <p>We used a method introduced <a href="https://www.kaggle.com/columbia2131/device-eda-interpolate-by-removing-device-en-ja" target="_blank">here</a> by another team.</p> <hr> <h3>What we tried but didn't use in the final pipeline</h3> <ul> <li>RTK-based localization using RTKLIB</li> <li>delta prediction by MLP</li> <li>pseudo labeling for delta prediction</li> <li>some other filtering methods, e.g. SGFilter</li> <li>Factor graph optimization based on <a href="https://arxiv.org/pdf/2004.10572.pdf" target="_blank">this paper</a></li> <li>outlier prediction by ML</li> <li>stop detection by density</li> </ul> <h3>Our final submission is visualized here (<a href="https://www.kaggle.com/dehokanta/shake-tokyo-visualization-of-final-submissions" target="_blank">link</a>).</h3>
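<p>For illustration, a minimal version of the rule-based outlier rejection in step 3 could look like the following (a sketch, not the team's code; distances use a crude local metre conversion and rejected points are re-filled by linear interpolation):</p> <pre><code>import numpy as np
import pandas as pd

def reject_outliers(df, v_max=45.0, a_max=1.8 * 9.81):
    """Drop points whose implied speed or acceleration is physically implausible.

    df: one phone's track with millisSinceGpsEpoch, latDeg, lngDeg, sorted by time.
    """
    t = df["millisSinceGpsEpoch"].to_numpy() / 1000.0
    lat = df["latDeg"].to_numpy()
    lng = df["lngDeg"].to_numpy()
    dx = np.diff(lng) * 111_000 * np.cos(np.radians(lat[:-1]))
    dy = np.diff(lat) * 111_000
    v = np.hypot(dx, dy) / np.diff(t)            # m/s between consecutive epochs
    a = np.diff(v) / np.diff(t)[1:]              # rough m/s^2
    bad = np.zeros(len(df), dtype=bool)
    bad[1:] |= v &gt; v_max
    bad[1:-1] |= np.abs(a) &gt; a_max
    out = df.copy()
    out.loc[bad, ["latDeg", "lngDeg"]] = np.nan
    out[["latDeg", "lngDeg"]] = out[["latDeg", "lngDeg"]].interpolate(limit_direction="both")
    return out
</code></pre>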
Google Smartphone Decimeter Challenge
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry to mislead you, but I usually use GNSS raw data for my research. However, this was the first time for me to handle raw data from <strong>smartphones</strong>. Thank you!</p> <p>First, I would like to thank everyone involved in this competition. It has been a very enjoyable two months. </p> <p>Although I am a researcher in the GNSS field, this was the first time I worked with raw GNSS data from smartphones. Comparing the smartphones' raw GNSS data with the commercial GNSS receivers used in drones and mobile robots, I had the following impressions.</p> <ul> <li>The pseudorange is <strong>very noisy</strong>, and the quality of the GNSS observations is not good</li> <li>There are a lot of <strong>missing data</strong>, <strong>duplications</strong>, <strong>time jumps</strong>, etc., and it is very complicated</li> </ul> <p>I realized that I am usually exposed to very clean GNSS data… This competition was very, very tough because of the huge number of courses and the variety of smartphones (and the long variable names…). To be honest, I don't want to look at GNSS data on smartphones anymore!😐</p> <h1>Key Points for Solution</h1> <ul> <li>Global optimization of position and velocity by the <strong>Factor Graph Optimization</strong> technique</li> <li>Velocity constraint by the <strong>accumulated delta range (ADR)</strong></li> <li>Absolute position constraint by the <strong>differential pseudorange relative to a base station</strong></li> <li>No machine learning (or rather, it was not possible due to time limitations)</li> </ul> <h1>Input Data</h1> <ul> <li>Phone_Gnsslog.txt</li> <li>RINEX files of a GNSS<a href="https://www.kaggle.com/c/google-smartphone-decimeter-challenge/discussion/238579" target="_blank"> base station</a> (I used Verizon base station data from <a href="https://webscope.sandbox.yahoo.com/catalog.php?datatype=s&amp;did=88" target="_blank">here</a>)</li> <li>Ground Truth (downtown area only)</li> </ul> <p>In the end, I did not use Phone_derived.csv. There were some missing data in the Phone_derived.csv files. I calculated the satellite positions and velocities directly from the RINEX navigation files. The baseline positions and IMU data were also not used.</p> <h1>Factor Graph Optimization</h1> <blockquote> <p>Factor graphs are a class of graphical models in which there are variables and factors. The variables represent unknown quantities in the problem, and the factors represent functions on subsets of the variables. Edges in the factor graph are always between factors and variables, and indicate that a particular factor depends on a particular variable.(from <a href="https://gtsam.org/2020/06/01/factor-graphs.html" target="_blank">here</a>)</p> </blockquote> <p>The core of my approach was to use a <strong>global optimization method</strong> based on a <strong>Factor Graph</strong>. Several optimization methods have been used <a href="https://www.kaggle.com/c/google-smartphone-decimeter-challenge/discussion/261959" target="_blank">here</a> and <a href="https://www.kaggle.com/c/google-smartphone-decimeter-challenge/discussion/262074" target="_blank">here</a>, with good results. 
Factor graph optimization is a method that can apply a variety of complex nonlinear constraints and simultaneously optimize all the state variables (the entire driving trajectory). There are many outliers in the various constraints (edges in the graph), but a robust optimization technique eliminates the need to manually set outlier threshold parameters. For more information about factor graphs, see <a href="https://gtsam.org/tutorials/intro.html" target="_blank">this page</a>.</p> <h1>Factor Graph Structure</h1> <p>I tried many different graph structures and finally performed the graph optimization using the following factor graph.<br> <img src="https://user-images.githubusercontent.com/7933764/128525861-3b529b21-e5bb-49ca-8ca6-60229b4ade32.png" alt="fgo"></p> <p>The graph node \(X_i\) represents the state variable at each moment, and the edge connecting two nodes represents the error function \(e(\cdot)\); each edge corresponds to a single observation \(Z_i\). In the graph, the error function \(e(\cdot)\) represents a probabilistic constraint applied to the state at the specified time step. The optimization over the factor graph can be written as follows.<br> <img src="https://user-images.githubusercontent.com/7933764/128522724-ad708687-61ec-4416-b622-c972b451a8a0.png"></p> <p>Here, \(\Omega_i\) is the information matrix (the inverse of the covariance matrix), which determines the accuracy of the observation \(Z_i\). We defined the following as the nodes (estimated states) of the graph.<br> <img src="https://user-images.githubusercontent.com/7933764/128524196-b6843c78-dfe3-4850-88ed-9fada7becea2.png"><br> <img src="https://user-images.githubusercontent.com/7933764/128524319-f1c6a8fe-0f7e-4417-b702-a9cf815aa466.png"></p> <p>where \(r\) and \(\dot{r}\) are the 3D position/velocity in earth-centered earth-fixed (ECEF) coordinates. \(t\) and \(\dot{t}\) represent the receiver clock bias and drift for each GNSS signal.</p> <p>Here, \(s\) in the graph is a <a href="https://nikosuenderhauf.github.io/assets/papers/IROS12-switchableConstraints.pdf" target="_blank">switchable constraint</a>, which is a state variable that takes a value between 0 and 1. The value of each switchable constraint is estimated simultaneously by the optimization. On an outlier edge, the switchable constraint is automatically driven towards 0 and acts like a weight on the observed value. The optimization problem can be described as follows.</p> <p><img src="https://user-images.githubusercontent.com/7933764/128525345-2cb74f7a-5a74-4fa4-8c30-1d73a0828ecf.png"></p> <p>where the last term in the above equation is the anchor factor of the switches, which prevents the switch states from all going to zero. </p> <h3>Pseudorange Factor</h3> <p>In the pseudorange factor, the observation is the difference between the pseudorange and the reference station's pseudorange, which removes all the bias error components in the GNSS measurements (excluding multipath errors). Theoretically, the residuals of the single-differenced pseudorange have a Gaussian distribution with zero mean if the receiver clock error is compensated. This single-differenced pseudorange improves the absolute position accuracy. Thanks to this differential GNSS technique, I did not need any bias correction for the final position.</p> <h3>Doppler/ADR Factor</h3> <p>The pseudorange rate (Doppler shift) of the smartphone was noisier than I expected. 
I think the use of ADR (see P21-P22 of <a href="https://www.euspa.europa.eu/simplecount_pdf/tracker?file=expo/1.1_frank_van_diggelen_-_google.pdf" target="_blank">this document</a>) led to a higher level of competition. The following figure shows the velocities calculated from Doppler and ADR, respectively, compared to the ground truth. The velocity error from ADR is only about <strong>5 cm/s</strong>! In terms of accuracy alone, ADR was clearly superior to the Doppler shift. Although the ADR is super accurate, its availability is lower than that of the Doppler shift due to cycle slips and the half-cycle ambiguity problem, so the Doppler factor is used instead, but only when the ADR factor is not available.</p> <p><img src="https://user-images.githubusercontent.com/7933764/128528282-8383da71-dbed-4b11-b121-743bf3fbf839.png" alt="adr"></p> <h3>Motion Factor</h3> <p>The motion factor uses the estimated velocity and clock drift to add constraints between neighboring nodes. It simply adds a constraint so that the integral of the velocity and clock drift is equal to the difference between the neighboring states.</p> <h3>Pseudo-Position Factor</h3> <p>The pseudo-position factor was used only for the downtown area. Since the ground truth runs travel along the same path as the test data, the points on the ground truth path closest to the estimated trajectory were extracted and added to the graph as pseudo-position constraints with an appropriate covariance.</p> <h1>A Few Concerns</h1> <ul> <li><p>As you can see in this discussion, the CV and LB scores do not match, and there is also a big difference between the public and private LB scores. Without machine learning, I don't think there would be much overfitting to the public scores. I can't explain the difference in these scores.</p></li> <li><p>I have some doubts about the accuracy of the Ground Truth. The ground truth is based on Novatel SPAN (I also have this sensor), but the results are quite different depending on the multipath situation and analysis parameters. I would like the organizer to release the ground truth of the test data for future evaluation. I would also like to know the variance of the estimated ground truth.</p></li> </ul> <h1>My Impressions</h1> <ul> <li><p>Too bad I failed the decimeter challenge. The best public score among all my submissions was 1.4 m, but I failed to choose it for my final submission… I am convinced that with a combination of different techniques, I can eventually break 1 m!😃</p></li> <li><p>Graph optimization was very powerful. It optimizes all variables at the same time under nonlinear constraints, so it performs much better than filtering methods such as the KF. For the implementation, I used <a href="https://github.com/borglab/gtsam" target="_blank">GTSAM</a>, which is a C++ library with Matlab and Python wrappers.</p></li> <li><p>It's a shame that I couldn't incorporate machine learning, which I wanted to do at first. I'm enjoying reading the other teams' solutions. Let's write a paper together.</p></li> <li><p>There was a considerable difference in the GNSS observations depending on the phone. In the end, I didn't use phone merging and estimated the trajectory of only the best phone for each run. My best smartphone was the Samsung Galaxy S20 Ultra, which is very good at tracking the GNSS carrier phase. I love it😊</p></li> <li><p>The only thing I could trust was ADR. 
Thank you so much, ADR.</p></li> </ul> <p>Thanks!</p> <blockquote> <p>Are there any specific reason why you did not use the IMU sensor data here?</p> </blockquote> <p>The reason is that I simply did not have the time to do it, and in many environments where GNSS is available, the relative position can be estimated with sufficient accuracy from the carrier phase measurements (ADR). Another reason is that when I checked the IMU data, there were considerable time jumps… In my solution, only the 3D position/velocity was estimated, but I would like to try adding the 3D attitude to the estimated state and using the IMU observations.</p> <blockquote> <p>Did you incorporate any prior knowledge to estimate multipath satellite signals, such as a mask based on elevation angle, in the FGO? Or is it intended that the switchable constraints performs well enough to detect such signals?</p> </blockquote> <p>The robust optimization over outlier-containing measurements was left to the switchable constraints. This figure illustrates the switch values of each pseudorange finally estimated by the optimizer on the MTV course. It can be seen that the weights of observations with large residuals due to multipath, etc., are automatically reduced.<br> <img src="https://user-images.githubusercontent.com/7933764/128584580-8b50cbb0-41fa-481c-a43d-ddcc195b222c.png" alt="sw"></p> <p>In the case of the pseudorange factor, the variance of the measurement is based on the provided pseudorange uncertainty. In practice, the variance is adjusted by multiplying the provided pseudorange uncertainty by a factor that depends on the type of satellite signal.</p> <p>In many cases, the variance of the measurements (edges) is determined from the specifications or outputs of the sensor, so I manually adjusted some scale parameters of the variances.</p> <p>I read that paper! A group at the Hong Kong PolyU is also actively engaged in research in this area.</p> <p>Neither ADR nor Doppler is normalized. As shown in this <a href="https://www.kaggle.com/gymf123/onepager-tip-acumulated-delta-range-adr" target="_blank">code</a>, only the drift of the satellite clock is corrected in both cases. The time variability of the atmospheric delay is negligible within a second, so I do not correct it. Thank you very much!</p> <blockquote> <p>Can you share some Python code?</p> </blockquote> <p>Sorry, I did not use Python this time. I implemented the main algorithm in C++ and Matlab due to the use of GTSAM. I would like to share the source code when I get a chance, but it may take some time.</p> <blockquote> <p>This may be a dumb question, but can you also explain the difference between GTSAM and simple Bayesian Optimization?</p> </blockquote> <p>Graph optimization is different from Bayesian optimization. Unlike Bayesian optimization, which is a black-box optimization method, in graph optimization the cost function is constructed from various nonlinear constraints and then minimized directly.</p>
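<p>To give a rough flavour of that answer in code (a toy sketch only - the actual solution uses custom GTSAM factors written in C++/Matlab, and all names below are made up): the positions of all epochs can be optimized jointly by minimizing pseudorange-style residuals plus a motion-smoothness prior under a robust loss, which plays a role loosely similar to the switchable constraints.</p> <pre><code>import numpy as np
from scipy.optimize import least_squares

def joint_residuals(x, sat_pos, prm, sigma_motion=1.0):
    """x: flattened (n_epochs, 3) ECEF positions.

    sat_pos[i], prm[i]: satellite ECEF positions and clock-corrected
    (single-differenced) pseudoranges observed at epoch i.
    """
    pos = x.reshape(-1, 3)
    res = []
    for i in range(len(pos)):
        rho = np.linalg.norm(sat_pos[i] - pos[i], axis=1)   # predicted ranges
        res.append(rho - prm[i])                            # pseudorange terms
    motion = np.diff(pos, n=2, axis=0) / sigma_motion        # constant-velocity prior
    return np.concatenate(res + [motion.ravel()])

# x0 comes from the baseline positions; loss="cauchy" down-weights outliers,
# loosely mimicking what the switchable constraints achieve in the factor graph.
# sol = least_squares(joint_residuals, x0.ravel(), args=(sat_pos, prm), loss="cauchy")
</code></pre>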
Google Smartphone Decimeter Challenge
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you <a href="https://www.kaggle.com/michalbrys" target="_blank">@michalbrys</a> !</p> <p>Hello :) thanks for the question; in fact my diagram is a little messy at this point.<br> Well, the target value is <br> <code>diffPrM = baselinePrM - groundTruthPrM</code> <br> where both baselinePrM and groundTruthPrM are computed to the satellite position with the effect of earth rotation applied.<br> In fact I've tested many choices of target value and this one worked best for me.</p> <p>P.S. Congrats to your team on surviving the shakeup at 6th place with gold!</p> <p>Thank you, hosts, for such an exciting competition!<br> My final solution can be described as follows:</p> <p><img src="https://i.ibb.co/VBm4r75/scrn.png" alt=""></p> <p>and I was inspired by the Smart PPP algorithm presented <a href="https://link.springer.com/article/10.1007/s10291-021-01106-1" target="_blank">here</a></p> <ol> <li><p>XGBoost stacked ensemble improved baseline<br> I spent about 3 weeks on this part and started with the idea of correcting the baseline PrMs using data from other devices. It was a bumpy road, but in the end it worked. Each stacked ensemble model took about 5 hours to train with a GPU, and it was hard to validate. In my opinion this gives a baseline improved by about 10-40 cm. No further smoothing of the pseudoranges was applied. Technical trick: the amount of PrM correction was used as the weights in the WLS algorithm.</p></li> <li><p>Computing velocity<br> I spent about a week translating WlsPvt.m from Google's GPS measurement tools (a Matlab algorithm) to Python, using PseudorangeRateMetersPerSecond, and it was accurate to about 0.08 m/s on average. Technical trick: when the algorithm failed I used the baseline speedMps described in my position shift notebook. These results improved the Kalman filtering and the mean-by-stop technique a lot.</p></li> <li><p>ADR relative position<br> I spent about a week on computing relative positions using ADR and didn't succeed, so I used derived files from the public version of the Android GPS tools. It took a lot of time to process the raw GNSS files in this software, and it failed for two phones from the test set. Technical trick: some of the automatically downloaded ephemeris files were corrupted, so it worked to download them manually and turn off the internet during processing.</p></li> <li><p>Remove outliers<br> I took the idea from <a href="https://www.kaggle.com/dehokanta" target="_blank">@dehokanta</a> but upgraded it. Technical trick: if there were two or more consecutive outliers, remove them and interpolate.</p></li> <li><p>Mixing signals from other devices:<br> My very first submissions started here. It turned out that median averaging is better than the mean, but I ended up with the idea found <a href="https://stackoverflow.com/questions/55813719/multi-sensors-fusion-using-kalman-filter" target="_blank">here</a>, which is described as follows:<br> <img src="https://i.ibb.co/dJddjV9/var.png" alt=""><br> Technical trick: the variance among all epochs is pretty similar, so I windowed the signal (10-30 epochs, adjusted by speed category and region) </p></li> <li><p>Mean by stop<br> Just using the velocity in m/s with a threshold of about 0.5-1. 
At this point (XGBoost + postprocessing: remove_outliers + mixing + Kalman + stop) I reached 3.945 public and 2.379 private.</p></li> <li><p>Snap to ADR shape<br> I used the mean ADR position minus the mean position during a valid cycle as the shift and shifted the relative positions accordingly. Technical trick: this shift was weighted by the ADR uncertainty in meters (public 3.345, private 1.903).</p></li> <li><p>Snap to ground truth<br> At this point I started snapping this solution to the ground truth (a 2 m threshold for high speed and a much larger threshold for SJC). Technical trick: I selected the SJC regions by an overall mean speed &lt; 6 m/s (best public 3.159, private 1.801).</p></li> <li><p>Technical trick: joblib + Optuna were very useful for me across almost all steps.</p></li> </ol> <p>After all this, using GNSS will never be the same for me, since I started as a complete newbie. Thank you!</p>
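<p>For step 5, here is a minimal sketch of the windowed, variance-weighted fusion, assuming the per-device tracks have already been aligned by epoch into a pandas DataFrame (one column per phone) and reading the linked formula as inverse-variance weighting. The helper name <code>fuse_tracks</code> and the default window length are illustrative only, not taken from my pipeline.</p>
<pre><code>import numpy as np
import pandas as pd

def fuse_tracks(tracks: pd.DataFrame, window: int = 20):
    """Inverse-variance weighted fusion of one coordinate across devices.

    tracks: index = epoch, one column per phone, values already aligned/interpolated.
    window: rolling window (in epochs) used to estimate each device's local noise,
            standing in for the 10-30 epoch setting adjusted by speed and region.
    """
    # Local noise estimate: variance of each device's residual around its rolling mean.
    rolling_mean = tracks.rolling(window, center=True, min_periods=1).mean()
    residual = tracks - rolling_mean
    variance = residual.rolling(window, center=True, min_periods=2).var()
    variance = variance.clip(lower=1e-12).fillna(np.inf)   # missing variance gives zero weight

    # Inverse-variance weights, renormalised per epoch over the devices that have data.
    weights = (1.0 / variance).where(tracks.notna(), 0.0)
    weights = weights.div(weights.sum(axis=1), axis=0)

    return (tracks * weights).sum(axis=1)
</code></pre>
<p>Such a fusion would be applied to each coordinate separately (for example latitude and longitude) before the Kalman filter and mean-by-stop steps.</p>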
Google Smartphone Decimeter Challenge
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Smartphone Decimeter Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I am planning to publish the code after refactoring it and adding inertial measurements. For now it is too dirty and buggy. The main idea is actually simple and you can easily reproduce it.</p> <p>Maybe I first need to say a few words about ADR.<br> Without car movement, ADR incorporates changes in satellite positions only. ADR measurements are very accurate; you will get roughly 0.02 m error between the changes in observable satellite distances and the ADR differentials. With car movement, you get a non-zero difference between the distance change to a constant position and the ADR change. If you get an X-meter difference between the ADR change and the satellite distance change, you can use this measure to move the current position X meters towards that satellite. So while loading the ADRs I transform each measurement into a unit vector pointing to the satellite and a position shift along that direction. If the found position shift is correct, it matches most of the loaded differences (except for measurements where cycle slips occur).</p> <p>You create a TF model which contains the positions and biases as trainable weights. Positions are initialized from the baseline prediction, biases from zero. The model is very simple: it takes an epoch as input and returns the position and bias for that epoch directly from the weights.</p> <p>After that you write a custom training loop (a minimal sketch, under stated assumptions, follows a few paragraphs below).<br> At every step:</p> <ol> <li>Run the model and get positions for every epoch<br> 1.1 Using these positions, calculate the distance to the satellite for every pseudorange measurement from the logs<br> 1.2 Add the absolute difference between the calculated distances and the pseudoranges shifted by the epoch bias to the loss<br> 2.1 Using the positions, calculate the speed, which is just the shift from the previous epoch to the current one<br> 2.2 Calculate the acceleration, which is again just the difference between the speeds of consecutive epochs<br> 2.3 Add every acceleration greater than 10 to the loss<br> 3.1 Calculate the movement towards the satellite for every ADR measurement you have (that is just the dot product of the prepared satellite unit vectors and the position shift)<br> 3.2 Add the difference between the observed and calculated movement to the loss</li> <li>Use the calculated loss for gradient descent on the model weights</li> </ol> <p>That's all. After some time of training you will get correct positions.</p> <p>Looks like I should somehow present my solution.</p> <p>First, I want to beg your pardon for my terrible English, but hopefully there will not be much text.</p> <p>I did not use any preprocessing or post-processing for the data, with one exception: I had to filter DeltaRange because on some tracks it is completely false.</p>
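<p>To make the training loop above concrete, here is a minimal TensorFlow sketch of the idea. It is not the actual competition code: the arrays <code>baseline</code>, <code>sat_pos</code>, <code>prm</code>, <code>adr_unit</code> and <code>adr_delta</code> are random placeholders that only illustrate the expected shapes, the geometry is simplified to a single clock bias per epoch, and Earth-rotation and atmospheric corrections are omitted.</p>
<pre><code>import numpy as np
import tensorflow as tf

# Placeholder arrays with the expected shapes; in practice they come from the decoded GNSS logs.
n_epochs, n_sats = 50, 8
rng = np.random.default_rng(0)
baseline = rng.normal(size=(n_epochs, 3)).astype("float32")                      # baseline ECEF positions (m)
sat_pos = (rng.normal(size=(n_epochs, n_sats, 3)) * 2.0e7).astype("float32")     # satellite ECEF positions (m)
prm = np.linalg.norm(sat_pos - baseline[:, None, :], axis=-1).astype("float32")  # pseudoranges (m)
adr_unit = (sat_pos[1:] / np.linalg.norm(sat_pos[1:], axis=-1, keepdims=True)).astype("float32")
adr_delta = np.zeros((n_epochs - 1, n_sats), dtype="float32")                    # ADR range changes between epochs (m)
sat_pos, prm, adr_unit, adr_delta = map(tf.constant, (sat_pos, prm, adr_unit, adr_delta))


class Trajectory(tf.Module):
    """Positions and per-epoch receiver clock biases stored directly as trainable weights."""

    def __init__(self, baseline_pos):
        self.pos = tf.Variable(baseline_pos, dtype=tf.float32)                        # [n_epochs, 3]
        self.bias = tf.Variable(tf.zeros([baseline_pos.shape[0]]), dtype=tf.float32)  # [n_epochs]


def total_loss(m, max_acc=10.0, dt=1.0):
    # 1) pseudorange term: |distance(position, satellite) - (pseudorange - epoch bias)|
    dist = tf.norm(sat_pos - m.pos[:, None, :], axis=-1)             # [n_epochs, n_sats]
    pr = tf.reduce_mean(tf.abs(dist - (prm - m.bias[:, None])))

    # 2) physics term: penalise accelerations above max_acc (assuming 1 s epochs)
    vel = (m.pos[1:] - m.pos[:-1]) / dt
    acc = tf.norm(vel[1:] - vel[:-1], axis=-1) / dt
    phys = tf.reduce_mean(tf.nn.relu(acc - max_acc))

    # 3) ADR term: position shift projected on the satellite direction vs the observed ADR change
    shift = m.pos[1:] - m.pos[:-1]                                   # [n_epochs-1, 3]
    proj = tf.reduce_sum(adr_unit * shift[:, None, :], axis=-1)      # [n_epochs-1, n_sats]
    adr = tf.reduce_mean(tf.abs(proj - adr_delta))

    return pr + phys + adr


model = Trajectory(baseline)
opt = tf.keras.optimizers.Adam(learning_rate=0.5)
for step in range(1000):
    with tf.GradientTape() as tape:
        loss = total_loss(model)
    grads = tape.gradient(loss, [model.pos, model.bias])
    opt.apply_gradients(zip(grads, [model.pos, model.bias]))
</code></pre>
<p>With real log-derived arrays, the learning rate, number of steps and acceleration threshold would need tuning per track, and cycle-slip-affected ADR measurements would be dropped before building <code>adr_delta</code>.</p>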
<p>Also, I have not used any complex algorithms.</p> <p>All I used from the data is pseudoranges and deltas.</p> <p>In order to filter out the deltas, I calculated the position offset to the next epoch and kept only those epochs that gave enough consistent satellites (a minimal sketch of such a check is at the end of this post).</p> <p>The main idea of my algorithm was that the positions I get on the track must be in good agreement with the pseudoranges and deltas, and in addition they must satisfy certain physical laws.</p> <p>To do this, I built a TensorFlow model that contained the positions and time offsets as weights. Then I minimized a loss which consisted of the sum of the satellite distance errors and the delta change errors, plus a penalty for unnecessary acceleration to avoid trajectory wobbling.</p> <p>This allowed me to get first place on the public leaderboard and third on the private one.<br> As you can see, I did not use the ground truth positions at all.</p> <p>After the end of the competition, I spent a little more time and added a pseudorange correction from the base station and a calculated offset relative to the GT positions, which greatly improved the result; it now looks like this:<br> Private 1.39533, Public 2.59287</p> <p>On the training data, the error decreased to 1.1 m.</p> <p>I'm going to add some more data from the IMU; perhaps this will allow us to get below one meter of error.</p> <p>I tried to apply the base station data during the competition, but surprisingly got a worse result than without corrections. Only after the competition was over did I find that the results were very good but shifted by about 2 meters.</p> <p>Moreover, the shift is 1.5 m to the north and 0.5 m to the west. It looks artificial. Maybe the GT data is not so GT.</p>
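<p>Here is a minimal sketch of one way to implement the DeltaRange consistency filter mentioned above; the thresholds and the helper name <code>epoch_is_consistent</code> are illustrative, and sign conventions are glossed over.</p>
<pre><code>import numpy as np

def epoch_is_consistent(unit_vecs, delta_ranges, resid_tol=1.0, min_consistent=6):
    """Decide whether one epoch's delta ranges agree on a single position offset.

    unit_vecs:    [n_sats, 3] line-of-sight unit vectors from receiver to satellite
    delta_ranges: [n_sats]    observed range change to the next epoch, in metres

    Fits a 3D position offset plus a common clock-drift term by least squares and
    keeps the epoch only if enough satellites are close to the fitted solution.
    """
    n = len(delta_ranges)
    b = np.asarray(delta_ranges, dtype=float)
    A = np.hstack([np.asarray(unit_vecs, dtype=float), np.ones((n, 1))])  # unknowns: offset (3) + drift (1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    residuals = np.abs(A @ x - b)
    return int(np.sum(residuals &lt; resid_tol)) &gt;= min_consistent
</code></pre>
<p>Epochs that fail such a check would simply have their deltas dropped, leaving the pseudorange and acceleration terms of the loss to bridge the gap.</p>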