Self-Training with Noisy Student Improves ImageNet Classification

Authors: Qizhe Xie, Minh-Thang Luong, Eduard Hovy, Quoc V. Le

Description: We present a simple self-training method that achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images.

The method works as follows. First, a teacher model is trained in a supervised fashion. Then, that teacher is used to label the unlabeled data. Next, a larger classifier is trained on the combined set, adding noise (the "noisy student"), and we iterate this process by putting back the student as the teacher. Training robust supervised models requires labeled data for that first step. Noisy Student leads to significant improvements across all model sizes for EfficientNet. We use our best model, Noisy Student with EfficientNet-L2, to teach student models with sizes ranging from EfficientNet-B0 to EfficientNet-B7, and we duplicate images in classes where there are not enough images.

Our experiments show that an important element for this simple method to work well at scale is that the student model should be noised during its training, while the teacher should not be noised during the generation of pseudo labels. When the student model is deliberately noised, it is actually trained to be consistent with the more powerful teacher model, which is not noised when it generates pseudo labels. Different kinds of noise, however, may have different effects. Afterward, we further increased the student model size to EfficientNet-L2, with EfficientNet-L1 as the teacher.

We also evaluate our EfficientNet-L2 models with and without Noisy Student against an FGSM attack and on several robustness benchmarks. These test sets are considered robustness benchmarks because the test images are either much harder, for ImageNet-A, or different from the training images, for ImageNet-C and ImageNet-P. The top-1 accuracy reported in this paper is the average accuracy over all images included in ImageNet-P, and flip probability is the probability that the model changes its top-1 prediction under different perturbations. For ImageNet-C and ImageNet-P, we evaluate our models on the two released versions with resolutions 224x224 and 299x299 and resize images to the resolution EfficientNet is trained on. The most interesting image is shown on the right of the first row.
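To make the loop concrete, here is a minimal sketch of the teacher-student procedure described above. The helper callables (train_fn, pseudo_label_fn, enlarge_fn) are hypothetical placeholders standing in for the paper's actual training, inference, and model-scaling code; they are not part of the released implementation.

```python
# Minimal sketch of the Noisy Student loop, assuming the caller supplies:
#   train_fn(data, model, noise)   -> trains and returns a model
#   pseudo_label_fn(model, image)  -> returns the teacher's (soft) label for an image
#   enlarge_fn(model)              -> returns an equal-or-larger student architecture
def noisy_student(labeled_data, unlabeled_images,
                  train_fn, pseudo_label_fn, enlarge_fn, iterations=3):
    # 1) Train the initial teacher on labeled data.
    teacher = train_fn(labeled_data, model=None, noise=False)
    for _ in range(iterations):
        # 2) The un-noised teacher labels the unlabeled images,
        #    keeping the pseudo labels as accurate as possible.
        pseudo = [(img, pseudo_label_fn(teacher, img)) for img in unlabeled_images]
        # 3) Train an equal-or-larger student on labeled + pseudo-labeled data,
        #    with noise (RandAugment, dropout, stochastic depth) turned on.
        student = train_fn(labeled_data + pseudo, model=enlarge_fn(teacher), noise=True)
        # 4) Iterate: put the student back as the teacher.
        teacher = student
    return teacher
```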
We present Noisy Student Training, a semi-supervised learning approach that works well even when labeled data is abundant. The abundance of data on the internet is vast. Compared to consistency training [45, 5, 74], the self-training / teacher-student framework is better suited for ImageNet because we can train a good teacher on ImageNet using labeled data. Although they have produced promising results, in our preliminary experiments consistency regularization works less well on ImageNet: in the early phase of ImageNet training it regularizes the model towards high-entropy predictions and prevents it from achieving good accuracy. Self-training was previously used to improve ResNet-50 from 76.4% to 81.2% top-1 accuracy [76], which is still far from the state-of-the-art accuracy. Zoph et al. [2] show that self-training is superior to pre-training with ImageNet supervised learning on several computer vision tasks. This is why "Self-training with Noisy Student improves ImageNet classification" by Qizhe Xie et al. makes me very happy.

We conduct experiments on the ImageNet 2012 ILSVRC challenge prediction task, since it is one of the most heavily benchmarked datasets in computer vision and improvements on ImageNet tend to transfer to other datasets. We then use the teacher model to generate pseudo labels on unlabeled images, running it over the JFT dataset to predict a label for each image. We then train a student model which minimizes the combined cross entropy loss on both labeled images and unlabeled images. During the learning of the student, we inject noise such as dropout, stochastic depth, and data augmentation via RandAugment, so that the student generalizes better than the teacher. We have also observed that using hard pseudo labels can achieve as good or slightly better results when a larger teacher is used. Iterative training is not used here for simplicity. EfficientNet-L1 approximately doubles the training time of EfficientNet-L0.

Summary of key results compared to previous state-of-the-art models: our experiments showed that self-training with Noisy Student and EfficientNet can achieve an accuracy of 87.4%, which is 1.9% higher than without Noisy Student. Further, Noisy Student outperforms the state-of-the-art accuracy of 86.4% by FixRes ResNeXt-101 WSL [44, 71], which requires 3.5 billion Instagram images labeled with tags. This result is also a new state of the art and 1% better than the previous best method, which used an order of magnitude more weakly labeled data [44, 71]. On robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to 83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces ImageNet-P mean flip rate from 27.8 to 12.2.

Figure 1(b) shows images from ImageNet-C and the corresponding predictions. In the top-left image, the model without Noisy Student ignores the sea lions and mistakenly recognizes a buoy as a lighthouse, while the model with Noisy Student recognizes the sea lions. We also evaluate under an adversarial attack: this attack performs one gradient descent step on the input image [20], with the update on each pixel set to ε.
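As an illustration of that one-step attack, the sketch below shows a standard FGSM formulation in PyTorch; the epsilon value, preprocessing, and loss are generic assumptions rather than the paper's exact evaluation setup.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    """One-step FGSM: move every pixel by +/- epsilon in the direction
    that increases the cross-entropy loss of the model's prediction."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adversarial = images + epsilon * images.grad.sign()
    # Keep pixel values in a valid range (assumes inputs scaled to [0, 1]).
    return adversarial.clamp(0.0, 1.0).detach()
```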
Deep learning has shown remarkable successes in image recognition in recent years [35, 66, 62, 23, 69]. Our procedure went as follows. We use the labeled images to train a teacher model using the standard cross entropy loss. During the generation of the pseudo labels, the teacher is not noised so that the pseudo labels are as accurate as possible. For this purpose, we use the recently developed EfficientNet architectures [69] because they have a larger capacity than ResNet architectures [23]. For unlabeled images, we set the batch size to be three times the batch size of labeled images for large models, including EfficientNet-B7, L0, L1 and L2. Our largest model, EfficientNet-L2, needs to be trained for 3.5 days on a Cloud TPU v3 Pod, which has 2048 cores. Scaling width and resolution by a factor c leads to c^2 times the training time, and scaling depth by c leads to c times the training time.

As shown in Table 2, Noisy Student with EfficientNet-L2 achieves 87.4% top-1 accuracy, which is significantly better than the best previously reported accuracy on EfficientNet of 85.0%. As shown in Tables 3, 4 and 5, when compared with the previous state-of-the-art model ResNeXt-101 WSL [44, 48] trained on 3.5B weakly labeled images, Noisy Student yields substantial gains on robustness datasets. Overall, EfficientNets with Noisy Student provide a much better tradeoff between model size and accuracy when compared with prior works. For example, without Noisy Student, the model predicts bullfrog for the image shown on the left of the second row, which might result from the black lotus leaf on the water.

In addition to improving state-of-the-art results, we conduct additional experiments to verify whether Noisy Student can benefit other EfficientNet models. We use EfficientNet-B4 as both the teacher and the student. Whether the model benefits from more unlabeled data depends on the capacity of the model, since a small model can easily saturate while a larger model can benefit from more data. Yalniz et al. propose a related pipeline, based on a teacher/student paradigm, that leverages a large collection of unlabelled images to improve the performance of a given target architecture, like ResNet-50 or ResNeXt.
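The combined objective mentioned earlier (standard cross entropy on labeled images plus cross entropy against the teacher's soft pseudo labels on unlabeled images) can be sketched as below. The roughly 3:1 unlabeled-to-labeled ratio is assumed to be handled by the data loaders, and the function and variable names are illustrative, not the released code.

```python
import torch
import torch.nn.functional as F

def student_loss(student, labeled_batch, pseudo_batch):
    """Combined cross-entropy loss for one student update.
    labeled_batch: (images, integer ground-truth labels)
    pseudo_batch:  (images, teacher's soft pseudo-label distributions),
                   typically about 3x as many images as the labeled batch."""
    x_l, y_l = labeled_batch
    x_u, q_u = pseudo_batch

    # Standard cross entropy on labeled images.
    loss_labeled = F.cross_entropy(student(x_l), y_l)

    # Cross entropy between the student's prediction and the soft teacher labels.
    log_p_u = F.log_softmax(student(x_u), dim=-1)
    loss_unlabeled = -(q_u * log_p_u).sum(dim=-1).mean()

    return loss_labeled + loss_unlabeled
```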
To achieve this result, we first train an EfficientNet model on labeled ImageNet images and use it as a teacher to generate pseudo labels on 300M unlabeled images. Self-training first uses labeled data to train a good teacher model, then uses the teacher model to label unlabeled data, and finally uses the labeled and pseudo-labeled data to jointly train a student model.

Self-training with Noisy Student improves ImageNet classification (CVPR 2020); code: https://github.com/google-research/noisystudent. In brief, Noisy Student adds noise (dropout, stochastic depth and data augmentation) to the student model while the teacher stays un-noised. Unlabeled images come from the JFT dataset; predictions with confidence above 0.3 are kept and at most 130K images are selected per class. The students range from the EfficientNet baselines up to the larger EfficientNet-L0, L1 and L2, trained with batch sizes of 512, 1024 or 2048 for 350 or 700 epochs. Iterative training proceeds in stages: EfficientNet-B7 first teaches an improved B7, the improved B7 teaches EfficientNet-L0, L0 teaches L1, and L1 teaches L2. Our largest model, EfficientNet-L2, needs to be trained for 3.5 days on a Cloud TPU v3 Pod, which has 2048 cores.

In the above experiments, iterative training was used to optimize the accuracy of EfficientNet-L2, but here we skip it as it is difficult to use iterative training for many experiments. Then we finetune the model at a larger resolution for 1.5 epochs on unaugmented labeled images. The learning rate starts at 0.128 for a labeled batch size of 2048 and decays by 0.97 every 2.4 epochs if trained for 350 epochs, or every 4.8 epochs if trained for 700 epochs.

Figure 1(a) shows example images from ImageNet-A and the predictions of our models. For a small student model, using our best model Noisy Student (EfficientNet-L2) as the teacher leads to more improvements than using the same model as the teacher, which shows that it is helpful to push the performance with our method when small models are needed for deployment. EfficientNet [69] proposes a scaling method that uniformly scales all dimensions of depth, width and resolution using a simple yet highly effective compound coefficient, and demonstrates its effectiveness by scaling up MobileNets and ResNet. The robustness benchmark of [24] standardizes and expands the corruption robustness topic, shows which classifiers are preferable in safety-critical applications, and proposes a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations.
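The stepwise schedule quoted above can be written directly as a small function; treat it as an illustration of the quoted numbers rather than the exact training-loop implementation (warmup and other details are omitted).

```python
def learning_rate(epoch, total_epochs=350, base_lr=0.128):
    """Exponential step decay: start at 0.128 (labeled batch size 2048) and
    multiply by 0.97 every 2.4 epochs for 350-epoch runs, or every 4.8 epochs
    for 700-epoch runs."""
    decay_every = 2.4 if total_epochs == 350 else 4.8
    return base_lr * (0.97 ** (epoch // decay_every))
```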
In other words, using Noisy Student makes a much larger impact on the accuracy than changing the architecture. We call the method self-training with Noisy Student to emphasize the role that noise plays in the method and results. Noisy Student self-training is an effective way to leverage unlabelled datasets and improve accuracy by adding noise to the student model during training, so that it learns beyond the teacher's knowledge. One might argue that the improvements from using noise could result from preventing overfitting to the pseudo labels on the unlabeled images.

We then train a larger EfficientNet as a student model on the combination of labeled and pseudo-labeled images. As can be seen from Table 8, the performance stays similar when we reduce the data to 1/16 of the total data, which amounts to 8.1M images after duplicating. In particular, we set the survival probability in stochastic depth to 0.8 for the final layer and follow the linear decay rule for the other layers. Hence, EfficientNet-L0 has around the same training speed as EfficientNet-B7 but more parameters, which give it a larger capacity. Our main results are shown in Table 1.

BibTeX:
@article{Xie2019SelfTrainingWN,
  title   = {Self-Training With Noisy Student Improves ImageNet Classification},
  author  = {Qizhe Xie and Eduard H. Hovy and Minh-Thang Luong and Quoc V. Le},
  journal = {2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2019}
}
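The linear decay rule for stochastic depth mentioned above can be sketched as follows; the indexing convention (layers numbered 1 to num_layers) is an assumption for illustration.

```python
def stochastic_depth_survival(layer_index, num_layers, final_survival=0.8):
    """Linear decay rule: survival probability decreases linearly from 1.0 near
    the input to 0.8 at the final layer, matching the setting described above."""
    return 1.0 - (layer_index / num_layers) * (1.0 - final_survival)

# Example: with 10 layers, the final layer survives with probability 0.8.
# stochastic_depth_survival(10, 10) -> 0.8
```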
Next, a larger student model is trained on the combination of all data and achieves better performance than the teacher by itself.

Paper: https://arxiv.org/abs/1911.04252
Code: https://github.com/google-research/noisystudent
Models: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet

Noisy Student Training extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning. By showing the models only labeled images, we limit ourselves from making use of unlabeled images available in much larger quantities to improve the accuracy and robustness of state-of-the-art models. Although the images in the dataset have labels, we ignore the labels and treat them as unlabeled data. Due to duplications, there are only 81M unique images among these 130M images. In the following, we will first describe experiment details to achieve our results. We first report the validation set accuracy on the ImageNet 2012 ILSVRC challenge prediction task, as commonly done in the literature [35, 66, 23, 69] (see also [55]).

Not only does our method improve standard ImageNet accuracy, it also improves classification robustness on much harder test sets by large margins: ImageNet-A [25] top-1 accuracy from 16.6% to 74.2%, ImageNet-C [24] mean corruption error (mCE) from 45.7 to 31.2, and ImageNet-P [24] mean flip rate (mFR) from 27.8 to 16.1. The swing in the picture is barely recognizable by a human, while the Noisy Student model still makes the correct prediction.

Works based on pseudo labels [37, 31, 60, 1] are similar to self-training, but suffer the same problem as consistency training, since they rely on a model being trained instead of a converged model with high accuracy to generate pseudo labels. Finally, frameworks in semi-supervised learning also include graph-based methods [84, 73, 77, 33], methods that make use of latent variables as target variables [32, 42, 78], and methods based on low-density separation [21, 58, 15], which might provide complementary benefits to our method. [76] also proposed to first train only on unlabeled images and then finetune the model on labeled images as the final stage. Their main goal is to find a small and fast model for deployment. Their noise model is video specific and not relevant for image classification. For RandAugment, we apply two random operations with the magnitude set to 27.
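For reference, torchvision ships a RandAugment transform with the same two knobs (number of operations and magnitude). The snippet below mirrors the setting quoted above, though torchvision's magnitude scale is not guaranteed to match the paper's implementation exactly, and the surrounding crop/flip transforms are illustrative assumptions.

```python
import torchvision.transforms as T

# Input noise for the student: two random RandAugment operations per image.
# Magnitude 27 follows the text above; torchvision uses 31 magnitude bins by
# default, so the effective strength may differ from the paper's setup.
student_augment = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.RandAugment(num_ops=2, magnitude=27),
    T.ToTensor(),
])
```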
Related work has systematically benchmarked state-of-the-art methods that use unlabeled data, including domain-invariant, self-training, and self-supervised methods, and shows that their success on WILDS is limited. Also related to our work is Data Distillation [52], which ensembled predictions for an image under different transformations to teach a student network. This work investigates a new method for incorporating unlabeled data into a supervised learning pipeline. We will then show our results on ImageNet and compare them with state-of-the-art models.

Using self-training with Noisy Student, together with 300M unlabeled images, we improve EfficientNet's [69] ImageNet top-1 accuracy to 87.4%. The total gain of 2.4% comes from two sources: by making the model larger (+0.5%) and by Noisy Student (+1.9%). For instance, on ImageNet-A, Noisy Student achieves 74.2% top-1 accuracy, which is approximately 57% more accurate than the previous state-of-the-art model. As shown in Figure 3, Noisy Student also improves adversarial robustness against an FGSM attack, leading to approximately 10% improvement in accuracy even though the model is not optimized for adversarial robustness.

We obtain unlabeled images from the JFT dataset [26, 11], which has around 300M images. Then, by using the improved B7 model as the teacher, we trained an EfficientNet-L0 student model. We sample 1.3M images in confidence intervals. Here we show an implementation of Noisy Student Training on SVHN, which boosts the performance of a supervised model from 97.9% accuracy to 98.6% accuracy. The pseudo labels can be soft (a continuous distribution) or hard (a one-hot distribution).
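A small sketch of that soft/hard conversion, assuming raw teacher logits as input (NumPy is used here purely for illustration; it is not the released implementation):

```python
import numpy as np

def make_pseudo_labels(teacher_logits, hard=False):
    """Convert un-noised teacher logits into pseudo labels: either the full
    softmax distribution (soft) or a one-hot vector at the argmax (hard)."""
    shifted = teacher_logits - teacher_logits.max(axis=-1, keepdims=True)
    soft = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    if not hard:
        return soft
    one_hot = np.zeros_like(soft)
    one_hot[np.arange(soft.shape[0]), soft.argmax(axis=-1)] = 1.0
    return one_hot
```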
Noisy Student Training seeks to improve on self-training and distillation in two ways. First, it makes the student larger than, or at least equal to, the teacher so the student can better learn from a larger dataset. Secondly, to enable the student to learn a more powerful model, we also make the student model larger than the teacher model. The main use case of knowledge distillation, in contrast, is model compression by making the student model smaller. Noisy Student Training is a semi-supervised learning method which achieves 88.4% top-1 accuracy on ImageNet (SOTA) and surprising gains on robustness and adversarial benchmarks. It is based on the self-training framework and trained with four simple steps: (1) train a classifier on labeled data (the teacher); (2) use the teacher to infer pseudo labels on a much larger unlabeled dataset; (3) train a larger classifier on the combined set, adding noise (the noisy student); (4) iterate, putting the student back as the teacher. We train our model using the self-training framework [59], which has three main steps: 1) train a teacher model on labeled images, 2) use the teacher to generate pseudo labels on unlabeled images, and 3) train a student model on the combination of labeled images and pseudo-labeled images.

Scripts used for our ImageNet experiments are provided, along with similar scripts to run predictions on unlabeled data, filter and balance the data, and train using the filtered data. You can also use the colab script noisystudent_svhn.ipynb to try the method on free Colab GPUs. We apply dropout to the final classification layer with a dropout rate of 0.5. Stochastic Depth is a simple yet ingenious idea to add noise to the model by bypassing the transformations through skip connections. Here we study how to effectively use out-of-domain data. Due to the large model size, the training time of EfficientNet-L2 is approximately five times the training time of EfficientNet-B7. We also list EfficientNet-B7 as a reference. The baseline model achieves an accuracy of 83.2%, and Noisy Student can still improve the accuracy by 1.6%. In contrast, changing architectures or training with weakly labeled data gives modest gains in accuracy, from 4.7% to 16.6%. The ImageNet-A test set [25] consists of difficult images that cause significant drops in accuracy for state-of-the-art models. The mapping from the 200 classes to the original ImageNet classes is available online at https://github.com/hendrycks/natural-adv-examples/blob/master/eval.py.
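As one concrete noise setting, the final-layer dropout mentioned above can be expressed as a classifier head like the following (PyTorch used for illustration only; the feature width is left as a parameter rather than the paper's exact head):

```python
import torch.nn as nn

def classification_head(num_features, num_classes=1000, dropout_rate=0.5):
    """Classifier head with dropout applied before the final linear layer,
    matching the 0.5 final-layer dropout rate quoted above."""
    return nn.Sequential(
        nn.Dropout(p=dropout_rate),
        nn.Linear(num_features, num_classes),
    )
```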
CLIP (Contrastive Language-Image Pre-training) builds on a large body of work on zero-shot transfer, natural language supervision, and multimodal learning. The idea of zero-data learning dates back over a decade [^reference-8] but until recently was mostly studied in computer vision as a way of generalizing to unseen object categories. It has also been found that training and scaling strategies may matter more than architectural changes, and that the resulting ResNets match recent state-of-the-art models. However, the additional hyperparameters introduced by the ramping-up schedule and the entropy minimization make such methods more difficult to use at scale. [57] used self-training for domain adaptation.

After testing our model's robustness to common corruptions and perturbations, we also study its performance on adversarial perturbations. We used the version from [47], which filtered the validation set of ImageNet. The score is normalized by AlexNet's error rate so that corruptions with different difficulties lead to scores of a similar scale. Test images on ImageNet-P underwent different scales of perturbations. For instance, in the right column, as the image of the car undergoes a small rotation, the standard model changes its prediction from racing car to car wheel to fire engine; in contrast, the predictions of the model with Noisy Student remain quite stable. Our finding is consistent with similar arguments that using unlabeled data can improve adversarial robustness [8, 64, 46, 80].

For this purpose, we use a much larger corpus of unlabeled images, where some images may not belong to any category in ImageNet. For each class, we select at most 130K images that have the highest confidence. Finally, for classes that have less than 130K images, we duplicate some images at random so that each class can have 130K images. The hyperparameters for these noise functions are the same for EfficientNet-B7, L0, L1 and L2. In all previous experiments, the student's capacity is as large as or larger than the capacity of the teacher model. We first improved the accuracy of EfficientNet-B7 using EfficientNet-B7 as both the teacher and the student. During this process, we kept increasing the size of the student model to improve the performance.
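The filtering and balancing step just described (keep the most confident images per class, then duplicate at random where a class falls short of 130K) can be sketched as follows. The data layout, a flat list of (image_id, class_id, confidence) tuples, is an assumption for illustration; the repository's scripts handle this over the full unlabeled set.

```python
import random
from collections import defaultdict

def filter_and_balance(predictions, per_class=130_000, seed=0):
    """predictions: iterable of (image_id, class_id, confidence) from the teacher.
    Keeps at most `per_class` highest-confidence images per class and randomly
    duplicates images in classes that have fewer than `per_class` images."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for image_id, class_id, confidence in predictions:
        by_class[class_id].append((confidence, image_id))

    balanced = {}
    for class_id, items in by_class.items():
        items.sort(reverse=True)                       # highest confidence first
        kept = [image_id for _, image_id in items[:per_class]]
        while kept and len(kept) < per_class:          # duplicate at random if short
            kept.append(rng.choice(kept))
        balanced[class_id] = kept
    return balanced
```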