
Learning Loss for Active Learning

Paper

Background

We need a big dataset to train a deep learning model.

  • Low cost: image collection (unlabeled dataset)
  • High cost: image annotation (labeled dataset)

Semi-Supervised Learning vs. Active Learning

  • Semi-Supervised Learning
    • How to leverage the unlabeled dataset when training on the labeled data
  • Active Learning
    • Which data should be labeled for better performance

Passive Learning

In passive learning, the data to be labeled is sampled at random; the learner has no influence on which samples get annotated.


Related Research

① Least Confident Sampling

Query the sample whose most probable class has the lowest probability: x* = argmax_x (1 − max_y P(y|x)).

② Margin Sampling

Query the sample with the smallest gap between its two most probable classes: x* = argmin_x (P(y₁|x) − P(y₂|x)).

③ Entropy Sampling

Query the sample whose predictive distribution has the highest entropy: x* = argmax_x −Σ_y P(y|x) log P(y|x).
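
All three scores can be computed directly from the model's softmax output. Below is a minimal NumPy sketch (the function names are mine, not from the paper); in every case a higher score means "query this sample first":

```python
import numpy as np

def least_confident(probs):
    """Score = 1 - max class probability; higher means more uncertain."""
    return 1.0 - probs.max(axis=1)

def margin_score(probs):
    """Negative gap between the two most probable classes; higher = more uncertain."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    return -(top2[:, 1] - top2[:, 0])

def entropy_score(probs):
    """Shannon entropy of the predictive distribution; higher = more uncertain."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

probs = np.array([[0.90, 0.05, 0.05],   # confident prediction
                  [0.40, 0.35, 0.25]])  # ambiguous prediction
for f in (least_confident, margin_score, entropy_score):
    print(f.__name__, f(probs))  # the second sample always scores higher
```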


Active Learning

  • Example: labeling the Welsh corgi image, the sample the model is most uncertain about, yields the biggest improvement when training the model

Active Learning Process

  • Random Sampling
    • No observable change in the decision boundary after adding the labeled data
  • Active Learning
    • Trains the model much faster by selecting informative data near the decision boundary (a sketch of one cycle follows this list)
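
As a rough sketch, one cycle of pool-based active learning looks like the following; `train`, `score`, and `annotate` are placeholder callables for illustration, not an API from the paper:

```python
def active_learning_cycle(labeled, unlabeled, budget, train, score, annotate):
    """One cycle of pool-based active learning (illustrative, not the paper's code)."""
    model = train(labeled)                                   # 1. fit on the current labeled set
    ranked = sorted(unlabeled, key=lambda x: score(model, x), reverse=True)
    queried = ranked[:budget]                                # 2. most informative samples
    labeled = labeled + [(x, annotate(x)) for x in queried]  # 3. a human labels them
    unlabeled = ranked[budget:]                              # 4. the pool shrinks
    return model, labeled, unlabeled
```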

Loss Prediction Module

  • Predicts the loss for a single unlabeled sample
  • A much smaller network than the target model, trained jointly with it
  • No hand-crafted uncertainty measure has to be defined
  • Task-agnostic: e.g., classification, regression, object detection

Loss Prediction: How Does It Work?

The module is attached to intermediate layers of the target model, and the two networks are trained jointly by minimizing a weighted sum of the target task loss and the loss-prediction loss, L_target + λ · L_loss-pred. After training, the predicted loss is used to rank the unlabeled pool: the samples with the highest predicted loss are sent for annotation.

Loss Prediction: Architecture

Each chosen hidden layer of the target model feeds its own branch; the branch outputs are concatenated, and a final fully connected layer produces the scalar predicted loss (see the sketch after this list). Each branch is:

  • GAP: Global Average Pooling
  • FC: Fully Connected Layer
  • ReLU
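
A minimal PyTorch sketch of such a module, assuming four feature maps are tapped from the target network; the channel sizes here are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class LossPredictionModule(nn.Module):
    """Maps intermediate feature maps of the target model to one scalar predicted loss."""
    def __init__(self, feature_channels=(64, 128, 256, 512), hidden=128):
        super().__init__()
        # One GAP -> FC -> ReLU branch per tapped feature map
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(1),  # GAP: one value per channel
                nn.Flatten(),
                nn.Linear(c, hidden),     # FC
                nn.ReLU(),
            )
            for c in feature_channels
        ])
        # Concatenated branch outputs -> final FC -> scalar predicted loss
        self.head = nn.Linear(hidden * len(feature_channels), 1)

    def forward(self, features):
        # features: list of (B, C_i, H_i, W_i) tensors taken from the target model
        h = torch.cat([b(f) for b, f in zip(self.branches, features)], dim=1)
        return self.head(h).squeeze(1)    # (B,) predicted losses
```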

Method to Learn the Loss

  • Limitation of the MSE loss function
    • The scale of the target loss shrinks as training progresses, so with MSE the prediction module merely tracks this changing scale instead of learning which samples have relatively high loss

Margin Loss

To ignore the overall scale, samples are compared in pairs: for a pair (i, j), the loss-prediction loss is max(0, −sign(l_i − l_j) · (l̂_i − l̂_j) + ξ), where l is the true target loss, l̂ is the predicted loss, and ξ is a margin. Only the ordering within each pair matters, not the absolute loss values.
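
A PyTorch sketch of this pairwise loss, splitting a mini-batch of size B into B/2 pairs as the paper does (B is assumed even, and the true losses are detached so no gradient flows back through them):

```python
import torch

def loss_prediction_loss(pred_loss, target_loss, xi=1.0):
    """Pairwise margin (ranking) loss between predicted and true per-sample losses.
    pred_loss, target_loss: (B,) tensors with B even."""
    B = pred_loss.size(0)
    pred_i, pred_j = pred_loss[:B // 2], pred_loss[B // 2:]
    tgt_i, tgt_j = target_loss[:B // 2].detach(), target_loss[B // 2:].detach()
    sign = torch.sign(tgt_i - tgt_j)  # +1 if sample i truly has the larger loss, else -1
    # Penalize pairs whose predicted ordering violates the true ordering by margin xi
    return torch.clamp(xi - sign * (pred_i - pred_j), min=0).mean()
```

During training, the target task loss and this loss-prediction loss are minimized together.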

Loss Prediction: Evaluation

The module is evaluated by how accurately its predicted loss ranks samples against the true target loss.

Image Classification

In the paper, experiments on CIFAR-10 show the loss prediction module outperforming random and entropy-based sampling.

Object Detection

In the paper, experiments on PASCAL VOC 2007+2012 with an SSD detector show the same trend.

Human Pose Estimation

In the paper, experiments on MPII with a Stacked Hourglass Network show the same trend.


Limitation

Note

Based on the lecture by 나동빈.
