1 minute read

Learning loss for active learning


Paper

Background

We need a large dataset to train a deep learning model


  • Low cost: image collection (unlabeled dataset)
  • High cost: image annotation (labeled dataset)

Semi-Supervised Learning vs. Active Learning

  • Semi-Supervised Learning
    • How to exploit the unlabeled dataset together with the labeled data during training
  • Active Learning
    • Which data should be labeled for better performance

Passive Learning



Related Research

① Least Confident Sampling


② Margin Sampling


③ Entropy Sampling

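The three classical uncertainty measures above can all be computed directly from a model's softmax output. A minimal sketch (function names are mine, not from the paper):

```python
import numpy as np

def least_confident(probs):
    # Uncertainty = 1 - max predicted probability (higher = more uncertain).
    return 1.0 - probs.max(axis=1)

def margin(probs):
    # Gap between the two most likely classes (smaller = more uncertain).
    top2 = np.sort(probs, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

def entropy(probs, eps=1e-12):
    # Shannon entropy of the predictive distribution (higher = more uncertain).
    return -(probs * np.log(probs + eps)).sum(axis=1)

# Two softmax outputs: the first is confident, the second is ambiguous.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25]])
```

All three agree that the second sample is the one worth sending to the annotator; they differ in how much of the distribution they look at (only the top class, the top two, or all classes).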


Active Learning


  • Selecting the Welsh corgi image leads to better performance when training the model

Active Learning Process


  • Random Sampling
    • No observable change in the decision boundary after adding labeled data
  • Active Learning
    • Trains the model much faster by selecting informative data near the decision boundary
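The pool-based query loop behind this comparison can be sketched as follows. Everything here is a toy illustration, not the paper's setup: the "model" is a stand-in, and the acquisition function simply scores proximity to a known boundary at x = 0.5.

```python
import numpy as np

def active_learning_loop(X_pool, y_pool, train, acquire, rounds=3, batch=2):
    # Pool-based active learning sketch: start from a small labeled seed,
    # then repeatedly query the "oracle" (here: y_pool) for the samples
    # the acquisition function scores as most informative.
    labeled = list(range(batch))                      # seed labeled set
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    model = train(X_pool[labeled], y_pool[labeled])
    for _ in range(rounds):
        scores = acquire(model, X_pool[unlabeled])
        picked = [unlabeled[i] for i in np.argsort(scores)[-batch:]]
        labeled += picked                             # oracle labels them
        unlabeled = [i for i in unlabeled if i not in picked]
        model = train(X_pool[labeled], y_pool[labeled])
    return model, labeled

# Toy 1-D task with a decision boundary at x = 0.5: "uncertainty" is just
# distance to the boundary, so points near 0.5 get queried first.
X = np.linspace(0, 1, 10).reshape(-1, 1)
y = (X.ravel() > 0.5).astype(int)
train = lambda Xl, yl: (Xl, yl)                        # stand-in "model"
acquire = lambda model, Xu: -np.abs(Xu.ravel() - 0.5)  # higher = closer
model, labeled = active_learning_loop(X, y, train, acquire)
```

With random sampling the queried points would scatter uniformly; here the loop concentrates the labeling budget around the boundary, which is what makes active learning converge faster.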

Loss Prediction Module


  • Predicts the loss for a single unlabeled data point
  • A much smaller network than the target model; trained jointly with the target model
  • No need to hand-design an uncertainty measure
  • Task-agnostic: e.g., classification, regression, object detection

Loss Prediction: How Does It Work?


Loss Prediction: Architecture


  • GAP: Global Average Pooling
  • FC: Fully Connected Layer
  • ReLU: Rectified Linear Unit activation
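The forward pass of such a module can be sketched in plain NumPy. The per-branch GAP → FC → ReLU and the final concatenation + FC follow the blocks listed above; the specific channel counts and hidden dimension below are hypothetical, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def gap(fmap):
    # Global Average Pooling: collapse a (C, H, W) feature map to a (C,) vector.
    return fmap.mean(axis=(1, 2))

def predict_loss(feature_maps, fc_weights, out_weights):
    # Each intermediate feature map goes through GAP -> FC -> ReLU;
    # the hidden vectors are concatenated and one final FC layer
    # maps them to a single scalar: the predicted loss.
    hidden = [np.maximum(gap(f) @ W, 0.0)
              for f, W in zip(feature_maps, fc_weights)]
    return float(np.concatenate(hidden) @ out_weights)

# Hypothetical sizes: two feature maps (8 and 16 channels), hidden dim 4.
fmaps = [rng.normal(size=(8, 5, 5)), rng.normal(size=(16, 3, 3))]
fc_ws = [rng.normal(size=(8, 4)), rng.normal(size=(16, 4))]
out_w = rng.normal(size=(8,))
pred = predict_loss(fmaps, fc_ws, out_w)
```

Tapping several intermediate feature maps, rather than only the last one, lets the module see both low-level and high-level information when estimating the loss.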

Method to Learn the Loss


  • MSE Loss Function Limitation
    • Since the target loss shrinks as training progresses, a predictor trained with MSE merely tracks the changing scale of the target loss instead of learning which samples have relatively higher loss

Margin Loss

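The margin loss sidesteps the MSE scale problem by comparing sample pairs: only the ordering of the two true losses matters, not their absolute values. A minimal sketch (function and argument names are mine):

```python
def pair_margin_loss(pred_i, pred_j, true_i, true_j, xi=1.0):
    # Compare SAMPLE PAIRS instead of absolute values: the predictor is
    # only pushed to rank pred_i above pred_j when true_i > true_j, with
    # margin xi, so the shrinking scale of the target loss no longer matters.
    sign = 1.0 if true_i > true_j else -1.0
    return max(0.0, -sign * (pred_i - pred_j) + xi)

# Correct ranking with a comfortable gap incurs no loss ...
no_penalty = pair_margin_loss(3.0, 1.0, true_i=2.0, true_j=0.5)
# ... while an inverted ranking is penalised.
penalty = pair_margin_loss(1.0, 3.0, true_i=2.0, true_j=0.5)
```

Because the gradient only depends on the sign of the true-loss difference, the predictor learns a ranking of samples, which is exactly what the acquisition step needs.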

Loss Prediction: Evaluation


Image Classification


Object Detection


Human Pose Estimation



Limitation


Note

Referenced from 나동빈
