[Paper Review] Learning Loss for Active Learning (CVPR 2019)
Background
We need a large dataset to train a deep learning model:
- Low cost: collecting images (unlabeled dataset)
- High cost: annotating images (labeled dataset)
Semi-Supervised Learning vs. Active Learning
- Semi-Supervised Learning
  - How to exploit an unlabeled dataset together with the labeled data during training
- Active Learning
  - Which data to label for better performance
Passive Learning
Related Research
① Least Confident Sampling
② Margin Sampling
③ Entropy Sampling
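All three strategies above can be computed directly from the softmax output of the current model. A minimal NumPy sketch (the function name and sign conventions are my own; higher score means more uncertain):

```python
import numpy as np

def uncertainty_scores(probs: np.ndarray) -> dict:
    """Classic uncertainty scores from softmax outputs.

    probs: (N, C) array of class probabilities for N unlabeled samples.
    Higher score = more uncertain = better candidate for labeling.
    """
    # 1) Least confident: 1 - max class probability
    least_confident = 1.0 - probs.max(axis=1)

    # 2) Margin: gap between the top-2 probabilities, negated so that
    #    a *small* margin yields a *high* uncertainty score
    top2 = np.sort(probs, axis=1)[:, -2:]
    margin = -(top2[:, 1] - top2[:, 0])

    # 3) Entropy: -sum p log p (maximal for a uniform distribution)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)

    return {"least_confident": least_confident,
            "margin": margin,
            "entropy": entropy}
```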
Active Learning
- Example: selecting which Welsh Corgi image is worth labeling so that the model trains more effectively
Active Learning Process
- Random Sampling
  - No observable change in the decision boundary after the newly labeled data is added
- Active Learning
  - Trains the model much faster by selecting informative samples near the decision boundary (see the loop sketch below)
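A generic skeleton of this query loop might look as follows (all names here are placeholders of my own, not code from the paper):

```python
import numpy as np

def active_learning_loop(pool, score_fn, label_fn, budget, n_rounds):
    """Generic active-learning skeleton (illustrative, not the paper's code).

    pool:     list of unlabeled items
    score_fn: item -> informativeness score (e.g., a predicted loss)
    label_fn: items -> labeled items (stands in for the human annotator)
    """
    labeled = []
    for _ in range(n_rounds):
        # 1) Score every item remaining in the unlabeled pool.
        scores = np.array([score_fn(x) for x in pool])
        # 2) Query the top-`budget` most informative items.
        order = np.argsort(scores)[::-1][:budget]
        queried = [pool[i] for i in order]
        # 3) The oracle labels them; move them to the labeled set.
        labeled += label_fn(queried)
        pool = [x for i, x in enumerate(pool) if i not in set(order.tolist())]
        # 4) Retrain the target model on `labeled` before the next round.
    return labeled, pool
```

For example, `active_learning_loop(list(range(100)), score_fn=lambda x: x % 7, label_fn=lambda xs: xs, budget=5, n_rounds=3)` runs the skeleton end to end; in this paper, `score_fn` is the predicted loss described below.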
Loss Prediction Module
- Predicts the loss for a single unlabeled sample
- A much smaller network than the target model; trained jointly with the target model
- No need to hand-design an uncertainty measure
- Task-agnostic: e.g., classification, regression, object detection
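At query time, the module lets us rank unlabeled samples without any ground-truth labels. A sketch of that selection step, assuming the target model also returns its intermediate feature maps (the loader format and both model interfaces are assumptions for illustration):

```python
import torch

@torch.no_grad()
def select_by_predicted_loss(target_model, loss_module, unlabeled_loader, k):
    """Rank unlabeled samples by their *predicted* loss and return the
    dataset indices of the top-k candidates to send to the annotator.

    `loss_module` maps intermediate features of the target model to a
    scalar per sample; no ground-truth label is needed at query time.
    """
    scores, indices = [], []
    for idx, x in unlabeled_loader:       # assumed (index, image) batches
        _, features = target_model(x)     # assumed to also return features
        pred_loss = loss_module(features)  # (B, 1) predicted losses
        scores.append(pred_loss.squeeze(1))
        indices.append(idx)
    scores = torch.cat(scores)
    indices = torch.cat(indices)
    topk = scores.topk(k).indices
    return indices[topk]
```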
Loss Prediction: How Does It Work?
Loss Prediction: Architecture
- GAP: Global Avg. Pooling
- FC: Fully Connected Layers
- ReLU
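Putting those pieces together, the module applies GAP, an FC layer, and ReLU to each feature level of the target model, concatenates the results, and maps them to a scalar predicted loss. A PyTorch sketch (the default channel sizes here assume a ResNet-18-style backbone):

```python
import torch
import torch.nn as nn

class LossPredictionModule(nn.Module):
    """GAP -> FC -> ReLU per feature level, concatenated, then one FC
    producing a scalar predicted loss. Exact channel counts depend on
    the target backbone."""

    def __init__(self, feature_channels=(64, 128, 256, 512), mid_dim=128):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)  # Global Average Pooling
        self.fcs = nn.ModuleList(
            [nn.Linear(c, mid_dim) for c in feature_channels])
        self.relu = nn.ReLU()
        self.out = nn.Linear(mid_dim * len(feature_channels), 1)

    def forward(self, features):
        # `features`: list of feature maps, one per level of the target model
        hs = []
        for f, fc in zip(features, self.fcs):
            h = self.gap(f).flatten(1)   # (B, C)
            hs.append(self.relu(fc(h)))  # (B, mid_dim)
        return self.out(torch.cat(hs, dim=1))  # (B, 1) predicted loss
```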
Method to Learn the Loss
- Limitation of the MSE loss function
  - The target loss itself decreases as training proceeds, so a predicted loss trained with MSE merely chases the shrinking scale of the target loss instead of learning which samples are relatively hard
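Written out, the naive objective would regress the predicted loss $\hat{l}$ directly onto the target loss $l$:

$$ L_{\text{loss}}(\hat{l}, l) = (\hat{l} - l)^2 $$

Because $l$ is a moving target that shrinks as the model converges, this scale-chasing is exactly what the margin loss below avoids.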
Margin Loss
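Instead of regressing the absolute loss value, the paper compares pairs of samples within a mini-batch and only trains the module to order them correctly by at least a margin ξ. A PyTorch sketch (the even/odd pairing below is my own simplification; the paper splits the batch in half to form pairs):

```python
import torch

def margin_ranking_loss(pred_loss, target_loss, xi=1.0):
    """Pairwise margin loss: the module is trained only to predict which
    sample of each pair has the larger target loss.

    pred_loss:   (2B,) predicted losses from the loss prediction module
    target_loss: (2B,) true losses of the target model (detach before passing)
    xi:          margin hyperparameter
    """
    # Split the batch into pairs (i, j)
    p_i, p_j = pred_loss[0::2], pred_loss[1::2]
    t_i, t_j = target_loss[0::2], target_loss[1::2]

    # sign = +1 if l_i > l_j else -1
    # (torch.sign returns 0 for exact ties; acceptable for a sketch)
    sign = torch.sign(t_i - t_j)

    # max(0, -sign * (lhat_i - lhat_j) + xi): penalize a wrong ordering,
    # and require the correct ordering by at least the margin xi.
    return torch.clamp(-sign * (p_i - p_j) + xi, min=0).mean()
```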
Loss Prediction: Evaluation
Image Classification
Object Detection
Human Pose Estimation
Limitation
Note
- Reference: 나동빈