Relation Modeling and Distillation for Learning with Noisy Labels

Xiaming Che, Junlin Zhang, Zhuang Qi, Xin Qi · May 30, 2024

Summary

RMDNet is a framework for learning with noisy labels that combines relation modeling with knowledge distillation. To curb overfitting to corrupted annotations, it pairs a self-supervised relation modeling (RM) module, which extracts noise-resistant features through contrastive learning, with an RGRL module that calibrates the learned representations. By leveraging self-supervised learning and distillation to capture latent associations among samples, RMDNet improves model robustness. The framework is versatile, can be combined with other methods, and outperforms existing approaches on two datasets, learning more discriminative representations despite noisy annotations. Key contributions include a noise mitigation strategy, compatibility with existing methods, and the identification of representation error as a crucial factor for robustness. The paper also surveys related techniques for handling noisy labels, including robust loss functions, noise filtering, and contrastive learning, and presents empirical evidence of RMDNet's effectiveness.
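The contrastive learning used for noise-resistant feature extraction can be illustrated with a minimal InfoNCE-style loss over two augmented views of the same batch. This is a sketch of the general technique, not the paper's exact formulation; the function name, embedding shapes, and temperature value are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE contrastive loss between two augmented views.

    z1, z2: (N, D) arrays of embeddings for the same N samples under
    two augmentations; matching rows are treated as positive pairs,
    all other rows in the batch act as negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) similarity matrix
    # Diagonal entries are the positives; apply a numerically stable
    # softmax cross-entropy per row.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))
```

Because the loss depends only on similarities between samples, not on their (possibly wrong) labels, minimizing it encourages label-noise-resistant representations, which is the general motivation for contrastive objectives in this setting.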


Introduction
Background
Objective
Method
Data Collection
Noise Modeling and Detection
Contrastive Learning for Noise-resistant Feature Extraction
Regularization and Gradient-based Noise Reduction
Data Preprocessing
Noise Mitigation Strategy
Model Architecture
RMDNet Framework
Training and Evaluation
Experiment Setup
Results and Analysis
Conclusion
References
Insights
What is RMDNet primarily designed for?
How does RMDNet address overfitting to noisy data?
What are the key components of RMDNet's framework?
How does RMDNet improve discriminative representation learning in the presence of noisy annotations?