Representation Learning
Some well-explained blog articles on Representation Learning.
Self-Supervised Representation Learning
Reference: https://lilianweng.github.io/lil-log/2019/11/10/self-supervised-learning.html
Self-supervised learning opens up a huge opportunity to make better use of unlabelled data while still training with supervised-style objectives. The post covers many interesting self-supervised learning tasks on images, videos, and control problems.
The Illustrated SimCLR Framework
Reference: https://amitness.com/2020/03/illustrated-simclr/
In recent years, numerous self-supervised learning methods have been proposed for learning image representations, each improving on the last, yet their performance still lagged behind their supervised counterparts.
This changed when Chen et al. introduced SimCLR in their research paper “A Simple Framework for Contrastive Learning of Visual Representations”. SimCLR not only improves upon the previous state-of-the-art self-supervised learning methods but also beats the supervised baseline on ImageNet classification when the architecture is scaled up.
The article uses diagrams to explain the key ideas of the framework proposed in the paper.
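To make the contrastive objective behind SimCLR concrete, here is a minimal PyTorch sketch of an NT-Xent-style loss over two augmented views of the same batch. It is an illustrative assumption, not code from the paper or the article: the function name, batch shapes, and the temperature value are made up for the example.

```python
# Minimal sketch of a SimCLR-style NT-Xent contrastive loss (illustrative only).
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) projections of two augmented views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-norm embeddings
    sim = torch.mm(z, z.t()) / temperature                # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))                     # exclude self-similarity as a candidate
    # For row i, the positive is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage: embeddings produced by an encoder + projection head for two augmentations.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)
```

The loss treats the other view of each image as the positive and every other embedding in the batch as a negative, which is the core idea the article illustrates with diagrams.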