Computer Vision Tutorial

Introduction


Everything we have studied so far is supervised learning: we have data x and labels y, and the goal is to learn a function that maps data x to label y. Labels can take many forms.
Typical supervised learning tasks: in image classification, the input is an image and the output is its class; in object detection, the input is an image and the output is a bounding box around each target object; in semantic segmentation, every pixel is assigned a label.
Lecture 13 of CS231n introduces unsupervised learning and some basics of generative models.
The focus of this article:

Unsupervised learning
Generative models

PixelRNN / PixelCNN
Variational Autoencoders (VAE)
Generative Adversarial Networks (GANs)



1. Unsupervised Learning


In unsupervised learning we only have unlabeled training data, and the goal is to learn the structure hidden in the data. Because no labels are required, collecting data is comparatively cheap. Typical unsupervised learning algorithms include the following.
1.1 Clustering (k-Means)

For more details on clustering algorithms, you can also refer to this ShowMeAI article:

Illustrated Machine Learning Tutorial: Clustering Algorithms Explained


Clustering finds groups of data points that are similar under some distance measure. k-Means randomly initializes k cluster centers, assigns each sample to its nearest center, and then updates each center from the samples assigned to it. This process is repeated until convergence (the centers no longer change).
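
As a concrete sketch of this loop (a minimal NumPy implementation; the function and data here are illustrative, not from the lecture):

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Minimal k-Means: X is (N, D); returns centers (k, D) and labels (N,)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random initial centers
    for _ in range(n_iters):
        # Assign each sample to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Update each center to the mean of the samples assigned to it.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):   # converged: centers stopped moving
            break
        centers = new_centers
    return centers, labels

# Example: cluster 200 random 2-D points into 3 groups.
X = np.random.default_rng(1).normal(size=(200, 2))
centers, labels = kmeans(X, k=3)
```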

1.2 PCA (Principal Component Analysis)

For more details on the PCA dimensionality reduction algorithm, you can also refer to this ShowMeAI article:

Illustrated Machine Learning Tutorial: Dimensionality Reduction Algorithms Explained


Dimensionality reduction: find the projection directions (axes) along which the variance of the projected training data is largest. These axes capture part of the underlying structure of the data, and we can use them to reduce the data's dimensionality while keeping the dimensions with the largest variance.
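
A small sketch of PCA via the SVD (illustrative code, assuming the data fits in memory):

```python
import numpy as np

def pca(X, n_components):
    """Project X (N, D) onto the n_components directions of largest variance."""
    X_centered = X - X.mean(axis=0)             # center the data first
    # Rows of Vt are the principal axes, ordered by decreasing variance.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:n_components]              # (n_components, D)
    X_reduced = X_centered @ components.T       # (N, n_components)
    return X_reduced, components

# Example: reduce 5-D data to its 2 highest-variance directions.
X = np.random.default_rng(0).normal(size=(100, 5))
X2d, axes = pca(X, n_components=2)
```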

1.3 Feature Learning
We also have feature learning methods such as autoencoders, which learn a compact feature representation by compressing the input to a low-dimensional code and then reconstructing the input from that code.
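
As a sketch of the idea (a tiny PyTorch autoencoder; the layer sizes and names are illustrative, not from the lecture):

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Tiny fully connected autoencoder: 784 -> 32 -> 784."""
    def __init__(self, in_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        code = self.encoder(x)           # learned low-dimensional feature
        return self.decoder(code)        # reconstruction of the input

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                  # a fake batch standing in for flattened images
loss = nn.functional.mse_loss(model(x), x)   # reconstruction loss, no labels needed
loss.backward()
opt.step()
```

Training only needs the inputs themselves, which is what makes this an unsupervised feature learning method.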

1.4 Density Estimation
Density estimation is another unsupervised task: we estimate the underlying distribution of the data. For example, given some one-dimensional or two-dimensional points, we can fit a Gaussian to their density, as shown in the figure below.
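
A minimal sketch of fitting a one-dimensional Gaussian density (synthetic data, SciPy's maximum-likelihood fit):

```python
import numpy as np
from scipy.stats import norm

# Synthetic 1-D samples standing in for observed data.
samples = np.random.default_rng(0).normal(loc=2.0, scale=0.5, size=1000)

# Fit a Gaussian by maximum likelihood: returns the mean and standard deviation.
mu, sigma = norm.fit(samples)

# Evaluate the estimated density on a grid of points.
xs = np.linspace(samples.min(), samples.max(), 200)
density = norm.pdf(xs, loc=mu, scale=sigma)
print(f"estimated mu={mu:.3f}, sigma={sigma:.3f}")
```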
