Tags: Course Project, Machine Learning, Pattern Recognition, Matlab


Project 1: skin detection

This is an individual project.

Report and Source Code

Introduction

The goal of this project is to develop a pattern recognition system that classifies pixels in images as skin or non-skin. Such a system can be used by face detection systems to locate candidate face regions in pictures. The classifier works with three color representation systems: RGB, YIQ, and HSV.

Method

In the classification process I use two classification methods, naive Bayes and Bayesian decision. For each method, one classifier is trained with features from each of the three color models. I then compare these classifiers to determine which color system best discriminates skin from non-skin pixels in images.

Naive Bayes method

In the naive Bayes method, the features \( X = \left[ x_1, x_2, x_3 \right] \) of a color model are assumed to be conditionally independent given the class, so the class-conditional density factorizes as \( P(X|w_i) = \prod_{j=1}^{3} P(x_j|w_i) \). By Bayes' rule, \( P(w_i|X) = \frac{ P(X|w_i)P(w_i) }{ P(X) } \propto P(w_i) \prod_{j=1}^{3} P(x_j|w_i) \), which is the quantity we use for classification. Let \( w_1 \) denote skin and \( w_2 \) denote non-skin. For each pixel X we compare \( P(w_1 | X) \) and \( P(w_2 | X) \) and assign X to the class with the larger value.
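
For illustration, here is a minimal Matlab sketch of this decision rule. The names (nbClassify, hSkin, hNonSkin, pSkin, pNonSkin) are illustrative, not from the project; it assumes each channel value has been quantized to an integer in 0..255 and that per-channel likelihood histograms and class priors were estimated from labeled training pixels beforehand.

  % Classify one pixel X = [x1 x2 x3] (each channel quantized to 0..255).
  % hSkin and hNonSkin are 3x256 matrices of per-channel likelihoods
  % P(x_j | w_i); pSkin and pNonSkin are the class priors.
  function isSkin = nbClassify(X, hSkin, hNonSkin, pSkin, pNonSkin)
    postSkin = pSkin;          % accumulates P(w_1) * prod_j P(x_j | w_1)
    postNonSkin = pNonSkin;    % accumulates P(w_2) * prod_j P(x_j | w_2)
    for j = 1:3
      bin = X(j) + 1;          % Matlab arrays are 1-indexed
      postSkin = postSkin * hSkin(j, bin);
      postNonSkin = postNonSkin * hNonSkin(j, bin);
    end
    isSkin = postSkin > postNonSkin;   % assign to the larger posterior
  end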

Bayesian decision method

The Bayesian decision classifier is more elaborate than the naive Bayes method: it models the mean and the full covariance of the features in each class. The classification model has two classes, skin and non-skin, and each class has its own mean \( \mu_i \) and covariance \( \Sigma_i \). We use the following discriminant function: \[ g_i(X) = -\frac{1}{2} \left( X - \mu_i \right)^\textrm{T} \Sigma_i^{-1} \left( X - \mu_i \right) - \frac{d}{2} \ln 2 \pi - \frac{1}{2} \ln | \Sigma_i | + \ln P(w_i) \] We calculate \( g_1(X) \) and \( g_2(X) \) for the two classes and classify X to the class with the larger value.
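
A minimal Matlab sketch of this discriminant (function and variable names are illustrative); mu, Sigma, and p are the class mean, covariance, and prior estimated from the training pixels:

  % Quadratic Gaussian discriminant g_i(X) for one class.
  % X and mu are 3x1 feature vectors; Sigma is the 3x3 class covariance;
  % p is the class prior P(w_i).
  function g = gaussDiscriminant(X, mu, Sigma, p)
    d = numel(mu);
    diff = X(:) - mu(:);
    g = -0.5 * (diff' / Sigma) * diff ...   % Mahalanobis distance term
        - (d / 2) * log(2 * pi) ...
        - 0.5 * log(det(Sigma)) ...
        + log(p);                           % note the + sign on the prior
  end
  % A pixel X is labeled skin when
  % gaussDiscriminant(X, mu1, Sigma1, p1) > gaussDiscriminant(X, mu2, Sigma2, p2).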

Result

After some statistical analysis of the training data, we train six classifiers: three use the naive Bayes method and three use the Bayesian decision method. Within each method, one classifier is trained with features from each color representation system. The classifiers are tested on a set of images; a few results are listed here, and the performance evaluations on the collected test set are shown below.

Conclusion

Comparing the naive Bayes and Bayesian decision methods, we find that for each color space the Bayesian decision classifier outperforms the naive Bayes classifier. The Bayesian decision method based on the YIQ color space gives the best result among these classifiers.

Project 2: digit recognition

This is an individual project.

Report and Source Code

Introduction

In this project, we use three classification models to develop three pattern recognition systems. The objective of each system is to identify a digit displayed on a dot matrix with 7 rows and 5 columns. However, some LEDs on the matrix are defective: a defective LED stays dark when it should be on, or lights up when it should be off. As a result, the digit shown on the matrix is not rendered faithfully.

Method

To build the system, we consider three machine learning models:
  1. 1-Nearest Neighbor classifier (1-NN); a short sketch follows this list
  2. Back-propagation (BP) network
  3. Decision tree
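
As a sketch of the 1-NN model, here is a minimal Matlab implementation using Hamming distance, a natural metric for 0/1 LED vectors (the project report may use a different one; names are illustrative):

  % 1-NN classification: each sample is a 35-element 0/1 row vector
  % (the flattened 7x5 matrix). trainX is Ntrain x 35, trainY holds
  % the digit labels, testX is a single 1x35 query.
  function label = oneNN(testX, trainX, trainY)
    % Hamming distance to every training sample (implicit expansion, R2016b+)
    dists = sum(abs(trainX - testX), 2);
    [~, idx] = min(dists);     % index of the nearest neighbor
    label = trainY(idx);
  end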

For the training set, we plan to generate 1000 samples; for the testing set, 200 samples. We try three different values of the error probability p, which controls how likely each LED is to be defective.
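
A minimal sketch of how such noisy samples might be generated, assuming a 10x35 0/1 matrix templates that holds the ideal dot patterns of the digits 0-9 (the templates themselves are not reproduced here):

  % Generate n noisy samples: pick a random digit template and flip
  % each of its 35 LEDs independently with probability p.
  function [X, y] = genSamples(n, p, templates)
    idx = randi(10, n, 1);       % one random digit per sample (rows 1..10)
    X = templates(idx, :);       % start from the ideal patterns
    flips = rand(n, 35) < p;     % each LED is defective with probability p
    X = xor(X, flips);           % flip the defective LEDs
    y = idx - 1;                 % row 1..10 maps to digit 0..9
  end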

The following aspects are taken care of in the project:

Result

The results show that the BP network with 12 hidden nodes has the best precision, while the decision tree performs worst. Another observation is the trend of precision as the error probability p increases: the precision of every classifier drops as p grows.
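
For reference, a sketch of how such a BP network can be trained in Matlab, assuming the Neural Network Toolbox is available and the samples are arranged one per column (X is the 35xN input matrix, T the 10xN one-hot label matrix; names are illustrative):

  net = patternnet(12);            % one hidden layer with 12 nodes
  net = train(net, X, T);          % back-propagation training
  scores = net(Xtest);             % 10xM class scores for the test set
  [~, pred] = max(scores, [], 1);  % predicted digit = row with max score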

We then use the PCA method to reduce the original 35 dimensions to 10. We plot each pair of the first 3 dimensions in the figure below; in each subfigure, samples are plotted against one pair of dimensions. From the scatter plots we can see that some patterns are easy to identify because they form isolated clusters, while others are mixed together in the space and are difficult to discriminate.
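
A minimal sketch of this reduction using an SVD in base Matlab (the Statistics Toolbox function pca would work equally well), assuming X is the N x 35 sample matrix and y the label vector:

  Xc = X - mean(X, 1);           % center the data (implicit expansion)
  [~, ~, V] = svd(Xc, 'econ');   % columns of V are the principal directions
  Z = Xc * V(:, 1:10);           % N x 10 projected samples
  scatter(Z(:, 1), Z(:, 2), 10, y);   % first two dimensions, colored by digit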

The precisions of the classifiers based on PCA are shown in the figure below. The results show that the PCA-based classifiers perform better than the non-PCA classifiers when the LEDs have a low probability of being defective. As p increases, the BP network is the most robust, showing the best performance at high error probabilities.