Personal website
On this page, we address various machine learning problems.
Imagine a system that works like a black box: we can feed it some input and observe the corresponding output. Regression and classification are two important problems in adaptive filtering and supervised machine learning; both aim to model the behavior of this system and predict its output. The main difference between the two problems is the nature of the output: in regression it is a continuous quantity, whereas in classification it belongs to a finite set of discrete classes.
In this part, we will mainly focus on regression, but we will also deal with a simplified classification problem, seen as a special case of regression. Supervised learning algorithms work in two stages: a training stage, where the model parameters are learned from examples of inputs and corresponding outputs, and a prediction stage, where the trained model is used to predict the output for new inputs. We will emphasize this machine learning aspect in our examples.
In this document, we present the simplest regression model: linear regression. We introduce the model, derive the exact least-squares solution and the recursive least squares (RLS) algorithm, extend the model to some variants, and apply these results to autoregressive signals.
Code for linear regression:
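As an illustration of the technique, here is a minimal NumPy sketch of the exact least-squares solution (the function name fit_linear and the toy data are ours, not taken from the linked code):

```python
import numpy as np

def fit_linear(X, y):
    """Exact least-squares fit of y ~ X w + b (fit_linear is an illustrative name)."""
    # Append a column of ones so the intercept is estimated jointly with the weights.
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    # lstsq solves the least-squares problem in a numerically stable way.
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs[:-1], coeffs[-1]

# Toy check: recover y = 2x + 1 from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(100, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.05 * rng.standard_normal(100)
w, b = fit_linear(X, y)
print(w, b)  # approximately [2.] and 1.
```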
Code for polynomial regression:
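A similar sketch for polynomial regression, which reduces to linear regression in the monomial features (again, fit_polynomial and the toy data are illustrative):

```python
import numpy as np

def fit_polynomial(x, y, degree):
    """Least-squares fit of y ~ w0 + w1 x + ... + wd x^d."""
    # The Vandermonde matrix turns polynomial fitting into linear regression
    # on the features 1, x, ..., x^d.
    A = np.vander(x, degree + 1, increasing=True)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = 1.0 - x + 0.5 * x**2 + 0.02 * rng.standard_normal(x.size)
print(fit_polynomial(x, y, degree=2))  # approximately [1., -1., 0.5]
```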
Code for the weighted RLS algorithm applied to speech signals:
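A sketch of the exponentially weighted RLS recursion (the RLS class is ours, and a sinusoid stands in for a speech signal here):

```python
import numpy as np

class RLS:
    """Exponentially weighted recursive least squares (forgetting factor lam)."""

    def __init__(self, n, lam=0.98, delta=100.0):
        self.w = np.zeros(n)        # current weight estimate
        self.P = delta * np.eye(n)  # estimate of the inverse correlation matrix
        self.lam = lam

    def update(self, x, d):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)  # gain vector
        e = d - self.w @ x            # a-priori error
        self.w = self.w + k * e       # weight update
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e

# Toy use: one-step linear prediction of a signal from its two past samples.
s = np.sin(0.1 * np.arange(500))
f = RLS(n=2)
for i in range(2, len(s)):
    f.update(np.array([s[i - 1], s[i - 2]]), s[i])
print(f.w)  # approximately [2 cos(0.1), -1], the AR(2) model of a sinusoid
```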
In this document, we introduce Kalman filters as a generalization of linear regression, derive their update equations and the corresponding algorithm, and apply them to some examples.
Code for Kalman filters:
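A minimal sketch of the predict/update cycle (kalman_step and the constant-velocity toy model are our illustrative choices, not the linked code):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict: propagate the state estimate and its covariance through the dynamics.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement z.
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy example: track position and velocity from noisy position measurements.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity dynamics
H = np.array([[1.0, 0.0]])             # only the position is measured
Q = 1e-4 * np.eye(2)
R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
for k in range(50):
    z = np.array([0.5 * k + 0.5 * rng.standard_normal()])
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(x)  # position near 24.5, velocity near 0.5
```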
In this document, we extend the framework to non-linear regression, discuss the Newton-Raphson method and gradient descent, and apply these results to a simplified version of neural networks: the single-neuron classifier.
Code for single-neuron classifiers:
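A sketch of a single sigmoid neuron trained by gradient descent on the cross-entropy loss (train_neuron and the toy OR problem are illustrative):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_neuron(X, y, lr=0.5, epochs=2000):
    """Single sigmoid neuron trained by gradient descent on the cross-entropy loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)  # predicted probability of class 1
        g = p - y               # gradient of the loss w.r.t. the pre-activation
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Linearly separable toy problem: logical OR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])
w, b = train_neuron(X, y)
print((sigmoid(X @ w + b) > 0.5).astype(int))  # [0 1 1 1]
```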
In this document, we present the general structure of multilayer perceptrons, and derive the feed-forward and back-propagation equations and algorithms.
Code for multilayer perceptrons:
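A self-contained sketch of a small multilayer perceptron trained by back-propagation on the XOR problem (the architecture and hyperparameters are our choice for illustration):

```python
import numpy as np

# XOR is not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((2, 4)), np.zeros(4)  # hidden layer: 4 tanh units
W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)  # output layer: 1 sigmoid unit
lr = 0.5

for _ in range(5000):
    # Feed-forward pass.
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # Back-propagation of the cross-entropy gradient.
    d2 = p - y                       # output-layer error
    d1 = (d2 @ W2.T) * (1.0 - h**2)  # hidden-layer error (tanh derivative)
    W2 -= lr * h.T @ d2 / len(X); b2 -= lr * d2.mean(0)
    W1 -= lr * X.T @ d1 / len(X); b1 -= lr * d1.mean(0)

print((p > 0.5).astype(int).ravel())  # expected after training: [0 1 1 0]
```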
We use this code to generate neural activation maps of multilayer perceptrons trained on the XOR and chessboard problems. These maps are displayed in the following pages:
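As an illustration of how such a map can be produced, the following sketch evaluates the network trained in the previous block on a grid of inputs (it reuses W1, b1, W2, b2 from that sketch; the plotting details are our assumptions, using matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt

# Reuses W1, b1, W2, b2 from the trained XOR network in the previous sketch.
xs = np.linspace(-0.5, 1.5, 200)
gx, gy = np.meshgrid(xs, xs)
grid = np.column_stack([gx.ravel(), gy.ravel()])
# Feed-forward pass on every point of the input grid.
act = 1.0 / (1.0 + np.exp(-(np.tanh(grid @ W1 + b1) @ W2 + b2)))
plt.imshow(act.reshape(gx.shape), origin="lower", extent=[-0.5, 1.5, -0.5, 1.5])
plt.title("Output activation of the XOR network")
plt.colorbar()
plt.show()
```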