Incremental Mixture Models

This page deals with the problem of incrementally building a generative model from data using positive as well as negative examples. We propose an incremental update rule for building one-dimensional Gaussian mixture models, based on Kernel Density Estimation, that supports learning from negative examples.
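To illustrate the general idea, here is a minimal sketch of a one-dimensional online KDE in which each observation contributes one Gaussian kernel, and negative examples contribute negative weight. This is only a toy illustration of the concept, not the update rule from the paper; the fixed bandwidth and the crude sign-based handling of negative examples are assumptions made for brevity.

```python
import math

class IncrementalKDE1D:
    """Toy 1-D online kernel density estimate (illustrative only,
    not the paper's incremental update rule)."""

    def __init__(self, bandwidth=0.5):
        self.h = bandwidth          # fixed kernel bandwidth (assumption)
        self.centers = []           # kernel means, one per observation
        self.weights = []           # +1 for positive, -1 for negative examples

    def update(self, x, positive=True):
        # Each observation adds one Gaussian kernel; negative examples
        # reduce density locally (a crude stand-in for the paper's rule).
        self.centers.append(x)
        self.weights.append(1.0 if positive else -1.0)

    def pdf(self, x):
        total = sum(self.weights)
        if total <= 0:
            return 0.0
        s = 0.0
        for c, w in zip(self.centers, self.weights):
            s += w * math.exp(-0.5 * ((x - c) / self.h) ** 2) \
                 / (self.h * math.sqrt(2 * math.pi))
        # clip at zero: negative kernels can push the raw sum below 0
        return max(s / total, 0.0)
```

Feeding the model positive examples near a mode and a negative example elsewhere suppresses the density around the negative example while keeping the estimate high near the positives.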
This page deals with the problem of designing an algorithm that makes as few assumptions as possible about the input data and allows building a generative model by observing only a single data point, or a few data points, at a time. The algorithm is based on a multivariate Kernel Density Estimator, uses a revitalization scheme, and is robust to data ordering.
We present an algorithm for building discriminative models by observing only a single data point, or a few data points, at a time. The algorithm is an extension of the multivariate online Kernel Density Estimator and uses a new measure of discrimination loss to determine how much a classifier can be compressed without degrading its performance.
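Compressing a mixture model typically means merging pairs of components into one. A standard primitive for this, sketched below for one-dimensional weighted Gaussian components, is moment matching: the merged component preserves the total weight, mean, and variance of the pair. This shows only the merge step, not the discrimination-loss criterion from the paper that decides which pairs may be merged.

```python
def merge_gaussians(w1, mu1, var1, w2, mu2, var2):
    """Moment-matching merge of two weighted 1-D Gaussian components.

    Returns the weight, mean, and variance of a single Gaussian with
    the same first two moments as the weighted pair."""
    w = w1 + w2
    mu = (w1 * mu1 + w2 * mu2) / w
    # total variance = weighted within-component variance
    #                + weighted spread of the component means
    var = (w1 * (var1 + (mu1 - mu) ** 2)
           + w2 * (var2 + (mu2 - mu) ** 2)) / w
    return w, mu, var
```

Merging two equally weighted unit-variance components at -1 and +1, for example, yields a single component at 0 with variance 2: the spread between the means is absorbed into the variance.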


NEW! Version 3.5 This is Matlab research code based on the papers “Online Kernel Density Estimation with Gaussian Kernels” and “Online Discriminative Kernel Density Estimation with Gaussian Kernels”. The code demonstrates estimation of a Gaussian mixture model from a stream of data.
The code is a minimal implementation of a batch kernel density estimator. Because it is based on our new bandwidth estimator, it allows KDE construction even from preclustered/compressed sets of samples and from weighted data. It is also a minimal demonstration of the general bandwidth estimator proposed in “Online Kernel Density Estimation with Gaussian Kernels”.
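As a rough picture of what a batch KDE on weighted samples computes, here is a one-dimensional sketch. The bandwidth below uses Silverman's rule of thumb, a standard textbook choice substituted here for illustration; the bandwidth estimator from the paper is different and is not reproduced.

```python
import math

def silverman_bandwidth(samples):
    # Silverman's rule of thumb for 1-D Gaussian kernels
    # (a standard choice; the paper proposes its own estimator).
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return (4.0 / (3.0 * n)) ** 0.2 * math.sqrt(var)

def kde_pdf(x, samples, weights=None, h=None):
    """Evaluate a weighted Gaussian-kernel density estimate at x.

    weights default to uniform 1/n; they should sum to one."""
    n = len(samples)
    if weights is None:
        weights = [1.0 / n] * n
    if h is None:
        h = silverman_bandwidth(samples)
    return sum(w * math.exp(-0.5 * ((x - c) / h) ** 2)
               / (h * math.sqrt(2 * math.pi))
               for c, w in zip(samples, weights))
```

Weighted samples make it possible to evaluate the estimate over compressed data: each cluster representative simply carries the total weight of the samples it replaced.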
This is demo code for the unscented Hellinger distance between a pair of Gaussian mixture models. The code follows the derivation of the multivariate unscented Hellinger distance introduced in [1]. Unlike the Kullback-Leibler divergence, the Hellinger distance is a proper metric between distributions and is constrained to the interval [0,1], with 0 meaning complete similarity and 1 complete dissimilarity.
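For intuition, the Hellinger distance is H = sqrt(1 - BC), where BC = ∫ sqrt(p(x) q(x)) dx is the Bhattacharyya coefficient. The sketch below computes it for two univariate Gaussians, once by plain numerical quadrature (a simple stand-in for the unscented sigma-point approximation of the demo code) and once from the known closed form for a Gaussian pair.

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) \
        / (sigma * math.sqrt(2 * math.pi))

def hellinger_numeric(mu1, s1, mu2, s2, lo=-20.0, hi=20.0, n=4000):
    # H^2 = 1 - BC, with BC approximated by simple quadrature
    # (not the unscented scheme from the demo code).
    dx = (hi - lo) / n
    bc = sum(math.sqrt(gauss_pdf(lo + i * dx, mu1, s1)
                       * gauss_pdf(lo + i * dx, mu2, s2)) * dx
             for i in range(n))
    return math.sqrt(max(1.0 - bc, 0.0))

def hellinger_closed(mu1, s1, mu2, s2):
    # Closed-form Bhattacharyya coefficient for two univariate Gaussians.
    bc = math.sqrt(2.0 * s1 * s2 / (s1 ** 2 + s2 ** 2)) \
         * math.exp(-((mu1 - mu2) ** 2) / (4.0 * (s1 ** 2 + s2 ** 2)))
    return math.sqrt(1.0 - bc)
```

Identical Gaussians give H = 0, and the distance grows toward 1 as the means separate, matching the [0,1] range noted above.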

Relevant publications