Information theoretic learning (ITL) is an important research area in signal processing and machine learning. It uses entropies and divergences from information theory in place of the conventional statistical descriptors of variance and covariance. The empirical minimum error entropy (MEE) algorithm is a typical approach within this framework and has been successfully applied to both regression and classification problems. In this talk, I will discuss the consistency analysis of the MEE algorithm. For this purpose, we introduce two types of consistency. Error entropy consistency, which requires the error entropy of the learned function to approximate the minimum error entropy, is proven when the bandwidth parameter tends to 0 at an appropriate rate. Regression consistency, which requires the learned function to approximate the regression function, is however a more complicated issue. We prove that error entropy consistency implies regression consistency for homoskedastic models, where the noise is independent of the input variable. For heteroskedastic models, however, a counterexample is constructed to show that the two types of consistency do not necessarily coincide. A surprising result is that regression consistency holds when the bandwidth parameter is sufficiently large. Regression consistency for two classes of special models is shown to hold with a fixed bandwidth parameter. These results illustrate the subtlety of the MEE algorithm.
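As a rough illustration of the quantity the talk studies, the empirical error entropy is commonly estimated in ITL by applying a Parzen (Gaussian) window of bandwidth h to the pairwise differences of the prediction errors; the sketch below uses Rényi's quadratic entropy estimator, a standard choice in this literature, though the talk's exact formulation may differ.

```python
import numpy as np

def empirical_error_entropy(errors, h):
    """Empirical Renyi quadratic error entropy with a Gaussian
    Parzen window of bandwidth h (an illustrative, standard ITL
    estimator, not necessarily the speaker's exact definition):

        H_h(e) = -log( (1/n^2) * sum_{i,j} G_h(e_i - e_j) ),

    where G_h is the Gaussian kernel with scale h. The MEE
    algorithm seeks a predictor whose errors minimize this value.
    """
    e = np.asarray(errors, dtype=float)
    diffs = e[:, None] - e[None, :]                     # all pairwise error differences
    kernel = np.exp(-diffs**2 / (2 * h**2)) / (h * np.sqrt(2 * np.pi))
    information_potential = kernel.mean()               # (1/n^2) * double sum
    return -np.log(information_potential)

# Tightly concentrated errors yield lower entropy than spread-out errors,
# which is why minimizing H_h drives the errors toward concentration:
tight = empirical_error_entropy([0.01, -0.02, 0.0, 0.015], h=0.5)
loose = empirical_error_entropy([1.5, -2.0, 0.7, -1.1], h=0.5)
assert tight < loose
```

Note that h is the bandwidth parameter discussed in the abstract: the consistency results concern its behavior as it tends to 0, stays fixed, or grows large.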
Dr. Qiang Wu's main research areas are statistical modeling and computation, machine learning, high-dimensional data mining and its applications, and computational harmonic analysis. He has published more than 40 papers in leading international journals such as the Journal of Machine Learning Research and Applied and Computational Harmonic Analysis, and is the author of the monograph "Classification and Regularization in Learning Theory". He has made landmark contributions to learning theory, particularly in classification learning and regularized regression, which have attracted wide attention from researchers at home and abroad.