© The Institution of Engineering and Technology
The Epanechnikov kernel (EK) is a popular kernel function that has achieved promising results in many machine learning applications. Although the EK is widely used, its basic formulation requires fully observed input feature vectors. A method is proposed to estimate the EK when these input vectors are only partially observed, i.e. some of their features are missing. In the proposed method, named the expected EK, the expected value of the kernel function is computed given the distribution of the data and the observed values of the feature vectors.
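To make the idea concrete, the expectation can be approximated numerically: sample the missing features from an assumed model of the data, evaluate the kernel on each completed vector, and average. The sketch below is an illustration only, not the paper's method (which derives the expectation in closed form); the kernel form max(0, 3/4·(1 − ‖x − y‖²/h²)), the independent Gaussian feature model, and all function names are assumptions of this sketch.

```python
import numpy as np

def epanechnikov(x, y, h=1.0):
    """Epanechnikov-type kernel on R^d (assumed form for this sketch)."""
    u2 = np.sum((x - y) ** 2) / h**2
    return max(0.0, 0.75 * (1.0 - u2))

def expected_epanechnikov(x_obs, y, mean, std, h=1.0, n_samples=2000, seed=0):
    """Monte Carlo estimate of E[k(x, y)] when some entries of x are
    missing (encoded as NaN). Missing entries are drawn i.i.d. from an
    assumed Gaussian model of the data; the observed entries are kept."""
    rng = np.random.default_rng(seed)
    miss = np.isnan(x_obs)
    total = 0.0
    for _ in range(n_samples):
        x = x_obs.copy()
        x[miss] = rng.normal(mean[miss], std[miss])  # impute by sampling
        total += epanechnikov(x, y, h)
    return total / n_samples
```

When no feature is missing, the estimate reduces exactly to the ordinary kernel value; with missing features it returns an average of kernel values over plausible completions, which is the quantity the expected EK targets.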