© The Institution of Engineering and Technology
During the online learning process of support vector machines (SVMs), when a newly added sample violates the Karush–Kuhn–Tucker (KKT) conditions, it should become a new support vector (SV), and existing samples may migrate between the SV and non-SV sets. Since the performance of an SVM model is determined by its SVs, the model should be updated with the newly added SVs; selecting high-quality candidate SVs therefore leads to better learning accuracy, whereas low-quality candidate SVs may cause low learning efficiency and unnecessary updates. A new strategy is proposed to select candidate SVs according to two new criteria: importance and informativeness. Furthermore, a mixed local–global regularisation method is applied during the online learning process to improve the penalty coefficients. Experimental results show that the proposed algorithm achieves a faster speed and a higher accuracy than traditional methods.
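To illustrate the KKT-violation test that decides whether a newly arriving sample becomes a candidate SV, the following is a minimal sketch, not the authors' implementation: a new sample enters with a Lagrange multiplier of zero, so the KKT conditions require its margin y·f(x) ≥ 1, and a smaller margin marks it as a candidate SV that triggers an update. The RBF kernel, `gamma`, and `tol` below are illustrative assumptions, not quantities taken from the letter.

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=0.5):
    """Gaussian (RBF) kernel between two feature vectors (assumed kernel choice)."""
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

def decision_value(x_new, sv_x, sv_y, sv_alpha, b, gamma=0.5):
    """f(x) = sum_i alpha_i * y_i * K(x_i, x) + b over the current SVs."""
    k = np.array([rbf_kernel(x, x_new, gamma) for x in sv_x])
    return np.dot(sv_alpha * sv_y, k) + b

def violates_kkt(x_new, y_new, sv_x, sv_y, sv_alpha, b, tol=1e-3):
    """A new sample has alpha = 0, so KKT demands y * f(x) >= 1;
    a margin below 1 - tol flags the sample as a candidate SV."""
    margin = y_new * decision_value(x_new, sv_x, sv_y, sv_alpha, b)
    return margin < 1.0 - tol
```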