New SV selection strategy and local–global regularisation method for improving online SVM learning

During the online learning process of support vector machines (SVMs), when a newly added sample violates the Karush–Kuhn–Tucker (KKT) conditions, it becomes a new support vector (SV) and may cause old samples to migrate between the SV set and the non-SV set. The performance of an SVM model is largely determined by its SVs, and the model is updated using newly added SVs; selecting high-quality candidate SVs therefore leads to better learning accuracy, whereas low-quality candidate SVs may result in low learning efficiency and unnecessary updates. A new strategy is proposed to select candidate SVs according to two new criteria: importance and informativeness. Furthermore, a mixed local–global regularisation method is applied during the online learning process to improve the penalty coefficients. Experimental results show that the proposed algorithm achieves better performance, with faster speed and higher accuracy, than traditional methods.
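To make the update trigger concrete, the Python sketch below illustrates the standard KKT check that decides whether a new sample should become a candidate SV, together with illustrative stand-ins for the other two ingredients the abstract mentions. The paper's exact importance/informativeness criteria and its local–global blending formula are not given here, so `informativeness`, `local_global_C`, and the blend weight `lam` are hypothetical proxies (margin distance and kernel density), not the authors' definitions.

import numpy as np

def violates_kkt(alpha, y, f_x, C, tol=1e-3):
    """KKT check for one sample in a trained soft-margin SVM.

    alpha: Lagrange multiplier, y: label in {-1, +1},
    f_x: current decision value f(x), C: penalty coefficient.
    A violating new sample is the trigger for an online update.
    """
    m = y * f_x
    if alpha <= tol:           # non-SV: requires y*f(x) >= 1
        return m < 1.0 - tol
    if alpha >= C - tol:       # bound SV: requires y*f(x) <= 1
        return m > 1.0 + tol
    return abs(m - 1.0) > tol  # free SV: requires y*f(x) = 1

def informativeness(f_x):
    """Illustrative proxy only: samples near the margin (small |f(x)|)
    are treated as more informative for refining the boundary."""
    return 1.0 / (1.0 + abs(f_x))

def local_global_C(x, X_train, C_global, lam=0.5, bandwidth=1.0):
    """Hypothetical mixed local-global penalty: blend a global C with a
    local term driven by kernel density around x (not the paper's
    exact formula)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    local = np.mean(np.exp(-d2 / (2.0 * bandwidth ** 2)))
    return lam * C_global + (1.0 - lam) * C_global * local

# Minimal usage with toy values; in a real online loop, f(x) would come
# from the current SVM model rather than being hard-coded.
if __name__ == '__main__':
    X_train = np.random.randn(50, 2)
    x_new, y_new, f_new = np.zeros(2), 1.0, 0.4
    C_i = local_global_C(x_new, X_train, C_global=1.0)
    if violates_kkt(alpha=0.0, y=y_new, f_x=f_new, C=C_i):
        print('candidate SV, informativeness =', informativeness(f_new))

Under these assumed proxies, a new sample is enrolled as a candidate SV only if it violates the KKT conditions under its per-sample penalty C_i; ranking candidates by the informativeness score then filters out low-quality candidates before the model update.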
