Low-power enhanced system-on-chip design for sequential minimal optimisation learning core with tri-layer bus and butterfly-path accelerator

A tri-layer bus system-on-chip (SoC) and a butterfly-path accelerator are used to enhance the system-level performance of a sequential minimal optimisation (SMO) learning core. The tri-layer bus architecture sustains the transfer rate the core requires, while the butterfly-path accelerator exploits symmetric memory access to resolve bottlenecks during linear prediction cepstral coefficient extraction. The design increases speed and flexibility without a substantial increase in area. For chip implementation, the SoC is synthesised, placed, and routed with the TSMC 90 nm technology library; the die measures 2.09 mm × 2.09 mm and the power consumption is 8.9 mW. Simulation results show that the proposed architecture is 2.4 times faster than an equivalent design without the butterfly path, and that clock down-sampling combined with voltage scaling reduces the power consumed by the chip by a factor of 8.5. The experimental results confirm the speed and power improvements provided by the proposed architecture and methods.
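As a point of reference, the 8.5-fold power figure is consistent with the first-order CMOS dynamic power model, in which power scales linearly with clock frequency and quadratically with supply voltage. The operating points in the worked example below are illustrative assumptions, not values taken from the paper:

    % First-order dynamic power of a CMOS circuit
    P_{\mathrm{dyn}} = \alpha C_{\mathrm{eff}} V_{dd}^{2} f
    % Combined reduction from frequency and voltage scaling
    \frac{P_{1}}{P_{2}} = \frac{f_{1}}{f_{2}} \left( \frac{V_{1}}{V_{2}} \right)^{2}
    % e.g. down-sampling the clock by 4 and scaling V_{dd} from 1.0 V to 0.69 V:
    \frac{P_{1}}{P_{2}} = 4 \times \left( \frac{1.0}{0.69} \right)^{2} \approx 8.4

The learning core itself implements sequential minimal optimisation, which repeatedly solves a two-variable quadratic subproblem in closed form. The following C sketch shows one such pairwise update, following Platt's standard derivation; it is illustrative only and does not reproduce the paper's hardware data path. The kernel matrix K, label vector y (entries ±1), error cache E, and regularisation bound C are assumed inputs, and the threshold and error-cache updates that follow each step are omitted:

    #include <math.h>

    /* One analytical SMO step for a chosen multiplier pair (i, j).
     * Returns 1 if the pair was updated, 0 if the step is skipped. */
    static int smo_step(int i, int j, int n, const double *K,
                        const double *y, double *alpha, double *E, double C)
    {
        /* Curvature of the objective along the pair; must be positive. */
        double eta = K[i*n + i] + K[j*n + j] - 2.0 * K[i*n + j];
        if (eta <= 0.0) return 0;

        /* Box bounds keeping both multipliers in [0, C]. */
        double L, H;
        if (y[i] != y[j]) {
            L = fmax(0.0, alpha[j] - alpha[i]);
            H = fmin(C,   C + alpha[j] - alpha[i]);
        } else {
            L = fmax(0.0, alpha[i] + alpha[j] - C);
            H = fmin(C,   alpha[i] + alpha[j]);
        }
        if (L >= H) return 0;

        /* Unconstrained optimum for alpha[j], clipped to the box. */
        double aj_new = alpha[j] + y[j] * (E[i] - E[j]) / eta;
        aj_new = fmin(fmax(aj_new, L), H);

        /* alpha[i] moves to keep the equality constraint satisfied. */
        alpha[i] += y[i] * y[j] * (alpha[j] - aj_new);
        alpha[j]  = aj_new;
        return 1;
    }

A hardware core repeats this step over heuristically chosen pairs until the Karush-Kuhn-Tucker conditions hold within tolerance; because each step is a small closed-form update, it maps naturally onto a fixed data path.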

Inspec keywords: optimisation; system-on-chip

Other keywords: linear prediction cepstral coefficients extraction; SoC; clock down-sampling; system-level performance; sequential minimal optimisation learning core; butterfly-path accelerator; voltage scaling; low-power enhanced system-on-chip design; tri-layer bus architecture

Subjects: System-on-chip; Optimisation techniques
