Continuous restricted Boltzmann machine with an implementable training algorithm

The authors introduce a continuous stochastic generative model for continuous data with a simple and reliable training algorithm. The architecture is a continuous restricted Boltzmann machine in which a single step of Gibbs sampling, minimising contrastive divergence, replaces a time-consuming relaxation search. With a small approximation, the training algorithm requires only addition and multiplication, making it computationally inexpensive in both software and hardware. The capabilities of the model are demonstrated and explored with both artificial and real data.
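The training loop the abstract describes is simple enough to sketch. The following NumPy code is a minimal illustration of one-step Gibbs sampling with a contrastive-divergence update for a continuous restricted Boltzmann machine; it is an assumption-laden sketch, not the paper's exact algorithm. The noisy-sigmoid unit with per-unit noise-control parameters `a_h`/`a_v`, the noise level `sigma`, the learning rates, and the omission of bias units are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_units(inputs, W, a, sigma=0.2, lo=-1.0, hi=1.0):
    # Continuous stochastic unit: a bounded sigmoid of the noisy net input,
    # s_j = lo + (hi - lo) * sigmoid(a_j * (sum_i w_ij s_i + sigma * N(0, 1))).
    # The per-unit parameter a_j controls the slope, and hence how strongly
    # the injected noise shows up in the unit's output.
    net = inputs @ W + sigma * rng.standard_normal((inputs.shape[0], W.shape[1]))
    return lo + (hi - lo) / (1.0 + np.exp(-a * net))

def cd1_step(v0, W, a_h, a_v, lr_w=0.01, lr_a=0.01):
    # One training update using a single Gibbs step (CD-1).
    h0 = sample_units(v0, W, a_h)      # hidden states driven by the data
    v1 = sample_units(h0, W.T, a_v)    # one-step reconstruction of the data
    h1 = sample_units(v1, W, a_h)      # hidden states driven by the reconstruction
    n = v0.shape[0]
    # Weights move towards the data correlations and away from the
    # reconstruction correlations: only additions and multiplications.
    W += lr_w * (v0.T @ h0 - v1.T @ h1) / n
    # Noise-control parameters follow the same data-minus-reconstruction
    # pattern on squared activations (assumed form; the paper's "small
    # approximation" simplifies such updates further for hardware).
    a_h += (lr_a / a_h**2) * ((h0**2).mean(axis=0) - (h1**2).mean(axis=0))
    a_v += (lr_a / a_v**2) * ((v0**2).mean(axis=0) - (v1**2).mean(axis=0))
    return W, a_h, a_v

# Toy usage: fit 100 two-dimensional samples with four hidden units.
v = rng.uniform(-0.5, 0.5, size=(100, 2))
W = 0.1 * rng.standard_normal((2, 4))
a_h, a_v = np.ones(4), np.ones(2)
for _ in range(1000):
    W, a_h, a_v = cd1_step(v, W, a_h, a_v)
```

Note that, apart from evaluating the sigmoid itself, both updates reduce to additions and multiplications of locally available quantities, which is what makes this style of training attractive for VLSI implementation.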

Inspec keywords: approximation theory; signal sampling; stochastic processes; Boltzmann machines

Other keywords: addition; artificial data; real data; embedded intelligent systems; continuous stochastic generative model; implementable training algorithm; VLSI implementation; Gibbs sampling; minimising contrastive divergence; continuous restricted Boltzmann machine; computationally inexpensive algorithm; approximation; continuous data processing; multiplication

Subjects: Interpolation and function approximation (numerical analysis); Signal processing theory; Other topics in statistics; Signal processing and detection; Neural nets (theory)
