
Neural network architectures for content-addressable memory

IEE Proceedings F (Radar and Signal Processing)

The paper investigates whether neural network content-addressable memories (CAMs) can compete with the non-neural alternatives which are currently available. The storage and retrieval of 64-bit patterns is used as a test problem which reflects the requirements of today's computer technology. The two main strategies available for implementing a CAM with a neural network architecture, feedback networks and two-stage CAMs, are investigated in detail, with particular attention to their ability to retrieve patterns from corrupted input data. The storage capacity of the Hopfield network is very poor, although it can be improved with an iterative algorithm such as the threshold algorithm described in this paper. However, the possibility of generating spurious patterns always remains with feedback networks. Two-stage CAMs are much more efficient, provided that an appropriate algorithm is used for the input classification stage. The well-known perceptron and least-mean-squares algorithms need to be modified if they are to cope with corrupted input patterns, but the optimal classifier for the type of problem under consideration is the minimum-distance classifier (or Hamming network for binary patterns). The implementation of the latter in analogue VLSI is discussed in the last section of the paper as an alternative to conventional technology.
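
As a concrete illustration of the two strategies the paper compares, the sketch below contrasts a Hopfield feedback network (Hebbian outer-product storage) with a minimum-distance classifier on the 64-bit retrieval problem. This is a minimal Python sketch, not the paper's implementation: the number of stored patterns, the corruption level, the synchronous update rule, and all function names are illustrative assumptions, and the paper's threshold algorithm for improving Hopfield storage is not reproduced here.

import numpy as np

# Minimal sketch (assumed parameters, not the paper's code): 4 stored
# 64-bit patterns in bipolar {-1,+1} coding; 8 bits of a probe corrupted.
rng = np.random.default_rng(0)
N, M = 64, 4
patterns = rng.choice([-1, 1], size=(M, N))

# Feedback strategy: Hopfield network with Hebbian (outer-product) weights.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)          # no self-connections

def hopfield_recall(x, steps=20):
    """Synchronous updates until a fixed point (or step limit) is reached."""
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1   # break ties deterministically
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Two-stage strategy, classification stage: minimum-distance classifier
# (a Hamming network computes the same nearest-pattern decision for
# binary patterns).
def min_distance_recall(x):
    distances = np.sum(patterns != x, axis=1)   # Hamming distance to each memory
    return patterns[np.argmin(distances)]

probe = patterns[0].copy()
flip = rng.choice(N, size=8, replace=False)     # corrupt 8 of the 64 bits
probe[flip] *= -1

print("Hopfield recovered:    ", np.array_equal(hopfield_recall(probe), patterns[0]))
print("Min-distance recovered:", np.array_equal(min_distance_recall(probe), patterns[0]))

Because the minimum-distance classifier always maps a probe to the nearest stored pattern, it cannot settle into a spurious state; this is the sense in which it is optimal for the problem considered here, whereas the feedback network may converge to a spurious attractor even when operated within its storage capacity.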

http://iet.metastore.ingenta.com/content/journals/10.1049/ip-f-2.1991.0006