Piecewise linear approximation applied to nonlinear function of a neural network


IEE Proceedings - Circuits, Devices and Systems

An efficient piecewise linear approximation of a nonlinear function (PLAN) is proposed. It uses a simple digital gate design to perform a direct transformation from X to Y, where X is the input and Y is the approximated sigmoidal output. PLAN is then applied at the outputs of an artificial neural network to realise the nonlinear activation. A comparison with two other sigmoidal approximation techniques for digital circuits shows that the proposed circuit is fast and compact while producing the closest approximation to the sigmoid function. The hardware implementation of PLAN has been verified by VHDL simulation with Mentor Graphics running under the UNIX operating system.
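The idea behind PLAN — replacing the exponential sigmoid with a few linear segments whose coefficients reduce to shifts and adds in hardware — can be sketched in software as follows. The segment boundaries and power-of-two slopes below are illustrative choices for such an approximation, not the segment table from the paper:

```python
import math

def pla_sigmoid(x):
    """Piecewise linear approximation of the logistic sigmoid 1/(1+e^-x).

    Illustrative segments with power-of-two slopes (cheap in digital logic);
    negative inputs use the symmetry sigma(-x) = 1 - sigma(x).
    """
    negative = x < 0
    z = abs(x)
    if z >= 5.0:             # saturation region
        y = 1.0
    elif z >= 2.375:         # shallow segment near saturation
        y = 0.03125 * z + 0.84375   # slope 2^-5
    elif z >= 1.0:           # middle segment
        y = 0.125 * z + 0.625       # slope 2^-3
    else:                    # steepest segment around the origin
        y = 0.25 * z + 0.5          # slope 2^-2
    return 1.0 - y if negative else y

if __name__ == "__main__":
    for x in (-6.0, -2.0, -0.5, 0.0, 0.5, 2.0, 6.0):
        exact = 1.0 / (1.0 + math.exp(-x))
        print(f"x={x:5.1f}  approx={pla_sigmoid(x):.4f}  exact={exact:.4f}")
```

Because every slope is a negative power of two, a hardware version of each segment needs only a right shift and an addition, which is what makes this style of approximation attractive for digital VLSI neural networks.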


