IET Computers & Digital Techniques
Volume 10, Issue 3, May 2016
Design of a novel energy efficient topology for maximum magnitude generator
- Author(s): Swaminathan Kathirvel ; Rajkumar Jangre ; Seokbum Ko
- Source: IET Computers & Digital Techniques, Volume 10, Issue 3, pp. 93–101
- DOI: 10.1049/iet-cdt.2015.0066
- Type: Article

A novel combinational digital device for finding the maximum magnitude among n input numbers is proposed. This maximum magnitude generator (MaxMG) produces the maximum magnitude as output by applying a bit-by-bit approach to multiple multi-bit input values simultaneously. MaxMG generates the output from the most significant bit (MSB) to the least significant bit (LSB) in parallel, using a minimum number of gates across the multi-bit inputs. A minimum magnitude generator is also derived by applying the dual function to the MaxMG. The proposed design is implemented using the Synopsys 90 nm generic library, with the RTL written in Verilog HDL. Its performance is compared with a rank-based Kth-max selection algorithm, a parallel tree-based maximum generator using a comparator–multiplexer combination, an array-based (AB) maximum finder, and an improved quad tree (IQT). The bit-by-bit parallel processing of the inputs from MSB to LSB, together with a simple architecture using a minimum number of gates, makes the proposed design more energy efficient than the Kth-max algorithm, the tree-based maximum finder, the AB maximum finder, and the IQT architecture.
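The MSB-to-LSB elimination that MaxMG performs in hardware can be modelled in software: at each bit position, candidates whose bit is 0 are discarded whenever some surviving candidate has a 1 there. A minimal Python sketch of this idea, assuming unsigned inputs (not the authors' gate-level Verilog design; the names `max_magnitude`, `values` and `width` are illustrative):

```python
def max_magnitude(values, width):
    """Bit-by-bit (MSB-to-LSB) maximum selection among unsigned
    multi-bit inputs: at each bit position, if any surviving candidate
    has a 1, that becomes the output bit and candidates with a 0 there
    are eliminated; otherwise all candidates survive."""
    survivors = list(values)
    result = 0
    for pos in range(width - 1, -1, -1):
        ones = [v for v in survivors if (v >> pos) & 1]
        if ones:                    # some candidate has a 1 at this bit
            result |= 1 << pos      # so the maximum has a 1 here
            survivors = ones        # drop candidates with a 0 here
    return result
```

Because every bit position is decided from the same input bits, a hardware version can evaluate all positions in parallel, which is the source of the design's efficiency claim.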
Scalable GF(p) Montgomery multiplier based on a digit–digit computation approach
- Author(s): M. Morales-Sandoval and A. Diaz-Perez
- Source: IET Computers & Digital Techniques, Volume 10, Issue 3, pp. 102–109
- DOI: 10.1049/iet-cdt.2015.0055
- Type: Article

This study presents a scalable hardware architecture for modular multiplication in prime fields GF(p). A novel iterative digit–digit Montgomery multiplication (IDDMM) algorithm is proposed, and two hardware architectures that compute it are described. The input operands (multiplicand, multiplier and modulus) are represented in radix β = 2^k. Multiplication over GF(p) is possible using almost the same hardware, since the complexity of the multiplier's kernel module depends mainly on k rather than on p. The novel GF(p) multiplier architectures were evaluated on three Xilinx FPGA families. Design trade-offs were analysed for different operand sizes commonly used in cryptography and for different digit sizes. The proposed IDDMM designs are well suited to modern FPGAs, making use of the available dedicated multipliers and memory blocks to drastically reduce use of the FPGA's standard logic while keeping acceptable performance compared with other implementation approaches. In the Virtex5 implementation, the proposed MM multiplier reaches a throughput of 242 Mbps using only 219 FPGA slices, computing a 1024-bit modular multiplication in 4.21 μs. This uses 26 times fewer area resources than similar works in the literature, with a 7× improvement in efficiency.
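Digit-serial Montgomery multiplication can be sketched in software: each iteration consumes one radix-β digit of the multiplicand, adds a multiple of the modulus chosen so the accumulator's low digit becomes zero, and shifts right by one digit. A hedged Python model of this standard scheme, assuming Python 3.8+ for the modular inverse (the authors' IDDMM additionally iterates digit-by-digit over the second operand in hardware; this is not their RTL):

```python
def montgomery_mul(x, y, m, k, s):
    """Digit-serial Montgomery multiplication in radix beta = 2**k.
    Computes x*y*beta**(-s) mod m for an odd modulus m, with x, y < m
    and m < beta**s (software model of the digit-level scheme)."""
    beta = 1 << k
    m_prime = (-pow(m, -1, beta)) % beta     # -m^(-1) mod beta (Py >= 3.8)
    a = 0
    for i in range(s):
        xi = (x >> (i * k)) & (beta - 1)     # i-th base-beta digit of x
        a += xi * y                          # accumulate partial product
        u = ((a & (beta - 1)) * m_prime) & (beta - 1)
        a = (a + u * m) >> k                 # low digit is now 0: exact shift
    if a >= m:                               # one conditional subtraction
        a -= m
    return a
```

The accumulator stays below 2m throughout, so a single final subtraction suffices; in hardware only k-bit-by-digit products are needed, which is why the kernel's complexity depends on k rather than on the size of p.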
Embedding of signatures in reconfigurable scan architecture for authentication of intellectual properties in system-on-chip
- Author(s): Debasri Saha and Susmita Sur-Kolay
- Source: IET Computers & Digital Techniques, Volume 10, Issue 3, pp. 110–118
- DOI: 10.1049/iet-cdt.2015.0051
- Type: Article

Signature-based authentication is often used to authenticate hardware intellectual property (IP) when it is reused in a plug-and-play system-on-chip. A signature embedded in the functional/test component of a hardware IP can easily be verified, since it can be generated and observed as the functional/scan output of the IP for a certain input key vector. An existing scan-based approach embeds the signature by reordering the scan cells of a single scan (SS) chain; however, it is not applicable to recent reconfigurable scan architectures with reduced test application time. We propose a scheme for embedding two distinct signatures separately in a reconfigurable scan architecture and verifying them without conflict from the packaged chip, using two distinct test modes of the reconfigurable architecture: scan tree mode and SS mode. The two signatures may include one from a logic IP source and the other from a physical IP source. The scheme minimises both routing and power overhead. Experimental results on design overhead and robustness for the ISCAS89 benchmarks are very encouraging.
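The scan-cell reordering idea can be illustrated in miniature: given the response bits that a known key vector loads into the scan cells, choose a scan-out ordering of the cells whose bitstream spells the signature. A toy Python sketch under that simplified model (function and variable names are hypothetical; a real scheme must also respect routing constraints and the tree/SS modes of the reconfigurable architecture):

```python
from collections import deque

def embed_signature(response_bits, signature_bits):
    """Choose a scan-out ordering of scan cells so that the captured
    response to a known key vector is observed as the signature.
    Works when the response holds enough 0s and 1s to spell it."""
    zeros = deque(i for i, b in enumerate(response_bits) if b == 0)
    ones = deque(i for i, b in enumerate(response_bits) if b == 1)
    order = []
    for bit in signature_bits:
        pool = ones if bit == 1 else zeros
        if not pool:
            raise ValueError("response cannot realise this signature")
        order.append(pool.popleft())    # next cell whose bit matches
    return order                        # scan-out order of cell indices
```

Verification then amounts to applying the key vector and comparing the scanned-out stream against the expected signature.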
Simulation-based method for optimum microfluidic sample dilution using weighted mix-split of droplets
- Author(s): Nilina Bera ; Subhashis Majumder ; Bhargab B. Bhattacharya
- Source: IET Computers & Digital Techniques, Volume 10, Issue 3, pp. 119–127
- DOI: 10.1049/iet-cdt.2015.0091
- Type: Article

Digital microfluidics has recently emerged as an effective technology providing inexpensive yet reliable solutions to various biomedical and healthcare applications. On-chip dilution of a fluid sample to a desired concentration is an important problem for droplet-based microfluidic systems. Existing dilution algorithms deploy a sequence of balanced mix-split steps, in which two unit-volume droplets of different concentrations are mixed and then split into two equal-sized droplets. In this study, the authors investigate the problem of generating dilutions using a combination of (1:1) and (1:2) mix-split operations, called weighted dilution (WD), and present a layout architecture to implement such WD steps. They also describe a simulation-based method for finding the optimal mix-split steps for a given dilution under various criteria, such as minimisation of waste, sample, or buffer droplets. The sequences can be stored in a look-up table a priori and used later, in real time, for fast generation of actuation sequences. Compared with the balanced (1:1) model, the proposed WD scheme reduces the number of mix-split steps by around 22% and the number of waste droplets by 18%.
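The simulation idea can be sketched with a breadth-first search over mix sequences. In this deliberately simplified model, each step mixes the current droplet only with buffer, so a (1:1) mix halves the concentration and a (1:2) mix (one part current, two parts buffer) divides it by 3; waste and volume accounting, which the authors' method tracks, are ignored here (all names are illustrative):

```python
from collections import deque
from fractions import Fraction

def dilution_steps(target, max_steps=10):
    """BFS for the fewest mix steps that dilute a raw sample
    (concentration 1) with buffer (concentration 0) to `target`,
    using (1:1) and (1:2) mixes with buffer only."""
    start = Fraction(1)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        c, steps = queue.popleft()
        if c == target:
            return steps                 # shortest sequence of mix labels
        if len(steps) == max_steps:
            continue
        for label, nxt in (("1:1", c / 2), ("1:2", c / 3)):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + [label]))
    return None                          # unreachable within max_steps
```

A full simulator would also allow mixing two previously produced droplets, which is what lets the weighted scheme cut step and waste counts relative to the balanced model.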
BFT: a placement algorithm for non-rectangle task model in reconfigurable computing system
- Author(s): Chaohui Wang ; Weiguo Wu ; Shiqiang Nie ; Depei Qian
- Source: IET Computers & Digital Techniques, Volume 10, Issue 3, pp. 128–137
- DOI: 10.1049/iet-cdt.2015.0095
- Type: Article

The task scheduling and placement problem is one of the most significant and time-consuming parts of a reconfigurable computing (RC) system. Many investigators have explored the subject, and most traditional studies concentrate on the rectangular task model, which is inconsistent with the actual shapes of tasks placed on a field-programmable gate array (FPGA) but simplifies the system. The rectangular task model produces internal fragments, which reduce the utilisation of the FPGA's reconfigurable resources. In this study, a task-model transformation strategy and an innovative best-fit transformation (BFT) placement algorithm are proposed for a non-rectangular task model, to improve the rejection rate and total execution time of an RC system. In simulation experiments, the BFT algorithm reduced the rejection rate by 15% and 7% compared with the first-fit and best-fit algorithms, respectively. A multi-shape placement algorithm and a 3D compaction algorithm are also compared with the BFT algorithm: BFT achieves a shorter total execution time for short laxity periods and a lower rejection rate for large laxity periods. Compared with the 3D compaction algorithm, the proposed algorithm reduced the total execution time by up to 10.79%.
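The best-fit rule that BFT builds on can be sketched simply: among the free regions that can hold a task, choose the one leaving the least unused area, which curbs fragmentation. A toy Python version for rectangular free regions (illustrative names; the paper's BFT additionally transforms non-rectangular task shapes before placement):

```python
def best_fit_place(free_rects, task_w, task_h):
    """Pick the free rectangle (x, y, w, h) that fits a task_w x task_h
    task with the least leftover area; return its index, or None if
    the task fits nowhere (best-fit placement rule, toy model)."""
    best, best_waste = None, None
    for i, (x, y, w, h) in enumerate(free_rects):
        if task_w <= w and task_h <= h:           # task fits here
            waste = w * h - task_w * task_h       # leftover area
            if best_waste is None or waste < best_waste:
                best, best_waste = i, waste
    return best
```

First-fit, by contrast, takes the first region that fits, which is faster per placement but tends to leave larger unusable fragments and hence a higher rejection rate.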
Combined input test data volume reduction for mixed broadside and skewed-load test sets
- Author(s): Irith Pomeranz
- Source: IET Computers & Digital Techniques, Volume 10, Issue 3, pp. 138–145
- DOI: 10.1049/iet-cdt.2015.0117
- Type: Article

Several approaches exist for reducing the input test data volume beyond the use of test data compression. These approaches use each stored test to apply several different tests. This study develops an approach that combines the advantages of several existing approaches for applying broadside or skewed-load tests for transition faults. The importance of the combination is that it magnifies the possibility of producing new broadside and skewed-load tests from a stored test, allowing the number of stored tests to be reduced further. The combined approach is based on clocking the circuit in functional or shift mode for several clock cycles after a scan-in operation, in order to bring it to different states. Each state can then be used as the initial state of a different broadside or skewed-load test.
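The state-expansion idea can be modelled abstractly: starting from the stored scan-in state, clock the circuit for several cycles and collect each distinct state reached, since each can initialise a different test. A minimal Python sketch with an abstract next-state function standing in for the circuit (all names are illustrative, not the paper's notation):

```python
def expanded_initial_states(next_state, scan_in_state, n_cycles):
    """From one stored scan-in state, apply 0..n_cycles functional
    clock cycles and collect the distinct states reached; each can
    serve as the initial state of a separate broadside or
    skewed-load test (toy model of the state-expansion idea)."""
    states, s = [], scan_in_state
    for _ in range(n_cycles + 1):
        if s not in states:          # keep each reachable state once
            states.append(s)
        s = next_state(s)            # one functional clock cycle
    return states
```

One stored test thus yields several applied tests, which is how the stored-test count is reduced.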
Articles in this issue:
- Design of a novel energy efficient topology for maximum magnitude generator
- Scalable GF(p) Montgomery multiplier based on a digit–digit computation approach
- Embedding of signatures in reconfigurable scan architecture for authentication of intellectual properties in system-on-chip
- Simulation-based method for optimum microfluidic sample dilution using weighted mix-split of droplets
- BFT: a placement algorithm for non-rectangle task model in reconfigurable computing system
- Combined input test data volume reduction for mixed broadside and skewed-load test sets
Most cited content for this Journal
- High-performance elliptic curve cryptography processor over NIST prime fields
  Author(s): Md Selim Hossain ; Yinan Kong ; Ehsan Saeedi ; Niras C. Vayalil
  Type: Article
- Majority-based evolution state assignment algorithm for area and power optimisation of sequential circuits
  Author(s): Aiman H. El-Maleh
  Type: Article
- Scalable GF(p) Montgomery multiplier based on a digit–digit computation approach
  Author(s): M. Morales-Sandoval and A. Diaz-Perez
  Type: Article
- Fabrication and characterisation of Al gate n-metal–oxide–semiconductor field-effect transistor, on-chip fabricated with silicon nitride ion-sensitive field-effect transistor
  Author(s): Rekha Chaudhary ; Amit Sharma ; Soumendu Sinha ; Jyoti Yadav ; Rishi Sharma ; Ravindra Mukhiya ; Vinod K. Khanna
  Type: Article
- Adaptively weighted round-robin arbitration for equality of service in a many-core network-on-chip
  Author(s): Hanmin Park and Kiyoung Choi
  Type: Article