IET Computers & Digital Techniques
Volume 3, Issue 2, March 2009
Online ISSN 1751-861X | Print ISSN 1751-8601
Reconfigurable broadcast scan compression using relaxation-based test vector decomposition
- Author(s): A.H. El-Maleh ; M.I. Ali ; A.A. Al-Yamani
- Source: IET Computers & Digital Techniques, Volume 3, Issue 2, pp. 143–161 (19 pages)
- DOI: 10.1049/iet-cdt:20080012
- Type: Article
An effective reconfigurable broadcast scan compression scheme is proposed that employs partitioning of test sets and relaxation-based decomposition of test vectors. Given a constraint on the number of tester channels, the technique classifies test vectors into acceptable and bottleneck vectors. Each bottleneck vector is then decomposed into a set of vectors that meets the given constraint. The acceptable and decomposed test vectors are partitioned into the smallest number of partitions that satisfies the tester channel constraint, reducing the decompressor area. By construction, the technique therefore satisfies a given tester channel constraint at the expense of an increased test vector count and number of partitions, offering a trade-off between test compression, test application time and the area of the test decompression circuitry. Experimental results demonstrate that the proposed technique achieves better compression ratios than other test compression techniques.
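The channel-constraint classification described in this abstract can be illustrated with a minimal sketch (not the authors' code; the string representation of scan-chain loads, with 'X' for don't cares, is an assumption): in broadcast scan, chains driven by the same tester channel must agree wherever both are specified, so a vector is "acceptable" only if its chains fit into at most as many compatibility classes as there are channels.

```python
# Hedged sketch of broadcast-scan channel-constraint checking.
# Each string is one scan chain's load for a single test vector; 'X' = don't care.

def compatible(a: str, b: str) -> bool:
    """Two chain loads are compatible if they never conflict on a
    position where both are specified."""
    return all(x == 'X' or y == 'X' or x == y for x, y in zip(a, b))

def classify(chains: list[str], channels: int) -> str:
    """Greedily group chains into compatibility classes; a vector whose
    chains need more classes than tester channels is a bottleneck vector."""
    classes: list[list[str]] = []
    for chain in chains:
        for cls in classes:
            if all(compatible(chain, member) for member in cls):
                cls.append(chain)
                break
        else:
            classes.append([chain])
    return 'acceptable' if len(classes) <= channels else 'bottleneck'

print(classify(['01X', '0X1', '10X', '1X0'], 2))  # acceptable (2 classes)
print(classify(['01X', '0X1', '10X', '1X0'], 1))  # bottleneck (needs 2 > 1)
```

A bottleneck vector would then be decomposed into several vectors, each individually meeting the channel constraint.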
Interconnect and communication synthesis for distributed register-file microarchitecture
- Author(s): K.-H. Lim ; Y.H. Kim ; T. Kim
- Source: IET Computers & Digital Techniques, Volume 3, Issue 2, pp. 162–174 (13 pages)
- DOI: 10.1049/iet-cdt:20080019
- Type: Article
Distributed register-file microarchitecture (DRFM), which comprises multiple uniform blocks (called islands), each containing a dedicated register file, functional unit(s) and data-routing logic, is an attractive architecture for implementing designs with platform-featured on-chip memory or register-file IP blocks. Compared with a discrete-register-based architecture, DRFM offers an opportunity to reduce the cost of global (inter-island) connections by confining as much of the computation as possible to the inside of the islands. Consequently, two problems must be solved effectively in high-level synthesis for DRFM: (problem 1) scheduling and resource binding that minimise inter-island connections (IICs) and (problem 2) scheduling the data transfers (i.e. communication) through the IICs so as to minimise access conflicts among them. Solving problem 1 minimises the design complexity caused by long interconnect delays, whereas solving problem 2 minimises the additional latency required to resolve register-file access conflicts among inter-island data transfers. This work proposes novel solutions to both problems. For problem 1, previous work proceeds in two separate steps: (i) scheduling and (ii) determining the IICs by binding resources to islands. In contrast, the authors' algorithm, DRFM-int, places primary importance on the cost of interconnections: it minimises that cost first, to fully exploit the effect of scheduling on interconnects, and schedules the operations afterwards. For problem 2, previous work resolves access conflicts by forwarding data directly to the destination island, whereas the authors' algorithm, DRFM-com, efficiently explores an extensive design space of indirect as well as direct data forwarding to find a near-optimal solution. By applying the combined synthesis approach DRFM-int+DRFM-com, the authors reduce IICs by a further 17.9% compared with the conventional DRFM approach, while completely eliminating register-file access conflicts without any increase in latency.
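The quantity this synthesis flow minimises can be made concrete with a small sketch (an illustration only, not the paper's algorithm; the graph and island names are invented): an IIC is a dataflow edge whose producer and consumer operations are bound to different islands.

```python
# Illustrative IIC metric for DRFM-style binding.

def count_iics(edges, binding):
    """edges: (producer_op, consumer_op) pairs; binding: op -> island id.
    Counts dataflow edges that cross island boundaries."""
    return sum(1 for src, dst in edges if binding[src] != binding[dst])

# Toy dataflow graph for a*b + c*d.
edges = [('mul1', 'add'), ('mul2', 'add')]
split = {'mul1': 0, 'mul2': 1, 'add': 0}    # mul2 -> add crosses islands
merged = {'mul1': 0, 'mul2': 0, 'add': 0}   # all computation intra-island
print(count_iics(edges, split))   # 1
print(count_iics(edges, merged))  # 0
```

Binding choices that drive this count down are exactly what confines computation inside the islands and shortens the global interconnect.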
Droop sensitivity of stuck-at fault tests
- Author(s): D. Mitra ; S. Sur-Kolay ; B.B. Bhattacharya
- Source: IET Computers & Digital Techniques, Volume 3, Issue 2, pp. 175–183 (9 pages)
- DOI: 10.1049/iet-cdt:20080020
- Type: Article
In nanometre-scale integrated circuits, simultaneous switching at gates in physical proximity may induce power supply droop and thereby cause timing faults, termed droop faults. During at-speed testing of such chips, two consecutive test vectors in a test sequence may excite droop and thus invalidate the test. Fast application of test vectors may be needed for high-speed testing or for built-in self-test systems, and the occurrence of droop depends strongly on the order in which the test vectors are applied. The effect of droop on fast testing of stuck-at faults is investigated. For combinational circuits, the droop sensitivity of a given test sequence is studied and a re-ordering method that reduces this effect is proposed. Experimental results on benchmark circuits show that the increase in test length needed to achieve droop-insensitive re-ordering is low. Droop excitability in full-scan sequential circuits is also studied.
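A hedged sketch of the re-ordering idea (not the authors' method, which works from actual droop sensitivity): since droop is excited by heavy simultaneous switching between consecutive vectors, one crude proxy is the Hamming distance between adjacent vectors, and a greedy nearest-neighbour ordering keeps that per-step switching low.

```python
# Greedy droop-aware re-ordering using Hamming distance as a switching proxy.

def hamming(a: str, b: str) -> int:
    """Number of bit positions on which two vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def reorder(vectors: list[str]) -> list[str]:
    """Start from the first vector and repeatedly append the remaining
    vector that switches the fewest bits relative to the last one."""
    remaining = list(vectors)
    order = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(order[-1], v))
        remaining.remove(nxt)
        order.append(nxt)
    return order

seq = ['0000', '1111', '0001', '1110']
print(reorder(seq))  # ['0000', '0001', '1111', '1110']
```

In the toy sequence the worst adjacent transition drops from 4 switching bits to 3; the paper's contribution is doing this against real droop behaviour while keeping the test length increase small.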
Design networks-on-chip with latency/bandwidth guarantees
- Author(s): S. Lin ; L. Su ; H. Su ; G. Zhou ; D. Jin ; L. Zeng
- Source: IET Computers & Digital Techniques, Volume 3, Issue 2, pp. 184–194 (11 pages)
- DOI: 10.1049/iet-cdt:20080036
- Type: Article
A method is proposed to guarantee the bandwidth (BW) or latency of a network-on-chip. The method contains three kernels: traffic classification, flit-based switching, and path pre-assignment with link-BW setting. Compared with the traditional circuit-switching method, the proposed method can guarantee the latency between a flit's generation in the source node and its reception in the destination node. It also supports a wide range of traffic types, from latency-critical, low-BW traffic to streaming data that has only a BW requirement. Moreover, a router and a network interface that support the proposed method are implemented, and a maximum-latency formula is derived. Simulation and synthesis results show that the method guarantees BW and latency well at relatively low cost.
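One consequence of path pre-assignment with link-BW setting can be sketched as follows (an illustration under assumed names and structure, not the paper's implementation): once every flow's path is fixed, guaranteeing bandwidth reduces to an admission check that the BW reserved on each link stays within that link's capacity.

```python
# Link-bandwidth admission check for pre-assigned NoC paths.
from collections import defaultdict

def admissible(flows, link_capacity):
    """flows: list of (path, bw), where path is a list of directed links
    (src_router, dst_router). True if no link is over-subscribed."""
    load = defaultdict(float)
    for path, bw in flows:
        for link in path:
            load[link] += bw
    return all(load[link] <= link_capacity for link in load)

flows = [
    ([('r0', 'r1'), ('r1', 'r2')], 0.4),  # stream reserving 40% of link BW
    ([('r1', 'r2'), ('r2', 'r3')], 0.5),  # second stream sharing link r1->r2
]
print(admissible(flows, link_capacity=1.0))  # True  (r1->r2 carries 0.9)
print(admissible(flows, link_capacity=0.8))  # False (r1->r2 exceeds 0.8)
```

The paper's maximum-latency formula then bounds delivery time for flows admitted under such per-link budgets.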
Utilisation of inverse compatibility for test cost reductions
- Author(s): O. Sinanoglu ; M. Al-Mulla ; M. Taha
- Source: IET Computers & Digital Techniques, Volume 3, Issue 2, pp. 195–204 (10 pages)
- DOI: 10.1049/iet-cdt:20080051
- Type: Article
Utilisation of input compatibilities alleviates test costs in many applications, such as reducing linear feedback shift register (LFSR) size and constructing scan trees, among others. Correlation among inputs, identified through analysis of a test set, can be exploited by driving the circuit inputs through fewer channels. The reduction in the number of channels, dictated by the number of compatible input groups, determines the extent of the test cost savings attained. Utilising inverse compatibility alongside direct compatibility helps reduce test costs further. The don't-care bits in a test set, however, complicate the identification of valid compatibility groups that contain both pairwise directly and pairwise inversely compatible inputs, as conflicts may arise when these don't-care bits are specified, leading to an invalid compatibility class. Here, the authors formally model inverse compatibility for the first time, tackling the challenge induced by the specification of don't-care bits during the identification of compatible groups and thus enabling the utilisation of inverse compatibilities along with direct compatibilities. The hyper-graph-based modelling introduced here exploits the full potential of inverse compatibilities. Applications that rely on input compatibility can therefore benefit greatly from the presented techniques in attaining higher test cost savings.
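The two pairwise notions can be sketched minimally (the column-string representation is an assumption): each string is one input's bit stream over the test set, with 'X' for don't cares. Directly compatible inputs can share a tester channel; inversely compatible inputs can share one through an inverter.

```python
# Pairwise direct and inverse input compatibility under don't cares.

def directly_compatible(a: str, b: str) -> bool:
    """No test in which both bits are specified and differ."""
    return all(x == 'X' or y == 'X' or x == y for x, y in zip(a, b))

def inversely_compatible(a: str, b: str) -> bool:
    """No test in which both bits are specified and equal."""
    return all(x == 'X' or y == 'X' or x != y for x, y in zip(a, b))

print(directly_compatible('0X1', '01X'))   # True: can share a channel
print(inversely_compatible('0X1', '1X0'))  # True: can share via an inverter
print(directly_compatible('0X1', '1X0'))   # False: conflicts in tests 1 and 3
```

Note that these pairwise checks are only necessary conditions: specifying an 'X' to satisfy one pair may create a conflict with another input in the same group, which is precisely the group-validity difficulty the paper's hyper-graph model addresses.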
High-level estimation methodology for designing the instruction cache memory of programmable embedded platforms
- Author(s): N. Kroupis and D. Soudris
- Source: IET Computers & Digital Techniques, Volume 3, Issue 2, pp. 205–221 (17 pages)
- DOI: 10.1049/iet-cdt:20080009
- Type: Article
Considering time-to-market restrictions and the increased computational complexity of modern applications, the efficient design of data-intensive digital signal processing (DSP) applications is a challenging problem. A typical design-space exploration procedure, which runs simulation-based tools for various cache parameters, is time-consuming even for low-complexity applications. The main goal is to introduce a novel estimation methodology that provides fast and accurate estimates, during the early design phases, of the number of executed instructions and the instruction cache miss rate of data-intensive applications implemented on a programmable embedded platform. The methodology consists of three stages: the first is platform independent, whereas the remaining two use information from the chosen embedded platform. In particular, specific information is extracted from both the high-level code (C code) of the application and its corresponding assembly code, without carrying out any kind of simulation. The methodology requires only a single execution of the application on a general-purpose processor and uses only the assembly code of the targeted embedded processor. To accelerate the estimation procedure, a software tool implementing the methodology has been developed. Using nine real-life data-intensive applications from different domains of the DSP field, it is shown that the proposed methodology estimates the number of instructions and the instruction cache miss rate with very high accuracy (>90%), while the required time cost is orders of magnitude smaller than that of existing simulation-based approaches.
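For contrast, here is a hedged sketch of what the simulation-based baseline computes per cache configuration (a toy trace-driven miss-rate measurement for a direct-mapped instruction cache; parameters and trace are invented). The methodology above avoids repeating exactly this pass for every candidate configuration.

```python
# Trace-driven miss-rate simulation for a direct-mapped instruction cache.

def miss_rate(trace, cache_lines=4, line_bytes=16):
    """trace: instruction addresses. An access misses when its memory line
    is not the one currently held in the cache line it maps to."""
    cache = [None] * cache_lines
    misses = 0
    for addr in trace:
        line = addr // line_bytes
        idx = line % cache_lines
        if cache[idx] != line:
            cache[idx] = line
            misses += 1
    return misses / len(trace)

# A tight loop of 8 instructions (32 bytes) executed 10 times fits in the
# cache, so only the first pass over its 2 memory lines misses.
trace = [pc for _ in range(10) for pc in range(0, 32, 4)]
print(miss_rate(trace))  # 0.025  (2 misses / 80 accesses)
```

Exploring many (cache_lines, line_bytes) points this way is what makes simulation-based exploration slow on real applications.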
Test vector chains for increasing the fault coverage and numbers of detections
- Author(s): I. Pomeranz and S.M. Reddy
- Source: IET Computers & Digital Techniques, Volume 3, Issue 2, pp. 222–233 (12 pages)
- DOI: 10.1049/iet-cdt:20080056
- Type: Article
The authors introduce the concept of test vector chains, which allows new test vectors to be obtained from existing ones through single-bit changes. A test vector chain is defined for a pair of test vectors t1 and t2: it consists of a sequence of single-bit changes that gradually modifies t1 into t2. The authors demonstrate that, starting from a test set T that does not detect all the detectable target faults, it is possible to define a significant number of test vector chains that are effective in detecting yet-undetected target faults. It is also possible to find test vector chains that increase the numbers of detections of target faults already detected by T; increasing the number of detections increases the coverage of untargeted faults, that is, faults that were not targeted during the generation of T. The authors study criteria for identifying the most effective test vector chains of a test set, in order to avoid considering all m(m−1) chains for a test set of size m, and describe test generation procedures based on test vector chains.
Articles in this issue:
- Reconfigurable broadcast scan compression using relaxation-based test vector decomposition
- Interconnect and communication synthesis for distributed register-file microarchitecture
- Droop sensitivity of stuck-at fault tests
- Design networks-on-chip with latency/bandwidth guarantees
- Utilisation of inverse compatibility for test cost reductions
- High-level estimation methodology for designing the instruction cache memory of programmable embedded platforms
- Test vector chains for increasing the fault coverage and numbers of detections