IEE Proceedings - Computers and Digital Techniques
Volume 152, Issue 1, January 2005
Editorial: DATE04
- Author(s): G. Gielen
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 1, p. 1–2
- DOI: 10.1049/ip-cdt:20059006
- Type: Article
System level processor/communication co-exploration methodology for multiprocessor system-on-chip platforms
- Author(s): A. Wieferink ; M. Doerper ; R. Leupers ; G. Ascheid ; H. Meyr ; T. Kogel ; G. Braun ; A. Nohl
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 1, p. 3–11
- DOI: 10.1049/ip-cdt:20045058
- Type: Article

Current and future system-on-chip (SoC) designs will contain an increasing number of heterogeneous programmable units combined with a complex communication architecture to meet flexibility, performance and cost constraints. Such a heterogeneous multiprocessor SoC architecture has enormous potential for optimisation, but requires a system-level design environment and methodology to evaluate architectural alternatives. A methodology is proposed to jointly design and optimise the processor architecture together with the on-chip communication, based on the LISA processor design platform in combination with SystemC transaction-level models. The proposed methodology advocates a successive refinement flow of the system-level models of both the processor cores and the communication architecture. This allows design decisions to be based on the best modelling efficiency, accuracy and simulation performance possible at the respective abstraction level. The effectiveness of the approach is demonstrated by the exemplary design of a dual-processor JPEG decoding system.
Architecture-level performance estimation method based on system-level profiling
- Author(s): K. Ueda ; K. Sakanushi ; Y. Takeuchi ; M. Imai
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 1, p. 12–19
- DOI: 10.1049/ip-cdt:20045057
- Type: Article

An architecture-level performance estimation method based on system-level profiling is proposed. The method estimates the performance of the target architecture by the following procedure: system-level profiling; automatic construction of the execution order graph and execution dependency graph from the profiling information; and estimation of the system performance based on analysis of these graphs. The method enables fast performance estimation because it can evaluate various architectures from the same system-level profiling information. Experimental results show that the proposed estimation method is 2700 times faster than architecture-level simulation.
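The abstract gives no implementation details; purely as an illustration of how graph-based estimation of this kind can work, the sketch below assumes a hypothetical dependency graph whose nodes carry per-task cycle counts for a candidate architecture and takes the estimated latency as the critical path through that graph. Only the cycle counts change when a different architecture is evaluated, which is what makes re-estimation cheap.

```python
# Illustrative sketch only (not the authors' tool): estimate system latency as the
# critical path through a task dependency graph extracted from profiling.
from collections import defaultdict

def estimate_latency(cycles, edges):
    """cycles: {task: cycle count on the candidate architecture}
    edges: list of (pred, succ) execution dependencies."""
    succs = defaultdict(list)
    indeg = defaultdict(int)
    for u, v in edges:
        succs[u].append(v)
        indeg[v] += 1
    ready = [t for t in cycles if indeg[t] == 0]   # Kahn topological order
    finish = {}
    while ready:
        t = ready.pop()
        start = max((finish[p] for p, s in edges if s == t), default=0)
        finish[t] = start + cycles[t]
        for v in succs[t]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return max(finish.values())

# Re-estimating a different architecture only changes the per-task cycle counts,
# not the profiling-derived graph.
print(estimate_latency({"dct": 400, "quant": 120, "huff": 300},
                       [("dct", "quant"), ("quant", "huff")]))
```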
Synchronous protocol automata: a framework for modelling and verification of SoC communication architectures
- Author(s): V. D'silva ; S. Ramesh ; A. Sowmya
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 1, p. 20–27
- DOI: 10.1049/ip-cdt:20045097
- Type: Article

Plug-and-play-style intellectual property reuse in system-on-chip design is facilitated by the use of an on-chip bus architecture. Component integration and verification in such systems is a cumbersome and time-consuming process largely concerned with interfacing issues. A synchronous, finite-state-machine framework for modelling the communication aspects of such architectures is presented. The framework has been developed through interaction with designers and industry, and is intuitive and lightweight. The development includes cycle-accurate methods for protocol specification, compatibility verification, interface synthesis and model checking with automated specification. Case studies include the AMBA family of protocols and a proprietary industrial bus protocol. These modelling exercises show that such models enable reasoning about, and comparison of, different bus architectures to gain valuable design insights. The utility of the framework is demonstrated by modelling the AMBA bus architecture, including details such as pipelined operation, burst transfers, the AHB-APB bridge and arbitration features.
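As a rough illustration of the kind of compatibility check such a framework automates (this is not the paper's synchronous protocol automata formalism, and the two tiny interface machines are invented), the sketch below explores the synchronous product of two FSMs over shared transition labels and flags product states where no joint step is possible, i.e. a potential protocol mismatch.

```python
# Illustrative sketch: explore the synchronous product of two interface FSMs and
# report deadlocked product states (no common enabled label).
def product_deadlocks(fsm_a, fsm_b, start_a, start_b):
    """fsm_x: {state: {label: next_state}}; a joint step needs a common label."""
    seen, stack, deadlocks = set(), [(start_a, start_b)], []
    while stack:
        a, b = stack.pop()
        if (a, b) in seen:
            continue
        seen.add((a, b))
        common = set(fsm_a[a]) & set(fsm_b[b])
        if not common:
            deadlocks.append((a, b))
        for lab in common:
            stack.append((fsm_a[a][lab], fsm_b[b][lab]))
    return deadlocks

# A master that insists on a burst after the address phase, talking to a slave
# that only accepts single transfers: the product deadlocks after 'addr'.
master = {"idle": {"addr": "burst"}, "burst": {"burst_data": "idle"}}
slave  = {"idle": {"addr": "resp"}, "resp": {"single_data": "idle"}}
print(product_deadlocks(master, slave, "idle", "idle"))   # [('burst', 'resp')]
```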
Overhead-conscious voltage selection for dynamic and leakage energy reduction of time-constrained systems
- Author(s): A. Andrei ; M. Schmitz ; P. Eles ; Z. Peng ; B.M. Al-Hashimi
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 1, p. 28–38
- DOI: 10.1049/ip-cdt:20045055
- Type: Article

Dynamic voltage scaling and adaptive body biasing have been shown to reduce dynamic and leakage power consumption effectively. The authors report an optimal solution to the combined supply voltage and body bias selection problem for multiprocessor systems with imposed time constraints, explicitly taking into account the transition overheads implied by changing voltage levels and considering both energy and time overheads. They investigate continuous voltage scaling as well as its discrete counterpart, and prove that the discrete problem is strongly NP-hard. The continuous voltage scaling problem is formulated and solved using nonlinear programming with polynomial time complexity, while the discrete problem is tackled with mixed integer linear programming. Extensive experiments, conducted on several benchmarks and a real-life example, are used to validate the approaches.
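For orientation only, a much-simplified continuous formulation of this kind of problem is sketched below, assuming the standard alpha-power delay model and dynamic energy only; the paper's actual formulation additionally optimises the body-bias voltages (which shift the threshold and set the leakage term) and charges the energy and time cost of every voltage transition.

```latex
% Simplified sketch, not the paper's exact model: choose a supply voltage per task
% to minimise dynamic energy subject to a deadline on the total execution time.
\begin{align*}
\min_{V_{dd,k}} \quad & \sum_{k} N_k \, C_{\mathrm{eff},k} \, V_{dd,k}^{2} \\
\text{s.t.} \quad & \sum_{k} N_k \, \frac{K \, V_{dd,k}}{\left(V_{dd,k}-V_{th}\right)^{\alpha}} \le T_{\mathrm{deadline}}, \\
& V_{\min} \le V_{dd,k} \le V_{\max},
\end{align*}
```

Here N_k is the cycle count of task k and C_eff,k its effective switched capacitance; in the discrete variant each V_dd,k must be picked from a small set of available voltage levels, which is the source of the NP-hardness.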
Hierarchical multi-dimensional table lookup for model-compiler-based circuit simulation
- Author(s): B. Wan and C.-J.R. Shi
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 1, p. 39–44
- DOI: 10.1049/ip-cdt:20045062
- Type: Article

A systematic method is presented for automatically generating hierarchical multi-dimensional table lookup models for compact device and behavioural models with any number of terminals. The method is based on an abstract syntax tree representation of the analytic equations. The expensive parts of the computations represented by the abstract syntax trees are identified and replaced by two-dimensional table lookup models. An error-control-based optimisation algorithm is developed to generate table lookup models with the minimum amount of table data for a given accuracy requirement. The method has been implemented in the model compiler MCAST and the circuit simulator SPICE3. Experimental results show that, compared with non-optimised compilation-based simulation, simulation using the proposed table lookup optimisation is about 40 times faster and achieves sufficiently accurate results, with an error of less than 1–2%.
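As a crude stand-in for the error-controlled table generation described above (not MCAST itself; the function names and the sample device equation are invented), the sketch below tabulates an expensive two-dimensional function on a uniform grid, answers queries by bilinear interpolation, and doubles the grid resolution until the error probed along the domain diagonal meets a tolerance.

```python
# Illustrative sketch: replace an expensive 2-D function with a bilinearly
# interpolated lookup table sized by a simple accuracy loop.
import math

def expensive(v1, v2):                       # hypothetical device equation
    return math.exp(v1) * math.tanh(v2)

def build_table(f, lo, hi, tol=1e-3, n0=8, probes=200):
    n = n0
    while True:
        xs = [lo[0] + i * (hi[0] - lo[0]) / n for i in range(n + 1)]
        ys = [lo[1] + j * (hi[1] - lo[1]) / n for j in range(n + 1)]
        tab = [[f(x, y) for y in ys] for x in xs]

        def lookup(v1, v2):
            i = min(int((v1 - lo[0]) / (hi[0] - lo[0]) * n), n - 1)
            j = min(int((v2 - lo[1]) / (hi[1] - lo[1]) * n), n - 1)
            tx = (v1 - xs[i]) / (xs[i + 1] - xs[i])
            ty = (v2 - ys[j]) / (ys[j + 1] - ys[j])
            return ((1 - tx) * (1 - ty) * tab[i][j] + tx * (1 - ty) * tab[i + 1][j]
                    + (1 - tx) * ty * tab[i][j + 1] + tx * ty * tab[i + 1][j + 1])

        # Error probed along the domain diagonal, for brevity.
        err = max(abs(lookup(lo[0] + k / probes * (hi[0] - lo[0]),
                             lo[1] + k / probes * (hi[1] - lo[1])) -
                      f(lo[0] + k / probes * (hi[0] - lo[0]),
                        lo[1] + k / probes * (hi[1] - lo[1])))
                  for k in range(probes + 1))
        if err <= tol:
            return lookup, n
        n *= 2

lut, grid = build_table(expensive, (0.0, 0.0), (1.0, 1.0))
print(grid, lut(0.37, 0.81), expensive(0.37, 0.81))
```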
Phase–frequency transfer model of analogue and mixed-signal front-end architectures for system-level design
- Author(s): E.S.J. Martens and G.G.E. Gielen
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 1, p. 45–52
- DOI: 10.1049/ip-cdt:20045066
- Type: Article

A framework for modelling the behaviour of front-end architectures is presented. To make the model useful for systematic architectural exploration during front-end system design, a wide range of architectures can be represented. Compared with other models, it emphasises the information flow throughout the architecture and also enables incremental modelling to represent the system at lower levels of abstraction. All signals in the architecture are represented as polyphase harmonic signals, which are transformed by the building blocks of the architecture in the phase–frequency space. The linear transformation behaviour of a block is modelled by multiplication of a polyphase harmonic transfer matrix with the input signal. It is shown that this representation for a polyphase mixer can easily be derived from the harmonic transfer matrices of the individual mixers. Extensions to weakly nonlinear behaviour are realised by adding distortion tensors to the model, which take both intermodulation and harmonic distortion into account. The nonlinear mapping operation is represented by repeated calculation of inner products of the distortion tensor and the input signal. As an example, the model has been applied to a downconversion architecture. It is shown that the non-ideal behaviour of the architecture results from parasitic transfers between phases and frequencies within the phase–frequency space. This allows clear identification of the causes of performance degradation and suggests ways to improve the performance.
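The linear and weakly nonlinear mappings described above have a direct numerical reading. The sketch below, using arbitrary placeholder values rather than any real front-end block, only shows the shapes involved: a flattened polyphase harmonic signal vector, a harmonic transfer matrix applied by matrix multiplication, and a third-order distortion tensor contracted repeatedly with the input.

```python
# Illustrative sketch under assumed shapes (not the authors' framework).
import numpy as np

rng = np.random.default_rng(0)
n = 4 * 3                      # e.g. 4 phases and harmonics -1, 0, +1, flattened
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # polyphase harmonic signal
H = rng.standard_normal((n, n)) * 0.1                       # harmonic transfer matrix
D3 = rng.standard_normal((n, n, n, n)) * 1e-3               # third-order distortion tensor

y_linear = H @ x                                             # linear transfer
y_nonlinear = np.einsum('ijkl,j,k,l->i', D3, x, x, x)        # repeated inner products
y = y_linear + y_nonlinear
print(np.round(np.abs(y), 3))
```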
Noisy signal based background technique for gain error correction in pipeline ADCs
- Author(s): A.J. Ginés ; E.J. Peralías ; A. Rueda
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 1, p. 53–63
- DOI: 10.1049/ip-cdt:20045060
- Type: Article

The paper presents a new digital technique for background calibration of gain errors in pipeline ADCs. The proposed algorithm estimates and corrects both the MDAC gain error of the stage under calibration (SUC) and the global gain error associated with the least significant stages. This process is performed without interrupting the conversion and without reducing the dynamic range. It uses a stage with two input–output characteristics, selected by the value of a digital pseudorandom noisy signal, to modulate the output residue of the SUC, and estimates the calibration code by an adaptive averaging process. The method introduces no significant modifications to the analogue blocks of the pipeline ADC, making the technique a very promising alternative for background calibration of the nonlinearity associated with gain errors. Simulation results demonstrate the stability of the algorithm and its ability to track fast gain error changes, considering second-order effects in both the sub-ADC of the SUC and the back-end stages.
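As a toy illustration of the general correlation idea behind pseudorandom-dither background calibration (this is not the authors' algorithm, which works on the two input–output characteristics of the MDAC residue), the snippet below adds a ±1 pseudorandom sequence to a stage output and recovers the stage gain error by correlating the back-end output with the same sequence; the conversion signal averages out because it is uncorrelated with the dither, which is what allows the estimation to run in the background.

```python
# Toy sketch of correlation-based background gain estimation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N = 4_000_000                                  # long averaging window
true_gain_error = 0.03                         # unknown stage gain error
a = 0.1                                        # dither amplitude
pn = rng.choice((-1.0, 1.0), size=N)           # pseudorandom binary sequence
signal = rng.uniform(-1.0, 1.0, size=N)        # conversion signal, uncorrelated with pn
backend = (1.0 + true_gain_error) * (signal + a * pn)

# Correlation with pn isolates the dither path; the signal term averages to zero,
# so accuracy improves roughly as 1/sqrt(N).
est_gain = np.mean(backend * pn) / a - 1.0
print(round(est_gain, 4), true_gain_error)
```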
Editorial: Emerging strategies for resource-constrained testing of system chips
- Author(s): Z. Peng ; E. Larsson ; P. Eles
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 1, p. 65–66
- DOI: 10.1049/ip-cdt:20059025
- Type: Article
Resource-constrained system-on-a-chip test: a survey
- Author(s): Q. Xu and N. Nicolici
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 1, p. 67–81
- DOI: 10.1049/ip-cdt:20045019
- Type: Article

Manufacturing test is a key step in the implementation flow of modern integrated electronic products. It certifies product quality, accelerates yield learning and influences the final cost of the device. With the ongoing shift towards the core-based system-on-a-chip (SOC) design paradigm, unique test challenges, such as test access and test reuse, are confronted. In addition, when addressing these new challenges, SOC designers must use the resources at hand judiciously while keeping the testing time and the volume of test data under control. Consequently, numerous test strategies and algorithms in test architecture design and optimisation, test scheduling and test resource partitioning have emerged to tackle resource-constrained core-based SOC test. This paper presents a survey of recent advances in this field.
Unified SOC test approach based on test data compression and TAM design
- Author(s): V. Iyengar and A. Chandra
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 1, p. 82–88
- DOI: 10.1049/ip-cdt:20045030
- Type: Article

Test access mechanism (TAM) optimisation and test data compression both reduce test data volume and testing time for SOCs. In this paper, we integrate the two approaches, for the first time, into a single test methodology. We show how an integrated test architecture based on TAMs and test data decoders can be designed. The proposed approach offers considerable savings in test resource requirements. Two case studies using the integrated test architecture are presented. Experimental results on test data volume reduction, savings in test application time and the low test pin overhead for a benchmark SOC demonstrate the effectiveness of this approach.
Compression considerations in test access mechanism design
- Author(s): P.T. Gonciari ; P. Rosinger ; B.M. Al-Hashimi
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 1, p. 89–96
- DOI: 10.1049/ip-cdt:20045043
- Type: Article

A low-cost test solution for core-based system-on-a-chip (SoC) comprises test access mechanism (TAM) design, for facilitating access to the embedded cores, and the use of test data compression (TDC) methods, for reducing test resources. While most previous work has considered TAM design and TDC independently, this work analyses the interrelations between the two, showing that unless compression characteristics are integrated into the TAM design, test resource penalties may be incurred. This is because some TDC methods depend on test bus width and care-bit density, both of which are related to test time, and hence to TAM design. The paper therefore analyses the interactions between TDC and TAM design, and highlights the compression characteristics that need to be considered in compression-driven TAM solutions for reducing test resource penalties. It also shows how an existing TAM design method can be enhanced towards a compression-driven solution.
Redundancy modelling and array yield analysis for repairable embedded memories
- Author(s): A. Sehgal ; A. Dubey ; E.J. Marinissen ; C. Wouters ; H. Vranken ; K. Chakrabarty
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 1, p. 97–106
- DOI: 10.1049/ip-cdt:20045018
- Type: Article

Embedded memories currently occupy more than 50% of the chip area of typical SOC integrated circuits. Defects in memory arrays can therefore significantly degrade manufacturing yield. In this setting, repairable embedded memories are desirable because they improve the memory array yield of an IC. We have developed an array yield analysis tool that provides realistic yield estimates both for single repairable memories and for ICs containing multiple, possibly different, repairable embedded memories. Our approach uses pseudo-random fault bit-maps, generated on the basis of memory area, defect density and fault distribution. To accommodate a wide range of industrial memory and redundancy organisations, we have developed a flexible memory model. It generalises the traditional simple memory matrix model with partitioning into regions, grouping of columns and rows, and column-wise and row-wise coupling of the spares. Our tool is used to determine the optimal number of spare columns and rows for a given memory, as well as to assess the effectiveness of various repair algorithms.
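Purely as an illustration of the Monte Carlo idea behind such yield estimation (the actual tool models regions, grouping and spare coupling that this sketch ignores, and all parameter values here are invented), the snippet below scatters Poisson-distributed single-cell faults over a simple memory matrix and reports the fraction of fault maps repairable with a given number of spare rows and columns.

```python
# Monte Carlo yield sketch (illustrative only, not the authors' tool).
import math
import random
from itertools import combinations

def repairable(faults, spare_rows, spare_cols):
    """faults: set of (row, col). Try every choice of rows to replace; the columns
    of the remaining faults must fit in the spare columns."""
    rows = {r for r, _ in faults}
    for k in range(min(spare_rows, len(rows)) + 1):
        for repl in combinations(rows, k):
            rest_cols = {c for r, c in faults if r not in repl}
            if len(rest_cols) <= spare_cols:
                return True
    return False

def _poisson(lam, rng):                      # Knuth's method, fine for small lam
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def estimate_yield(n_rows=512, n_cols=512, faults_per_mm2=0.4, area_mm2=2.0,
                   spare_rows=2, spare_cols=2, trials=5000, seed=0):
    rng = random.Random(seed)
    lam = faults_per_mm2 * area_mm2          # expected faults per memory instance
    good = 0
    for _ in range(trials):
        n_faults = _poisson(lam, rng)
        faults = {(rng.randrange(n_rows), rng.randrange(n_cols))
                  for _ in range(n_faults)}
        good += repairable(faults, spare_rows, spare_cols)
    return good / trials

print(estimate_yield())
```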
Testable synthesis of synchronous sequential circuits considering strong-connectivity using undefined states
- Author(s): S.-H. Kim ; H.-Y. Choi ; K. Kim
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 1, p. 107–112
- DOI: 10.1049/ip-cdt:20045027
- Type: Article

The use of undefined states on a state transition graph (STG) is addressed as a means of obtaining high fault coverage in the synthesis for testability (SFT) of synchronous sequential circuits. A given STG is modified by adding undefined states and distinguishable transitions so that, as far as possible, every state is included in one strongly connected component. Such modification reduces the number of redundant faults, because redundant faults caused by the existence of unreachable states on an STG may be eliminated. Two modification algorithms are proposed, one for incompletely specified STGs and one for completely specified STGs. For incompletely specified STGs, undefined states are added using the unspecified transitions of defined states. For completely specified STGs, undefined states are added by changing transitions specified on the STG while preserving state equivalence. Experimental results with MCNC benchmarks show that the number of redundant faults in gate-level circuits synthesised from the modified STGs is reduced, resulting in high fault coverage as well as short test generation time.
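The property the modification targets, every state lying in one strongly connected component, is easy to test on a small STG. The sketch below (not the paper's algorithm; the tiny STG is invented) checks the property by verifying that an arbitrary state can reach, and be reached from, every other state, and shows how adding a single transition can pull an unreachable state into the component.

```python
# Illustrative check: an STG is one strongly connected component iff every state
# is reachable from some state s and can also reach s.
def single_scc(stg):
    """stg: {state: iterable of successor states}."""
    states = set(stg) | {v for vs in stg.values() for v in vs}

    def reach(start, succ):
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in succ(u):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    s = next(iter(states))
    fwd = reach(s, lambda u: stg.get(u, ()))                                  # forward edges
    rev = reach(s, lambda u: (w for w in states if u in set(stg.get(w, ())))) # reversed edges
    return fwd == states and rev == states

# A graph with an unreachable state u1; adding a transition into it repairs this.
stg = {"s0": ["s1"], "s1": ["s0"], "u1": ["s0"]}
print(single_scc(stg))          # False: u1 cannot be reached
stg["s1"].append("u1")
print(single_scc(stg))          # True after the added transition
```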