IEE Proceedings - Computers and Digital Techniques
Volume 152, Issue 3, May 2005
- Author(s): P. Mishra and N. Dutt
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 3, pp. 285–297 (13 pages)
- DOI: 10.1049/ip-cdt:20045071
- Type: Article
Embedded systems present a tremendous opportunity to customise designs by exploiting application behaviour. Shrinking time-to-market, coupled with short product lifetimes, creates a critical need for rapid exploration and evaluation of candidate architectures. Architecture description languages (ADLs) enable exploration of programmable architectures for a given set of application programs under various design constraints such as area, power and performance. An ADL is used to specify programmable embedded systems, including processor, coprocessor and memory architectures, and the ADL specification is used to generate a variety of software tools and models facilitating exploration and validation of candidate architectures. The paper surveys the existing ADLs in terms of (a) the inherent features of the languages and (b) the methodologies they support to enable simulation, compilation, synthesis, test generation and validation of programmable embedded systems. It concludes with a discussion of the relative merits and demerits of the existing ADLs and the expected features of future ADLs.

- Author(s): D. Sokolov and A. Yakovlev
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 3, pp. 298–316 (19 pages)
- DOI: 10.1049/ip-cdt:20045094
- Type: Article
Future embedded systems and systems-on-chip are going to be more asynchronous than current VLSI circuits, as predicted by the International Technology Roadmap for Semiconductors. The need for CAD tools for systems without global clocking is rapidly growing. To this end, recent research has been active in two main directions, one being globally asynchronous, locally synchronous systems and the other purely asynchronous or self-timed systems. The state of the art in the synthesis of self-timed circuits from high-level behavioural specifications is reviewed, where the two main categories are syntax-driven synthesis and logic-driven synthesis. The primary focus is on the logic-driven approach, where the key role of an intermediate formal model is played by interpretations of Petri nets, such as signal transition graphs. Recent developments in the area of direct mapping and interactive logic synthesis from Petri net specifications are highlighted. A number of logic synthesis tools are compared by means of a simple and widely known example: the greatest common divisor algorithm.

- Author(s): G.G.E. Gielen
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 3, pp. 317–332 (16 pages)
- DOI: 10.1049/ip-cdt:20045116
- Type: Article
The paper gives an overview of the methods and tools needed to design and embed analogue and RF blocks in mixed-signal integrated systems on chip (SoCs). The design of these SoCs is characterised by growing design complexity and shrinking time-to-market constraints. This requires new mixed-signal design methodologies and flows, including high-level architectural exploration and techniques for analogue behavioural modelling. It also calls for new methods to increase analogue design productivity, such as the reuse of analogue blocks and the adoption of analogue and RF circuit and layout synthesis tools. In addition, more detailed modelling and verification tools are needed that can analyse signal integrity and crosstalk problems, especially noise coupling problems caused by embedding analogue circuits in a digital environment. Solutions that already exist today are presented, and challenges that remain to be solved are outlined.

- Author(s): M. Pedram and A. Abdollahi
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 3, pp. 333–343 (11 pages)
- DOI: 10.1049/ip-cdt:20045111
- Type: Article
Power consumption and power-related issues have become a first-order concern for most designs and loom as fundamental barriers for many others. While the primary method used to date for reducing power has been supply voltage reduction, this technique begins to lose its effectiveness as voltages drop below one volt and further reductions in the supply voltage begin to create more problems than they solve. Under these circumstances, the design process and the automation tools required to support it become the critical success factors. In the last decade, a huge effort has been invested in a wide range of design solutions that help solve the power dissipation problem for different types of electronic devices, components and systems. These techniques range from RTL power management and multiple voltage assignment, to power-aware logic synthesis and physical design, to memory and bus interface design. A number of representative low-power design techniques from this large set are explained. More precisely, basic techniques are described that are applicable at RT level and below and have proved to hold good potential for power optimisation in practical design environments.

- Author(s): N.K. Jha
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 3, pp. 344–352 (9 pages)
- DOI: 10.1049/ip-cdt:20045067
- Type: Article
Many scheduling techniques have been presented recently that exploit dynamic voltage scaling (DVS) and dynamic power management (DPM) for both uniprocessors and distributed systems, and for both real-time and non-real-time systems. DVS/DPM techniques have been applied not just to processors but to interconnection networks as well. While such techniques are power-aware and aim at extending battery lifetimes for portable systems, they need to be augmented to make them battery-aware as well. Such power-aware and battery-aware scheduling algorithms are surveyed. Also, system synthesis algorithms for real-time systems-on-a-chip (SoCs), distributed and wireless client-server embedded systems, etc. have begun optimising power consumption in addition to system price. Such algorithms are also surveyed. In many handheld computing devices, the display may consume the largest fraction of system power, so recent work in display-related power optimisation is discussed. Finally, some open problems are pointed out.

- Author(s): A. Agarwal; S. Mukhopadhyay; C.H. Kim; A. Raychowdhury; K. Roy
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 3, pp. 353–368 (16 pages)
- DOI: 10.1049/ip-cdt:20045084
- Type: Article
The high leakage current in the nanometre regime is becoming a significant proportion of power dissipation in CMOS circuits as threshold voltage, channel length and gate oxide thickness are scaled. Consequently, the identification and estimation of the different leakage currents are very important in designing low-power circuits. The paper demonstrates a methodology for accurate estimation of the total leakage in a logic circuit, based on compact modelling of the different leakage currents in nanoscaled bulk CMOS devices. The different leakage currents are modelled from the device geometry, 2-D doping profile and operating temperature. A circuit-level model of subthreshold, junction band-to-band tunnelling (BTBT) and gate leakage is described. The model includes the impact of the quantum mechanical behaviour of substrate electrons on circuit leakage. Using the compact current model, a transistor is modelled as a sum of current sources (SCS). The SCS transistor model has been used to estimate the total leakage in simple logic gates and complex logic circuits (designed with transistors of 25 nm effective length) at room and elevated temperatures.

- Author(s): S. Yoo and A.A. Jerraya
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 3, pp. 369–379 (11 pages)
- DOI: 10.1049/ip-cdt:20045113
- Type: Article
The aim is to explain the current issues of HW/SW cosimulation and to introduce a new challenge: HW/SW cosimulation for multiprocessor SoC (MPSoC). Most of the current issues are related to raising the abstraction levels of HW/SW cosimulation. Mixed-level cosimulation is explained in a unified manner using the concept of a 'HW/SW interface'. First, abstraction levels in HW/SW cosimulation are explained in terms of the abstraction levels of function, SW interface and HW interface. Transaction-level models are introduced for the HW interface; OS and device driver levels are explained for the SW interface. Then the concept, applications and techniques of mixed-level cosimulation are presented. To better understand mixed-level cosimulation through the SoC design flow, a view called the refinement space is presented; using it, cases of mixed-level cosimulation are explained in an SoC design scenario. Then the issue of cosimulation performance when raising abstraction levels, i.e. Amdahl's law in HW/SW cosimulation, is addressed. A new challenge of cosimulation for MPSoC is also introduced.

- Author(s): I.G. Harris
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 3, pp. 380–392 (13 pages)
- DOI: 10.1049/ip-cdt:20045095
- Type: Article
Hardware/software systems are embedded in devices used to enable all manner of tasks in society today. The increasing use of hardware/software systems in cost-critical and life-critical applications has led to the heightened significance of design correctness of these systems. A summary is presented of research in hardware/software covalidation. The general covalidation problem involves the verification of design correctness using simulation-based techniques. The focus is on the test generation process, the fault models and fault coverage analysis techniques, and the test response analysis techniques employed in covalidation. The current state of research in the field is summarised and future areas for research are identified.

- Author(s): R. Drechsler and D. Große
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 3, pp. 393–406 (14 pages)
- DOI: 10.1049/ip-cdt:20045073
- Type: Article
Owing to increasing design complexity and the intensive reuse of components, verifying the correctness of circuits and systems has become more and more important; in many circuit design projects, up to 80% of the overall design cost is now incurred by verification, making the check of correct behaviour the dominating factor. Formal verification has been proposed as a promising alternative to simulation and has become a standard in many flows. In the paper, existing approaches are reviewed and recent trends in system-level verification are outlined. To demonstrate the techniques, SystemC is used as a system-level description language. Besides the successful applications, a list of challenging problems is provided. This gives a better understanding of current problems in hardware verification and shows directions for future research.

- Author(s): T.S. Barnett; M. Grady; K. Purdy; A.D. Singh
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 3, pp. 407–413 (7 pages)
- DOI: 10.1049/ip-cdt:20045056
- Type: Article
An integrated yield–reliability model is verified using burn-in data from 77 000 microprocessor units manufactured by IBM Microelectronics. The model is based on the fact that defects over semiconductor wafers are not randomly distributed but have a tendency to cluster. It is shown that this fact can be exploited to produce dies of varying reliability by sorting dies into bins based on how many of their neighbours test faulty. Dies that test as good at the wafer probe, yet come from regions with many faulty dies, have a higher incidence of infant mortality failure than dies from regions with few faulty dies. The yield–reliability model is used to predict the fraction of good dies in each bin following a wafer probe as well as the fraction of failures in each bin following stress testing (e.g. burn-in). Results show excellent agreement between model predictions and observed data.

- Author(s): P.M. Levine and G.W. Roberts
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 3, pp. 415–426 (12 pages)
- DOI: 10.1049/ip-cdt:20045063
- Type: Article
Verification of timing performance in systems-on-chip (SoCs) is becoming more difficult as clock frequencies and levels of integration increase. As a result, on-chip timing measurement has become a very attractive alternative for validation of these systems because it helps to overcome the bandwidth and test access limitations inherent in SoC environments. Flash time-to-digital converters (TDCs) are well suited for use in on-chip timing measurement systems because they can be operated at high speeds, offer low test time and are relatively easy to integrate. However, clock jitter in modern SoCs is often of the same order of magnitude as the temporal resolution of the TDC itself. Therefore, techniques are required to increase TDC resolution while ensuring timing accuracy. A high-resolution flash TDC is presented that exploits the random offsets of flip-flops or arbiters to perform time quantisation. Also described is a novel technique based on additive temporal noise to accurately calibrate this measurement device. Simulation and experimental results reveal that the latter method can calibrate the high-resolution flash TDC down to 5 ps within reasonable error limits. In addition, accurate timing measurement of jitter below 10 ps has been experimentally validated using a high-resolution flash TDC fabricated in a 0.18-µm CMOS process.

- Author(s): K. Chakrabarty
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 3, pp. 427–441 (15 pages)
- DOI: 10.1049/ip-cdt:20045068
- Type: Article
The popularity of system-on-chip (SOC) integrated circuits has led to an unprecedented increase in test costs. This increase can be attributed to the difficulty of test access to embedded cores, long test development and test application times, and high test data volumes. A survey is presented of test resource partitioning techniques that facilitate low-cost SOC testing. Topics discussed include techniques for modular testing of digital, mixed-signal and hierarchical SOCs, as well as test data compression methods for intellectual property cores. Together, these techniques offer SOC integrators the necessary means to manage test complexity and reduce test costs.

- Author(s): S.K. Goel and E.J. Marinissen
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 152, Issue 3, pp. 442–456 (15 pages)
- DOI: 10.1049/ip-cdt:20050046
- Type: Article
Multi-site testing is a popular and effective way to increase test throughput and reduce test costs. The authors propose a test flow with large multi-site testing during wafer test, enabled by a narrow SOC-ATE test interface, and relatively small multi-site testing during final (packaged-IC) test, in which all SOC pins need to be contacted. They present a throughput model for multi-site testing, valid for both wafer test and final test, which considers the effects of test time, index time, abort-on-fail and re-test after contact fails. Conventional multi-site testing requires sufficient ATE channels to allow testing of multiple SOCs in parallel. Instead, the authors assume a given, fixed ATE and, for a given SOC, design and optimise the on-chip design-for-test infrastructure to maximise throughput during wafer test. The on-chip DfT consists of an E-RPCT wrapper and, for modularly tested SOCs, module wrappers and TAMs. Subsequently, for the designed test infrastructure, they also maximise the test throughput for final test by tuning its multi-site number. Finally, they present experimental results for the ITC'02 SOC Test Benchmarks and a complex Philips SOC.
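The throughput trade-off that this abstract describes can be illustrated with a toy calculation. The sketch below is not the authors' model from the paper; it is a minimal simplification assuming a simple touchdown structure (per-touchdown test time, handler/prober index time, and an optional abort-on-fail shortcut), with all function and parameter names invented for illustration.

```python
def multisite_throughput(n_sites, t_test, t_index, yield_frac=1.0, t_abort=None):
    """Illustrative multi-site test throughput in devices per second.

    n_sites    -- number of SOCs contacted and tested in parallel per touchdown
    t_test     -- time (s) to run the full test on one touchdown
    t_index    -- handler/prober index time (s) between touchdowns
    yield_frac -- fraction of dies that pass the full test
    t_abort    -- with abort-on-fail, average time (s) at which a failing
                  die stops testing; None disables abort-on-fail
    """
    if t_abort is None:
        # No abort-on-fail: every touchdown runs the full test.
        t_touchdown = t_test
    else:
        # Crude simplification: yield-weighted mean of full and aborted test
        # times (a real multi-site touchdown ends when its slowest site ends).
        t_touchdown = yield_frac * t_test + (1.0 - yield_frac) * t_abort
    return n_sites / (t_touchdown + t_index)
```

In this toy model, doubling the multi-site number doubles throughput at fixed test and index times; the point of the paper is that on a fixed ATE a larger multi-site number forces a narrower per-SOC test interface, which lengthens the test time, so the DfT infrastructure and multi-site number must be co-optimised.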
Articles in this issue:
- Architecture description languages for programmable embedded systems
- Clockless circuits and system synthesis
- CAD tools for embedded analogue circuits in mixed-signal integrated systems on chip
- Low-power RT-level synthesis techniques: a tutorial
- Low-power system scheduling, synthesis and displays
- Leakage power analysis and reduction: models, estimation and tools
- Hardware/software cosimulation from interface perspective
- Hardware/software covalidation
- System level validation using formal techniques
- Exploiting defect clustering for yield and reliability prediction
- High-resolution flash time-to-digital conversion and calibration for system-on-chip testing
- Low-cost modular testing and test resource partitioning for SOCs
- Optimisation of on-chip design-for-test infrastructure for maximal multi-site test throughput