IET Computers & Digital Techniques
Volume 9, Issue 4, July 2015
Editorial
- Author(s): Sani R. Nassif and Martin A. Trefzer
- Source: IET Computers & Digital Techniques, Volume 9, Issue 4, pp. 185–186
- DOI: 10.1049/iet-cdt.2015.0036
- Type: Article
Mastering CMOS variability is the key to success
- Author(s): Asen Asenov
- Source: IET Computers & Digital Techniques, Volume 9, Issue 4, p. 187
- DOI: 10.1049/iet-cdt.2015.0019
- Type: Article
Defect avoidance in programmable devices
- Author(s): Steve Trimberger
- Source: IET Computers & Digital Techniques, Volume 9, Issue 4, pp. 188–189
- DOI: 10.1049/iet-cdt.2014.0155
- Type: Article
Programmable logic devices permit a new way to practice yield improvement: redundancy at configuration time. By doing so, the authors avoid the overheads of traditional redundancy: explicit spares, replacement logic and on-chip non-volatile memory. This presentation describes a method for avoiding defects that also does not require a unique place-and-route for each fielded chip. Formal analysis and experimental results show the feasibility of the method for standard, unmodified field-programmable gate arrays.
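The configuration-time redundancy idea above can be sketched in a few lines (a toy model with hypothetical resource names, not Trimberger's actual method): precompute several alternative place-and-route results for the same design, then at configuration time load any bitstream whose resource usage avoids the chip's defect map.

```python
def pick_configuration(configs, defects):
    """Return the first precomputed configuration that uses no
    defective resource, or None if every one is blocked.

    configs: dict mapping configuration name -> set of resources it uses
    defects: set of resources found defective on this particular chip
    """
    for name, used in configs.items():
        if not (used & defects):   # no overlap with the defect map
            return name
    return None

# Three alternative place-and-route results for the same design,
# each deliberately routed through a different region of the fabric.
configs = {
    "cfg_A": {"lut3", "lut7", "wire12"},
    "cfg_B": {"lut4", "lut8", "wire13"},
    "cfg_C": {"lut5", "lut9", "wire14"},
}

# This chip has a broken wire12: cfg_A is unusable, cfg_B works.
chosen = pick_configuration(configs, defects={"wire12"})
```

Because the alternative bitstreams are prepared once, no per-chip place-and-route is needed in the field, which is the overhead the paper is avoiding.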
Fighting stochastic variability in a D-type flip-flop with transistor-level reconfiguration
- Author(s): Martin A. Trefzer ; James A. Walker ; Simon J. Bale ; Andy M. Tyrrell
- Source: IET Computers & Digital Techniques, Volume 9, Issue 4, pp. 190–196
- DOI: 10.1049/iet-cdt.2014.0146
- Type: Article
In this study, the authors present a design optimisation case study of D-type flip-flop timing characteristics that are degraded as a result of intrinsic stochastic variability in a 25 nm technology process. What makes this work unique is that the design is mapped onto a multi-reconfigurable architecture which, like a field-programmable gate array (FPGA), is configurable at the gate level, but can then be optimised using transistor-level configuration options additionally built into the architecture. While a hardware VLSI prototype of this architecture is currently being fabricated, the results presented here are obtained from a virtual prototype implemented in SPICE using statistically enhanced 25 nm high-performance metal-gate MOSFET compact models from Gold Standard Simulations for pre-fabrication verification. A D-type flip-flop is chosen as a benchmark, and it is shown that timing characteristics degraded by stochastic variability can be recovered and improved. This study highlights the significant potential of the programmable analogue and digital array architecture to serve as a next-generation FPGA architecture that can recover yield through post-fabrication transistor-level optimisation, in addition to adjusting the operating point of mapped designs.
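The recovery mechanism described above can be illustrated with a deliberately crude first-order model (the delay formula and all constants below are invented for illustration; the paper uses SPICE with statistically enhanced compact models): sample threshold-voltage variation, then pick a per-device configuration step, akin to a body-bias knob, that pulls each device's delay back towards nominal.

```python
import random

NOMINAL_VTH = 0.3      # volts (illustrative numbers, not the 25 nm models)
VDD = 1.0
K = 1.0                # arbitrary delay constant

def delay(vth):
    """First-order toy delay model: delay grows as Vth approaches Vdd."""
    return K / (VDD - vth)

def best_knob(vth, knob_steps=(-0.04, -0.02, 0.0, 0.02, 0.04)):
    """Pick the body-bias-like configuration step that brings the
    device's delay closest to the nominal-device delay."""
    target = delay(NOMINAL_VTH)
    return min(knob_steps, key=lambda s: abs(delay(vth + s) - target))

random.seed(1)
samples = [random.gauss(NOMINAL_VTH, 0.02) for _ in range(200)]

spread_before = max(map(delay, samples)) - min(map(delay, samples))
tuned = [delay(v + best_knob(v)) for v in samples]
spread_after = max(tuned) - min(tuned)
```

Even this toy version shows the point of the architecture: a small set of discrete post-fabrication knobs narrows the delay spread caused by random Vth variation without re-designing the circuit.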
Algebra of switching networks
- Author(s): Andrey Mokhov
- Source: IET Computers & Digital Techniques, Volume 9, Issue 4, pp. 197–205
- DOI: 10.1049/iet-cdt.2014.0135
- Type: Article
A switch, mechanical or electrical, is a fundamental building element of digital systems. The theory of switching networks, or simply circuits, dates back to Shannon's thesis (1937), where he employed Boolean algebra for reasoning about the functionality of switching networks, and graph theory for describing and manipulating their structure. Following this classic approach, one can deduce functionality from a given structure via analysis, and create a structure implementing a specified functionality via synthesis. The use of two mathematical languages leads to a 'language barrier' – whenever a circuit description is changed in one language, it is necessary to translate the change into the other one to keep both descriptions synchronised. This work presents a unified algebra of switching networks. Its elements are circuits rather than just Boolean functions (as in Boolean algebra) or vertices/edges (as in graph theory). This approach allows one to express both the functionality and structure of switching networks in the same mathematical language and brings new methods of circuit composition for greater reuse of components and interfaces. In this paper we demonstrate how to use the algebra to formally transform circuits, reason about their properties, and even solve equations whose 'unknowns' are circuits.
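The 'one language for structure and function' idea can be sketched as follows (an illustrative embedding, not Mokhov's actual algebra): represent a two-terminal network as a value carrying both a structural expression and its transmission function, with series and parallel composition defined once for both.

```python
from itertools import product

class Net:
    """A two-terminal switching network built from labelled switches.
    'expr' records structure; 'f' evaluates transmission given an
    assignment of Boolean values to switch labels."""
    def __init__(self, expr, f, labels):
        self.expr, self.f, self.labels = expr, f, labels

def switch(x):
    return Net(x, lambda env: env[x], {x})

def series(a, b):   # conducts only if both halves conduct
    return Net(f"({a.expr} ; {b.expr})",
               lambda env: a.f(env) and b.f(env), a.labels | b.labels)

def parallel(a, b): # conducts if either branch conducts
    return Net(f"({a.expr} | {b.expr})",
               lambda env: a.f(env) or b.f(env), a.labels | b.labels)

def truth_table(net):
    labels = sorted(net.labels)
    return {bits: net.f(dict(zip(labels, bits)))
            for bits in product([False, True], repeat=len(labels))}

# Structure and function stay in one description:
# a series pair in parallel with a single bypass switch.
n = parallel(series(switch("a"), switch("b")), switch("c"))
```

A structural rewrite (say, swapping the series pair) changes `expr` and `f` together, so the two descriptions can never fall out of sync, which is the 'language barrier' the paper dissolves.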
Extending standard cell library for aging mitigation
- Author(s): Saman Kiamehr ; Mojtaba Ebrahimi ; Farshad Firouzi ; Mehdi B. Tahoori
- Source: IET Computers & Digital Techniques, Volume 9, Issue 4, pp. 206–212
- DOI: 10.1049/iet-cdt.2014.0142
- Type: Article
Transistor aging, mostly due to bias temperature instability (BTI), is one of the major sources of unreliability at nano-scale technology nodes. BTI causes circuit delay to increase and eventually shortens circuit lifetime. Typically, standard cells in the library are optimised for their design-time delay; however, because of the asymmetric effect of BTI, the rise and fall delays may become significantly imbalanced over the lifetime. In this study, the BTI effect is mitigated by balancing the rise and fall delays of the standard cells at the expected lifetime. The authors find an optimal trade-off between the increase in library size and the lifetime improvement by non-uniformly extending the library cells for various ranges of input signal probabilities. Simulation results reveal that this technique can prolong circuit lifetime by around 150% with negligible area overhead. Moreover, the effect of different realistic workloads on the distribution of internal node signal probabilities is investigated, in order to obtain the sensitivity of the proposed static (design-time) approach to workload changes during the system lifetime. The results show that the proposed approach remains efficient if the workload changes at runtime.
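The rise/fall balancing step can be sketched with a toy aging model (the delay and BTI formulas below are invented for illustration, not the paper's characterisation): for a given input signal probability and expected lifetime, choose the P/N sizing variant whose slower edge is fastest after aging.

```python
def aged_delays(p_width, n_width, sp, years):
    """Toy rise/fall delay model (illustrative, not a BTI compact model).
    NBTI stress grows with the probability 'sp' that the PMOS is on,
    degrading the rise delay; the fall delay is left unaged here."""
    nbti_shift = 0.25 * sp * (years ** 0.16)   # fractional slowdown
    rise = (1.0 / p_width) * (1.0 + nbti_shift)
    fall = 1.0 / n_width
    return rise, fall

def pick_variant(variants, sp, lifetime_years):
    """Choose the P/N sizing variant whose worse edge is fastest
    at the expected lifetime (rise/fall balancing)."""
    def worst(v):
        r, f = aged_delays(v[0], v[1], sp, lifetime_years)
        return max(r, f)
    return min(variants, key=worst)

# Library variants: (PMOS width, NMOS width), total width kept constant.
variants = [(1.0, 1.0), (1.1, 0.9), (1.2, 0.8)]

balanced_fresh = pick_variant(variants, sp=0.5, lifetime_years=0)
balanced_aged  = pick_variant(variants, sp=0.9, lifetime_years=10)
```

The fresh device prefers the symmetric cell, while the aged, high-stress case prefers a PMOS-upsized variant: this per-signal-probability choice is what the extended library makes possible at design time.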
Probabilistic model for nanocell reliability evaluation in presence of transient errors
- Author(s): Renu Kumawat ; Vineet Sahula ; Manoj Singh Gaur
- Source: IET Computers & Digital Techniques, Volume 9, Issue 4, pp. 213–220
- DOI: 10.1049/iet-cdt.2014.0124
- Type: Article
In this study, the authors propose a novel extended continuous-time birth–death model for reliability analysis of a nanocell device. A nanocell consists of conducting nanoparticles connected via a randomly placed self-assembled monolayer of molecules; these molecules behave as negative differential resistors. Mathematical expressions for the expected nanocell lifetime and its availability in the presence of transient errors are derived. On the basis of the model, an algorithm is developed and implemented in MATLAB, PERL and HSPICE to automatically generate the proposed model representation for a given nanocell. It is used to estimate the success_ratio as well as the nanocell reliability while accounting for the uncertainties induced by transient errors. The theoretical reliability results are validated by simulating an HSPICE model of the nanocell under varying defect rates. It is observed that device reliability increases with the number of nanoparticles and molecules. Lower and upper bounds on nanocell reliability are derived theoretically and validated in simulation.
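The availability computation for a plain birth–death chain (the standard stationary-distribution construction, not the authors' extended model) can be sketched as follows; the rates and the 15-of-20 threshold are invented for illustration.

```python
def stationary(N, birth, death):
    """Stationary distribution of a birth-death chain on states 0..N.
    birth(i): rate i -> i+1, death(i): rate i -> i-1 (detailed balance)."""
    w = [1.0]
    for i in range(N):
        w.append(w[-1] * birth(i) / death(i + 1))
    total = sum(w)
    return [x / total for x in w]

def availability(N, k, birth, death):
    """Probability that at least k of N elements are functional."""
    pi = stationary(N, birth, death)
    return sum(pi[k:])

# Toy nanocell: each of N molecular links fails (transient error) at
# rate mu and recovers at rate lam, independently -> linear rates.
N, lam, mu = 20, 2.0, 0.5
birth = lambda i: (N - i) * lam      # a failed link recovers
death = lambda i: i * mu             # a working link fails

a = availability(N, k=15, birth=birth, death=death)
```

With independent per-link rates the stationary distribution is binomial, and raising the recovery rate or the link count raises availability, matching the observation that reliability grows with the number of nanoparticles and molecules.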
Yield-driven design-time task scheduling techniques for multi-processor system on chips under process variation: a comparative study
- Author(s): Mahmoud Momtazpour ; Omid Assare ; Negar Rahmati ; Amirali Boroumand ; Saeid Barati ; Maziar Goudarzi
- Source: IET Computers & Digital Techniques, Volume 9, Issue 4, pp. 221–229
- DOI: 10.1049/iet-cdt.2014.0126
- Type: Article
Process variation has already emerged as a major concern in the design of multi-processor systems-on-chip (MPSoCs). In recent years, there have been several attempts to bring variability awareness into the task scheduling process of embedded MPSoCs to improve performance yield. This study provides a comparative study of current variation-aware design-time task and communication scheduling techniques that target embedded MPSoCs. To this end, the authors first use a sign-off variability modelling framework to accurately estimate the frequency distribution of MPSoC components. The task scheduling methods are then compared in terms of both the quality of the final solution and the computational complexity of the scheduling algorithm. Experimental results on a wide range of benchmarks show that the ILP-based task scheduling technique, while guaranteeing the optimality of the solution, can be costly for large application task graphs. On the other hand, the one-pass heuristic method is on average 795 times faster than the ILP-based method, but fails to find reasonable solutions for large task graphs. Finally, metaheuristic approaches can produce near-optimal schedules, within 1–2% of the optimal solutions on average, while running up to 7.8 times faster than the ILP-based approach.
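A minimal one-pass list scheduler of the kind surveyed can be sketched as follows (task names, workloads and core frequencies are invented): each task, taken in topological order, is assigned to the core that finishes it earliest given that core's variation-afflicted frequency.

```python
def list_schedule(tasks, deps, work, freqs):
    """One-pass variation-aware list scheduler (illustrative heuristic,
    not one of the paper's surveyed implementations).  Each task goes
    to the core that finishes it earliest given that core's
    process-variation-afflicted frequency.  'tasks' must be listed in
    topological order."""
    core_free = [0.0] * len(freqs)
    finish, placement = {}, {}
    for t in tasks:
        ready = max((finish[p] for p in deps.get(t, ())), default=0.0)
        # earliest finish time of this task on each core
        options = [(max(core_free[c], ready) + work[t] / freqs[c], c)
                   for c in range(len(freqs))]
        end, core = min(options)
        finish[t], placement[t] = end, core
        core_free[core] = end
    return finish, placement

# Four tasks in a diamond dependency; core 1 came out of fab 20% slower.
tasks = ["a", "b", "c", "d"]
deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
work = {"a": 4.0, "b": 2.0, "c": 2.0, "d": 4.0}
freqs = [1.0, 0.8]

finish, placement = list_schedule(tasks, deps, work, freqs)
makespan = finish["d"]
```

The single greedy pass explains the speed gap in the abstract: one placement decision per task, against an ILP that explores the joint assignment space.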
Reliable computation with unreliable computers
- Author(s): Andrew D. Brown ; Rob Mills ; Kier James Dugan ; Jeff S. Reeve ; Steve B. Furber
- Source: IET Computers & Digital Techniques, Volume 9, Issue 4, pp. 230–237
- DOI: 10.1049/iet-cdt.2014.0110
- Type: Article
As computing systems continue their unquenchable rise towards and through million-core architectures, two considerations that used to be unimportant become more and more dominant: power consumption (be it FLOPS/W or W/mm2) and reliability. This study is concerned with the latter: in a system of a million cores, it is unrealistic to expect 100% functionality on power-up; equally, operational availability degrades with time. Monitoring and maintaining the health of such a system using traditional techniques is costly, and most techniques rely on some sort of central overseer or monitor to make a final judgement about system availability, giving a single point of failure. Large systems of the future will consist of hardware and software that work synergistically to cope with isolated points of failure, allowing the gross behaviour of the system to degrade gracefully and in a meaningful way in the face of faults. This study describes one such system: SpiNNaker (spiking neural network architecture), a million-core machine with layered fault tolerance built in at many levels. The authors show how the system may be used to solve the canonical distributed heat diffusion equation, and how the quality of the solution is modulated by the effects of partial system failure.
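The graceful-degradation effect on the heat diffusion benchmark can be illustrated with a toy 1-D Jacobi solver (not the SpiNNaker implementation): grid points stand in for cores, and a failed core simply stops updating its value while its neighbours keep computing.

```python
def jacobi_heat(n, iters, failed=()):
    """Steady-state heat in a 1-D rod with end temperatures 0 and 1,
    one grid point per 'core'.  Failed cores stop updating and keep
    their stale value -- a crude stand-in for partial system failure."""
    u = [0.0] * n
    u[-1] = 1.0
    for _ in range(iters):
        nxt = u[:]
        for i in range(1, n - 1):
            if i not in failed:
                nxt[i] = 0.5 * (u[i - 1] + u[i + 1])
        u = nxt
    return u

def max_error(u):
    """Distance from the exact linear temperature profile."""
    n = len(u)
    return max(abs(u[i] - i / (n - 1)) for i in range(n))

healthy = max_error(jacobi_heat(11, 2000))
degraded = max_error(jacobi_heat(11, 2000, failed={5}))
```

The healthy run converges to the exact linear profile; with one frozen core the computation still completes and stays bounded, but the answer is visibly worse, which is the quality-versus-failure modulation the study measures at scale.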
Erratum
- Source: IET Computers & Digital Techniques, Volume 9, Issue 4, p. 238
- DOI: 10.1049/iet-cdt.2014.0216
- Type: Article
In this issue:
Editorial
Mastering CMOS variability is the key to success
Defect avoidance in programmable devices
Fighting stochastic variability in a D-type flip-flop with transistor-level reconfiguration
Algebra of switching networks
Extending standard cell library for aging mitigation
Probabilistic model for nanocell reliability evaluation in presence of transient errors
Yield-driven design-time task scheduling techniques for multi-processor system on chips under process variation: a comparative study
Reliable computation with unreliable computers
Erratum