Many-core systems for big-data computing

From: Many-Core Computing: Hardware and Software

In many ways, big data should be the poster child of many-core computing. By necessity, such applications typically scale extremely well across machines, featuring high levels of thread-level parallelism. Programming techniques such as Google's MapReduce have allowed many applications running in the data centre to be written with parallelism directly in mind, enabling extremely high throughput across machines. We explore the state of the art in techniques used to make many-core architectures work for big-data workloads. We discuss how tail-latency concerns mean that, even though these workloads are parallel, high performance is still necessary in at least some parts of the system. We examine how memory-system issues can cause some big-data applications to scale less favourably on many-core architectures than we would like. We consider the programming models used for big-data workloads and how they both help and hinder the typically complex mapping problem seen elsewhere for many-core architectures. Finally, we look at alternatives to traditional many-core systems for exploiting parallelism efficiently in the big-data space.
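To make the MapReduce style concrete, the sketch below is a minimal, single-machine word count in plain Python. It is purely illustrative and goes beyond anything stated in the chapter: it stands in Python's multiprocessing.Pool for a real distributed runtime and omits the partitioning, fault tolerance and disk-backed shuffle that systems such as Hadoop provide. The point is only the shape of the computation: independent map tasks, a grouping step, and independent reduce tasks, which is what lets the pattern scale across many cores and machines.

    # Minimal MapReduce-style word count (illustrative sketch only).
    from collections import defaultdict
    from itertools import chain
    from multiprocessing import Pool

    def map_words(line):
        # Map: emit (word, 1) pairs independently for each input line.
        return [(word.lower(), 1) for word in line.split()]

    def shuffle(pairs):
        # Shuffle: group all emitted values by key.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_counts(item):
        # Reduce: combine the values for one key, independently of other keys.
        word, counts = item
        return word, sum(counts)

    if __name__ == "__main__":
        lines = [
            "many core systems for big data computing",
            "big data workloads on many core architectures",
        ]
        with Pool() as pool:
            mapped = pool.map(map_words, lines)                  # map tasks in parallel
            grouped = shuffle(chain.from_iterable(mapped))       # group by key
            reduced = pool.map(reduce_counts, grouped.items())   # reduce tasks in parallel
        print(dict(reduced))

Because each map task touches only its own input line and each reduce task touches only its own key, the runtime is free to place these tasks on as many cores or machines as are available; that independence is what the abstract means by parallelism being built into the programming model.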

Chapter Contents:

  • 21.1 Workload characteristics
  • 21.2 Many-core architectures for big data
  • 21.2.1 The need for many-core
  • 21.2.2 Brawny vs wimpy cores
  • 21.2.3 Scale-out processors
  • 21.2.4 Barriers to implementation
  • 21.3 The memory system
  • 21.3.1 Caching and prefetching
  • 21.3.2 Near-data processing
  • 21.3.3 Non-volatile memories
  • 21.3.4 Memory coherence
  • 21.3.5 On-chip networks
  • 21.4 Programming models
  • 21.5 Case studies
  • 21.5.1 Xeon Phi
  • 21.5.2 Tilera
  • 21.5.3 Piranha
  • 21.5.4 Niagara
  • 21.5.5 Adapteva
  • 21.5.6 TOP500 and GREEN500
  • 21.6 Other approaches to high-performance big data
  • 21.6.1 Field-programmable gate arrays
  • 21.6.2 Vector processing
  • 21.6.3 Accelerators
  • 21.6.4 Graphics processing units
  • 21.7 Conclusion and future directions
  • 21.7.1 Programming models
  • 21.7.2 Reducing manual effort
  • 21.7.3 Suitable architectures and microarchitectures
  • 21.7.4 Memory-system advancements
  • 21.7.5 Replacing commodity hardware
  • 21.7.6 Latency
  • 21.7.7 Workload heterogeneity
  • References

Inspec keywords: multiprocessing systems; Big Data; distributed programming

Other keywords: tail-latency; many-core computing; memory-system issues; many-core architectures; thread-level parallelism; big-data workloads; Google MapReduce; big-data computing; programming techniques

Subjects: Data handling techniques; Multiprocessing systems; Distributed systems software; Parallel programming
