|Item type|Location|Call number|Copy|Status|Date due|
|BOOK|Mesa Lab|QA76.88 .H34 2011|1|Checked out|02/28/2016|
Includes bibliographical references and index.
"Georg Hager and Gerhard Wellein have developed a very approachable introduction to high performance computing for scientists and engineers. Their style and descriptions are easy to read and follow... This book presents a balanced treatment of the theory, technology, architecture, and software for modern high performance computers and the use of high performance computing systems. The focus on scientific and engineering problems makes this both educational and unique. I highly recommend this timely book for scientists and engineers. I believe this book will benefit many readers and provide a fine reference." -- From the Foreword by Jack Dongarra, University of Tennessee, Knoxville, USA

Written by high performance computing (HPC) experts, Introduction to High Performance Computing for Scientists and Engineers provides a solid introduction to current mainstream computer architecture, the dominant parallel programming models, and useful optimization strategies for scientific HPC. The book facilitates an intuitive understanding of performance limitations without relying on heavy computer science knowledge. It also prepares readers for studying more advanced literature.

Features:
- Covers basic sequential optimization strategies and the dominating parallelization paradigms
- Highlights the importance of performance modeling of applications on all levels of a system's architecture
- Contains numerous case studies drawn from the authors' invaluable experience in HPC user support, performance optimization, and benchmarking
- Explores important contemporary concepts, such as multicore architecture and affinity issues
- Includes code examples in Fortran and, where relevant, C and C++
- Offers downloadable code and an annotated bibliography on the book's Web site

--BOOK JACKET.
Modern Processors ; Stored-program computer architecture ; General-purpose cache-based microprocessor architecture ; Memory hierarchies ; Multicore processors ; Multithreaded processors ; Vector processors --
Basic Optimization Techniques for Serial Code ; Scalar profiling ; Common sense optimizations ; Simple measures, large impact ; The role of compilers ; C++ optimizations --
Data Access Optimization ; Balance analysis and lightspeed estimates ; Storage order ; Case study: The Jacobi algorithm ; Case study: Dense matrix transpose ; Algorithm classification and access optimizations ; Case study: Sparse matrix-vector multiply --
Parallel Computers ; Taxonomy of parallel computing paradigms ; Shared-memory computers ; Distributed-memory computers ; Hierarchical (hybrid) systems ; Networks --
Basics of Parallelization ; Why parallelize? ; Parallelism ; Parallel scalability --
Shared-Memory Parallel Programming with OpenMP ; Short introduction to OpenMP ; Case study: OpenMP-parallel Jacobi algorithm ; Advanced OpenMP: Wavefront parallelization --
Efficient OpenMP Programming ; Profiling OpenMP programs ; Performance pitfalls ; Case study: Parallel sparse matrix-vector multiply --
Locality Optimizations on ccNUMA Architectures ; Locality of access on ccNUMA ; Case study: ccNUMA optimization of sparse MVM ; Placement pitfalls ; ccNUMA issues with C++ --
Distributed-Memory Parallel Programming with MPI ; Message passing ; A short introduction to MPI ; Example: MPI parallelization of a Jacobi solver --
Efficient MPI Programming ; MPI performance tools ; Communication parameters ; Synchronization, serialization, contention ; Reducing communication overhead ; Understanding intranode point-to-point communication --
Hybrid Parallelization with MPI and OpenMP ; Basic MPI/OpenMP programming models ; MPI taxonomy of thread interoperability ; Hybrid decomposition and mapping ; Potential benefits and drawbacks of hybrid programming