
High performance Spark : best practices for scaling and optimizing Apache Spark / Holden Karau & Rachel Warren.

By: Karau, Holden
Contributor(s): Warren, Rachel
Publisher: Sebastopol, CA : O'Reilly Media, Inc., 2017
Copyright date: 2017
Edition: First edition : June 2017
Description: xiv, 341 pages : black and white illustrations, graphs, charts ; 24 cm
Content type:
  • text
Media type:
  • unmediated
Carrier type:
  • volume
ISBN:
  • 9781491943205
  • 1491943203
Subject(s):
LOC classification:
  • QA76.9.D343 K37 2017
Contents:
Table of Contents : Preface -- 1. Introduction to high performance Spark -- 2. How Spark works -- 3. DataFrames, Datasets, and Spark SQL -- 4. Joins (SQL and Core) -- 5. Effective transformations -- 6. Working with Key/Value Data -- 7. Going beyond Scala -- 8. Testing and validation -- 9. Spark MLlib and ML -- 10. Spark components and packages -- A. Tuning, debugging, and other things developers like to pretend don't exist -- Index.
Summary: "Apache Spark is amazing when everything clicks. But if you haven't seen the performance improvements you expected, or still don't feel confident enough to use Spark in production, this practical book is for you. Authors Holden Karau and Rachel Warren demonstrate performance optimizations to help your Spark queries run faster and handle larger data sizes, while using fewer resources. Ideal for software engineers, data engineers, developers, and system administrators working with large-scale data applications, this book describes techniques that can reduce data infrastructure costs and developer hours. Not only will you gain a more comprehensive understanding of Spark, you'll also learn how to make it sing. With this book, you'll explore : How Spark SQL's new interfaces improve performance over SQL's RDD data structure ; The choice between data joins in Core Spark and Spark SQL ; Techniques for getting the most out of standard RDD transformations ; How to work around performance issues in Spark's key/value pair paradigm ; Writing high-performance Spark code without Scala or the JVM ; How to test for functionality and performance when applying suggested improvements ; Using Spark MLlib and Spark ML machine learning libraries ; Spark's Streaming components and external community packages." -- back cover.
Holdings
Item type: BOOK
Current library: NCAR Library Mesa Lab
Call number: QA76.9 .D343 .K37 2017
Copy number: 1
Status: Available
Barcode: 50583020006486
Total holds: 0

Includes index.


"Apache Spark is amazing when everything clicks. But if you haven't seen the performance improvements you expected, or still don't feel confident enough to use Spark in production, this practical book is for you. Authors Holden Karau and Rachel Warren demonstrate performance optimizations to help your Spark queries run faster and handle larger data sizes, while using fewer resources. Ideal for software engineers, data engineers, developers, and system administrators working with large-scale data applications, this book describes techniques that can reduce data infrastructure costs and developer hours. Not only will you gain a more comprehensive understanding of Spark, you'll also learn how to make it sing. With this book, you'll explore : How Spark SQL's new interfaces improve performance over SQL's RDD data structure ; The choice between data joins in Core Spark and Spark SQL ; Techniques for getting the most out of standard RDD transformations ; How to work around performance issues in Spark's key/value pair paradigm ; Writing high-performance Spark code without Scala or the JVM ; How to test for functionality and performance when applying suggested improvements ; Using Spark MLlib and Spark ML machine learning libraries ; Spark's Streaming components and external community packages." -- back cover.
