Hands-on machine learning with Scikit-Learn and TensorFlow : concepts, tools, and techniques to build intelligent systems / Aurélien Géron.

By: Géron, Aurélien
Publisher: Beijing : O'Reilly, 2017
Edition: First edition
Description: xx, 543 pages : illustrations (black and white) ; 24 cm
Content type:
  • text
  • still image
Media type:
  • unmediated
Carrier type:
  • volume
ISBN:
  • 9781491962299 (pbk)
  • 1491962291 (pbk)
Subject(s):
LOC classification:
  • Q325.5 .G47 2017
Contents:
Machine generated contents note: 1.The Machine Learning Landscape -- What Is Machine Learning? -- Why Use Machine Learning? -- Types of Machine Learning Systems -- Supervised/Unsupervised Learning -- Batch and Online Learning -- Instance-Based Versus Model-Based Learning -- Main Challenges of Machine Learning -- Insufficient Quantity of Training Data -- Nonrepresentative Training Data -- Poor-Quality Data -- Irrelevant Features -- Overfitting the Training Data -- Underfitting the Training Data -- Stepping Back -- Testing and Validating -- Exercises -- 2.End-to-End Machine Learning Project -- Working with Real Data -- Look at the Big Picture -- Frame the Problem -- Select a Performance Measure -- Check the Assumptions -- Get the Data -- Create the Workspace -- Download the Data -- Take a Quick Look at the Data Structure -- Create a Test Set -- Discover and Visualize the Data to Gain Insights -- Visualizing Geographical Data -- Looking for Correlations --
Note continued: Experimenting with Attribute Combinations -- Prepare the Data for Machine Learning Algorithms -- Data Cleaning -- Handling Text and Categorical Attributes -- Custom Transformers -- Feature Scaling -- Transformation Pipelines -- Select and Train a Model -- Training and Evaluating on the Training Set -- Better Evaluation Using Cross-Validation -- Fine-Tune Your Model -- Grid Search -- Randomized Search -- Ensemble Methods -- Analyze the Best Models and Their Errors -- Evaluate Your System on the Test Set -- Launch, Monitor, and Maintain Your System -- Try It Out! -- Exercises -- 3.Classification -- MNIST -- Training a Binary Classifier -- Performance Measures -- Measuring Accuracy Using Cross-Validation -- Confusion Matrix -- Precision and Recall -- Precision/Recall Tradeoff -- The ROC Curve -- Multiclass Classification -- Error Analysis -- Multilabel Classification -- Multioutput Classification -- Exercises -- 4.Training Models -- Linear Regression --
Note continued: The Normal Equation -- Computational Complexity -- Gradient Descent -- Batch Gradient Descent -- Stochastic Gradient Descent -- Mini-batch Gradient Descent -- Polynomial Regression -- Learning Curves -- Regularized Linear Models -- Ridge Regression -- Lasso Regression -- Elastic Net -- Early Stopping -- Logistic Regression -- Estimating Probabilities -- Training and Cost Function -- Decision Boundaries -- Softmax Regression -- Exercises -- 5.Support Vector Machines -- Linear SVM Classification -- Soft Margin Classification -- Nonlinear SVM Classification -- Polynomial Kernel -- Adding Similarity Features -- Gaussian RBF Kernel -- Computational Complexity -- SVM Regression -- Under the Hood -- Decision Function and Predictions -- Training Objective -- Quadratic Programming -- The Dual Problem -- Kernelized SVM -- Online SVMs -- Exercises -- 6.Decision Trees -- Training and Visualizing a Decision Tree -- Making Predictions --
Note continued: Estimating Class Probabilities -- The CART Training Algorithm -- Computational Complexity -- Gini Impurity or Entropy? -- Regularization Hyperparameters -- Regression -- Instability -- Exercises -- 7.Ensemble Learning and Random Forests -- Voting Classifiers -- Bagging and Pasting -- Bagging and Pasting in Scikit-Learn -- Out-of-Bag Evaluation -- Random Patches and Random Subspaces -- Random Forests -- Extra-Trees -- Feature Importance -- Boosting -- AdaBoost -- Gradient Boosting -- Stacking -- Exercises -- 8.Dimensionality Reduction -- The Curse of Dimensionality -- Main Approaches for Dimensionality Reduction -- Projection -- Manifold Learning -- PCA -- Preserving the Variance -- Principal Components -- Projecting Down to d Dimensions -- Using Scikit-Learn -- Explained Variance Ratio -- Choosing the Right Number of Dimensions -- PCA for Compression -- Incremental PCA -- Randomized PCA -- Kernel PCA -- Selecting a Kernel and Tuning Hyperparameters --
Note continued: LLE -- Other Dimensionality Reduction Techniques -- Exercises -- 9.Up and Running with TensorFlow -- Installation -- Creating Your First Graph and Running It in a Session -- Managing Graphs -- Lifecycle of a Node Value -- Linear Regression with TensorFlow -- Implementing Gradient Descent -- Manually Computing the Gradients -- Using autodiff -- Using an Optimizer -- Feeding Data to the Training Algorithm -- Saving and Restoring Models -- Visualizing the Graph and Training Curves Using TensorBoard -- Name Scopes -- Modularity -- Sharing Variables -- Exercises -- 10.Introduction to Artificial Neural Networks -- From Biological to Artificial Neurons -- Biological Neurons -- Logical Computations with Neurons -- The Perceptron -- Multi-Layer Perceptron and Backpropagation -- Training an MLP with TensorFlow's High-Level API -- Training a DNN Using Plain TensorFlow -- Construction Phase -- Execution Phase -- Using the Neural Network --
Note continued: Fine-Tuning Neural Network Hyperparameters -- Number of Hidden Layers -- Number of Neurons per Hidden Layer -- Activation Functions -- Exercises -- 11.Training Deep Neural Nets -- Vanishing/Exploding Gradients Problems -- Xavier and He Initialization -- Nonsaturating Activation Functions -- Batch Normalization -- Gradient Clipping -- Reusing Pretrained Layers -- Reusing a TensorFlow Model -- Reusing Models from Other Frameworks -- Freezing the Lower Layers -- Caching the Frozen Layers -- Tweaking, Dropping, or Replacing the Upper Layers -- Model Zoos -- Unsupervised Pretraining -- Pretraining on an Auxiliary Task -- Faster Optimizers -- Momentum Optimization -- Nesterov Accelerated Gradient -- AdaGrad -- RMSProp -- Adam Optimization -- Learning Rate Scheduling -- Avoiding Overfitting Through Regularization -- Early Stopping -- ℓ1 and ℓ2 Regularization -- Dropout -- Max-Norm Regularization -- Data Augmentation -- Practical Guidelines -- Exercises --
Note continued: 12.Distributing TensorFlow Across Devices and Servers -- Multiple Devices on a Single Machine -- Installation -- Managing the GPU RAM -- Placing Operations on Devices -- Parallel Execution -- Control Dependencies -- Multiple Devices Across Multiple Servers -- Opening a Session -- The Master and Worker Services -- Pinning Operations Across Tasks -- Sharding Variables Across Multiple Parameter Servers -- Sharing State Across Sessions Using Resource Containers -- Asynchronous Communication Using TensorFlow Queues -- Loading Data Directly from the Graph -- Parallelizing Neural Networks on a TensorFlow Cluster -- One Neural Network per Device -- In-Graph Versus Between-Graph Replication -- Model Parallelism -- Data Parallelism -- Exercises -- 13.Convolutional Neural Networks -- The Architecture of the Visual Cortex -- Convolutional Layer -- Filters -- Stacking Multiple Feature Maps -- TensorFlow Implementation -- Memory Requirements -- Pooling Layer --
Note continued: CNN Architectures -- LeNet-5 -- AlexNet -- GoogLeNet -- ResNet -- Exercises -- 14.Recurrent Neural Networks -- Recurrent Neurons -- Memory Cells -- Input and Output Sequences -- Basic RNNs in TensorFlow -- Static Unrolling Through Time -- Dynamic Unrolling Through Time -- Handling Variable Length Input Sequences -- Handling Variable-Length Output Sequences -- Training RNNs -- Training a Sequence Classifier -- Training to Predict Time Series -- Creative RNN -- Deep RNNs -- Distributing a Deep RNN Across Multiple GPUs -- Applying Dropout -- The Difficulty of Training over Many Time Steps -- LSTM Cell -- Peephole Connections -- GRU Cell -- Natural Language Processing -- Word Embeddings -- An Encoder-Decoder Network for Machine Translation -- Exercises -- 15.Autoencoders -- Efficient Data Representations -- Performing PCA with an Undercomplete Linear Autoencoder -- Stacked Autoencoders -- TensorFlow Implementation -- Tying Weights --
Note continued: Training One Autoencoder at a Time -- Visualizing the Reconstructions -- Visualizing Features -- Unsupervised Pretraining Using Stacked Autoencoders -- Denoising Autoencoders -- TensorFlow Implementation -- Sparse Autoencoders -- TensorFlow Implementation -- Variational Autoencoders -- Generating Digits -- Other Autoencoders -- Exercises -- 16.Reinforcement Learning -- Learning to Optimize Rewards -- Policy Search -- Introduction to OpenAI Gym -- Neural Network Policies -- Evaluating Actions: The Credit Assignment Problem -- Policy Gradients -- Markov Decision Processes -- Temporal Difference Learning and Q-Learning -- Exploration Policies -- Approximate Q-Learning -- Learning to Play Ms. Pac-Man Using Deep Q-Learning -- Exercises -- Thank You!.
Summary: Through a series of recent breakthroughs, deep learning has boosted the entire field of machine learning. Now, even programmers who know close to nothing about this technology can use simple, efficient tools to implement programs capable of learning from data. This practical book shows you how.
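As a quick illustration of the kind of workflow the book teaches, here is a minimal sketch of Scikit-Learn's fit/score loop (illustrative only; this code is not taken from the book, and the dataset and model choices here are assumptions):

# Minimal Scikit-Learn workflow: load data, hold out a test set,
# train a model, evaluate on unseen data.
# (Illustrative sketch; not code from the book itself.)
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)          # small built-in digit-image dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)    # hold out 20% for honest evaluation

model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)                  # learn from the training data
print("test accuracy:", model.score(X_test, y_test))  # accuracy on unseen data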
Holdings:
  • Item type: BOOK
  • Current library: NCAR Library Mesa Lab
  • Call number: Q325.5 .G47 2017
  • Copy number: 1
  • Status: Missing
  • Date due: (none)
  • Barcode: 50583020006940
Total holds: 0

Includes QR code.

Formerly CIP (Uk).

Includes bibliographical references and index.

