Deep Learning Cookbook: Practical Recipes to Get Started Quickly

by Douwe Osinga

Paperback

$59.99 


Overview

Deep learning doesn't have to be intimidating. Until recently, this machine-learning method required years of study, but with frameworks such as Keras and TensorFlow, software engineers without a background in machine learning can quickly enter the field. With the recipes in this cookbook, you'll learn how to solve deep-learning problems for classifying and generating text, images, and music.

Each chapter consists of several recipes needed to complete a single project, such as training a music recommender system. Author Douwe Osinga also provides a chapter with half a dozen techniques to help you if you get stuck. Examples are written in Python with code available on GitHub as a set of Python notebooks.

You'll learn how to:

  • Create applications that will serve real users
  • Use word embeddings to calculate text similarity
  • Build a movie recommender system based on Wikipedia links
  • Learn how AIs see the world by visualizing their internal state
  • Build a model to suggest emojis for pieces of text
  • Reuse pretrained networks to build an inverse image search service
  • Compare how GANs, autoencoders and LSTMs generate icons
  • Detect music styles and index song collections
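The text-similarity recipes above rest on one core idea: words are represented as vectors, and similarity is the cosine of the angle between them. The sketch below illustrates this with made-up toy vectors (the embedding values here are invented for illustration; the book's recipes load real pretrained word2vec or GloVe vectors with hundreds of dimensions):

```python
import math

# Toy 3-dimensional "embeddings" -- hypothetical values chosen so that
# related words point in similar directions. Real recipes use vectors
# learned from large text corpora.
embeddings = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words score higher than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high, ~0.99
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low, ~0.31
```

The same measure powers the movie recommender and inverse image search chapters: once anything (a word, a movie, an image) is embedded as a vector, "most similar" becomes "nearest by cosine distance."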

Product Details

ISBN-13: 9781491995846
Publisher: O'Reilly Media, Incorporated
Publication date: 06/22/2018
Pages: 251
Sales rank: 1,143,364
Product dimensions: 6.90(w) x 9.10(h) x 0.70(d)

About the Author

Douwe Osinga is an experienced software engineer, formerly with Google, and the founder of three startups. He maintains a popular software project website, partly focused on machine learning
(https://douweosinga.com/projects/machine_learning).

Table of Contents

Preface vii

1 Tools and Techniques 1

1.1 Types of Neural Networks 1

1.2 Acquiring Data 11

1.3 Preprocessing Data 18

2 Getting Unstuck 25

2.1 Determining That You Are Stuck 25

2.2 Solving Runtime Errors 26

2.3 Checking Intermediate Results 28

2.4 Picking the Right Activation Function (for Your Final Layer) 29

2.5 Regularization and Dropout 31

2.6 Network Structure, Batch Size, and Learning Rate 32

3 Calculating Text Similarity Using Word Embeddings 35

3.1 Using Pretrained Word Embeddings to Find Word Similarity 36

3.2 Word2vec Math 38

3.3 Visualizing Word Embeddings 40

3.4 Finding Entity Classes in Embeddings 41

3.5 Calculating Semantic Distances Inside a Class 45

3.6 Visualizing Country Data on a Map 47

4 Building a Recommender System Based on Outgoing Wikipedia Links 49

4.1 Collecting the Data 49

4.2 Training Movie Embeddings 53

4.3 Building a Movie Recommender 56

4.4 Predicting Simple Movie Properties 57

5 Generating Text in the Style of an Example Text 61

5.1 Acquiring the Text of Public Domain Books 61

5.2 Generating Shakespeare-Like Texts 62

5.3 Writing Code Using RNNs 65

5.4 Controlling the Temperature of the Output 67

5.5 Visualizing Recurrent Network Activations 69

6 Question Matching 73

6.1 Acquiring Data from Stack Exchange 73

6.2 Exploring Data Using Pandas 75

6.3 Using Keras to Featurize Text 76

6.4 Building a Question/Answer Model 77

6.5 Training a Model with Pandas 79

6.6 Checking Similarities 80

7 Suggesting Emojis 83

7.1 Building a Simple Sentiment Classifier 83

7.2 Inspecting a Simple Classifier 86

7.3 Using a Convolutional Network for Sentiment Analysis 87

7.4 Collecting Twitter Data 89

7.5 A Simple Emoji Predictor 91

7.6 Dropout and Multiple Windows 92

7.7 Building a Word-Level Model 94

7.8 Constructing Your Own Embeddings 96

7.9 Using a Recurrent Neural Network for Classification 97

7.10 Visualizing (Dis)Agreement 99

7.11 Combining Models 101

8 Sequence-to-Sequence Mapping 103

8.1 Training a Simple Sequence-to-Sequence Model 103

8.2 Extracting Dialogue from Texts 105

8.3 Handling an Open Vocabulary 106

8.4 Training a seq2seq Chatbot 108

9 Reusing a Pretrained Image Recognition Network 113

9.1 Loading a Pretrained Network 114

9.2 Preprocessing Images 114

9.3 Running Inference on Images 116

9.4 Using the Flickr API to Collect a Set of Labeled Images 117

9.5 Building a Classifier That Can Tell Cats from Dogs 118

9.6 Improving Search Results 120

9.7 Retraining Image Recognition Networks 122

10 Building an Inverse Image Search Service 125

10.1 Acquiring Images from Wikipedia 125

10.2 Projecting Images into an N-Dimensional Space 128

10.3 Finding Nearest Neighbors in High-Dimensional Spaces 129

10.4 Exploring Local Neighborhoods in Embeddings 130

11 Detecting Multiple Images 133

11.1 Detecting Multiple Images Using a Pretrained Classifier 133

11.2 Using Faster RCNN for Object Detection 137

11.3 Running Faster RCNN over Our Own Images 139

12 Image Style 143

12.1 Visualizing CNN Activations 144

12.2 Octaves and Scaling 147

12.3 Visualizing What a Neural Network Almost Sees 149

12.4 Capturing the Style of an Image 152

12.5 Improving the Loss Function to Increase Image Coherence 155

12.6 Transferring the Style to a Different Image 156

12.7 Style Interpolation 158

13 Generating Images with Autoencoders 161

13.1 Importing Drawings from Google Quick Draw 162

13.2 Creating an Autoencoder for Images 163

13.3 Visualizing Autoencoder Results 166

13.4 Sampling Images from a Correct Distribution 167

13.5 Visualizing a Variational Autoencoder Space 170

13.6 Conditional Variational Autoencoders 172

14 Generating Icons Using Deep Nets 175

14.1 Acquiring Icons for Training 176

14.2 Converting the Icons to a Tensor Representation 178

14.3 Using a Variational Autoencoder to Generate Icons 179

14.4 Using Data Augmentation to Improve the Autoencoder's Performance 181

14.5 Building a Generative Adversarial Network 183

14.6 Training Generative Adversarial Networks 185

14.7 Showing the Icons the GAN Produces 186

14.8 Encoding Icons as Drawing Instructions 188

14.9 Training an RNN to Draw Icons 189

14.10 Generating Icons Using an RNN 191

15 Music and Deep Learning 193

15.1 Creating a Training Set for Music Classification 194

15.2 Training a Music Genre Detector 196

15.3 Visualizing Confusion 198

15.4 Indexing Existing Music 199

15.5 Setting Up Spotify API Access 202

15.6 Collecting Playlists and Songs from Spotify 203

15.7 Training a Music Recommender 206

15.8 Recommending Songs Using a Word2vec Model 206

16 Productionizing Machine Learning Systems 209

16.1 Using Scikit-Learn's Nearest Neighbors for Embeddings 210

16.2 Using Postgres to Store Embeddings 211

16.3 Populating and Querying Embeddings Stored in Postgres 212

16.4 Storing High-Dimensional Models in Postgres 213

16.5 Writing Microservices in Python 215

16.6 Deploying a Keras Model Using a Microservice 216

16.7 Calling a Microservice from a Web Framework 217

16.8 TensorFlow seq2seq models 218

16.9 Running Deep Learning Models in the Browser 219

16.10 Running a Keras Model Using TensorFlow Serving 222

16.11 Using a Keras Model from iOS 224

Index 227
