

Blog: Deep Learning Illustrated: My First Book


I’m delighted to announce my first book, Deep Learning Illustrated.

Deep learning is transforming software, facilitating powerful new artificial intelligence capabilities and driving unprecedented algorithm performance. Deep Learning Illustrated is uniquely visual, intuitive and accessible, and yet offers a comprehensive introduction to the discipline’s techniques and applications.

Packed with full-colour illustrations and easy-to-follow code, the book sweeps away much of the complexity of building deep learning models, making the subject approachable and fun to learn.

Aglaé Bassens, the Belgian artist behind Deep Learning Illustrated

Together with crucial material from my colleague Grant Beyleveld and beautiful illustrations by Aglaé Bassens, the book uses straightforward analogies to explain what deep learning is, why it has become so popular, and how it relates to other machine learning approaches. It offers a practical reference and tutorial for anyone who would like to begin applying deep learning, including:

  • Developers
  • Data scientists
  • Researchers
  • Analysts
  • Students

We cover essential theory with as little mathematics as possible, preferring to illuminate concepts with hands-on Python code and practical run-throughs in accompanying Jupyter notebooks (available open source on GitHub).

To help you progress quickly, we focus on the versatile, high-level deep learning library Keras to nimbly construct efficient TensorFlow models. PyTorch, the leading alternative library, is also covered.
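To give a flavour of what a high-level library like Keras abstracts away, here is a from-scratch Python sketch of the forward pass of a single fully connected (dense) layer with a sigmoid activation. This is an illustrative toy, not code from the book; the layer size, weights, and function names are invented for the example.

```python
import math

def dense_forward(inputs, weights, biases):
    """Forward pass of one dense layer: each output neuron
    computes sigmoid(w . x + b) over the same input vector."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b  # weighted sum plus bias
        outputs.append(1.0 / (1.0 + math.exp(-z)))         # sigmoid activation
    return outputs

# A toy layer mapping 2 inputs to 2 neurons (hypothetical values)
x = [0.5, -1.0]
W = [[0.1, 0.2], [-0.3, 0.4]]
b = [0.0, 0.1]
print(dense_forward(x, W, b))
```

In Keras, the equivalent layer would be declared in one line (roughly `Dense(2, activation='sigmoid')`), with the weighted sums, activations, and the corresponding backward pass all handled by the library.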

Dr. Grant Beyleveld, a South African data scientist. At the machine learning company untapt, he focuses on processing natural language with deep learning.

By working through the book, readers will develop a pragmatic understanding of all major deep learning approaches and their uses in applications ranging from machine vision and natural language processing to image generation and game-playing algorithms. More specifically, readers will:

  • Discover what makes deep learning systems unique, and the implications for practitioners
  • Explore new tools that make deep learning models easier to build, use, and improve
  • Master essential theory: artificial neurons, deep feedforward networks, training, optimisation, convolutional nets, recurrent nets, generative adversarial networks (GANs), deep reinforcement learning, and more
  • Walk through building interactive deep learning applications, and move forward with your own artificial intelligence projects

Deep Learning Illustrated is now available to order worldwide (e.g., via Amazon and Barnes & Noble), with copies shipping this summer. In the meantime, a digital “rough cut” of the entire book became available this week on Safari Books (which offers free ten-day trials).

The book’s contents are as follows.

Part I: Introducing Deep Learning

Chapter 1: Biological and Machine Vision

  • Biological Vision
  • Machine Vision (The Neocognitron; LeNet-5; The Traditional Machine Learning Approach; ImageNet and the ILSVRC; AlexNet)
  • TensorFlow Playground
  • The Quick, Draw! Game

Chapter 2: Human and Machine Language

  • Deep Learning for Natural Language Processing (Deep Learning Networks Learn Representations Automatically; A Brief History of Deep Learning for NLP)
  • Computational Representations of Language (One-Hot Representations of Words; Word Vectors; Word Vector Arithmetic; word2viz; Localist Versus Distributed Representations)
  • Elements of Natural Human Language
  • Google Duplex

Chapter 3: Machine Art

  • A Boozy All-Nighter
  • Arithmetic on Fake Human Faces
  • Style Transfer: Converting Photos into Monet (and Vice Versa)
  • Make Your Own Sketches Photorealistic
  • Creating Photorealistic Images from Text
  • Image Processing Using Deep Learning

Chapter 4: Game-Playing Machines

  • Deep Learning, AI, and Other Beasts (Artificial Intelligence, Machine Learning, Representation Learning, Artificial Neural Networks)
  • Three Categories of Machine Learning Problems (Supervised Learning, Unsupervised Learning, Reinforcement Learning)
  • Deep Reinforcement Learning
  • Video Games
  • Board Games (AlphaGo, AlphaGo Zero, AlphaZero)
  • Manipulation of Objects
  • Popular Reinforcement Learning Environments (OpenAI Gym, DeepMind Lab, Unity ML-Agents)
  • Three Categories of AI (Artificial Narrow Intelligence, Artificial General Intelligence, Artificial Super Intelligence)

Part II: Essential Theory Illustrated

Chapter 5: The (Code) Cart Ahead of the (Theory) Horse

  • Prerequisites
  • Installation
  • A Shallow Neural Network in Keras (The MNIST Handwritten Digits, A Schematic Diagram of the Network, Loading the Data, Reformatting the Data, Designing a Neural Network Architecture, Training a Deep Learning Model)

Chapter 6: Artificial Neurons Detecting Hot Dogs

  • Biological Neuroanatomy 101
  • The Perceptron (The Hot Dog / Not Hot Dog Detector; The Most Important Equation in the Book)
  • Modern Neurons and Activation Functions (Sigmoid Neurons; Tanh Neurons; ReLU: Rectified Linear Units)
  • Choosing a Neuron

Chapter 7: Artificial Neural Networks

  • The Input Layer
  • Dense Layers
  • A Hot Dog-Detecting Dense Network (Forward Propagation through the First Hidden Layer; Forward Propagation through Subsequent Layers)
  • The Softmax Layer of a Fast Food-Classifying Network
  • Revisiting our Shallow Neural Network

Chapter 8: Training Deep Networks

  • Cost Functions (Quadratic Cost; Saturated Neurons; Cross-Entropy Cost)
  • Optimization: Learning to Minimize Cost (Gradient Descent; Learning Rate; Batch Size and Stochastic Gradient Descent; Escaping the Local Minimum)
  • Backpropagation
  • Tuning Hidden-Layer Count and Neuron Count
  • An Intermediate Net in Keras

Chapter 9: Improving Deep Networks

  • Weight Initialization (Xavier Glorot Distributions)
  • Unstable Gradients (Vanishing Gradients; Exploding Gradients; Batch Normalization)
  • Model Generalization — Avoiding Overfitting (L1 and L2 Regularization; Dropout; Data Augmentation)
  • Fancy Optimizers (Momentum; Nesterov Momentum; AdaGrad; AdaDelta and RMSProp; Adam)
  • A Deep Neural Network in Keras
  • TensorBoard

Part III: Interactive Applications of Deep Learning

Chapter 10: Machine Vision

  • Convolutional Neural Networks (The Two-Dimensional Structure of Visual Imagery; Computational Complexity; Convolutional Layers; Multiple Filters; A Convolutional Example; Convolutional Filter Hyperparameters; Stride Length; Padding)
  • Pooling Layers
  • LeNet-5 in Keras
  • AlexNet and VGGNet in Keras
  • Residual Networks (Vanishing Gradients: The Bête Noire of Deep CNNs; Residual Connection)
  • Applications of Machine Vision (Object Detection; Image Segmentation; Transfer Learning; Capsule Networks)

Chapter 11: Natural Language Processing

  • Preprocessing Natural Language Data (Tokenization; Converting all Characters to Lower Case; Removing Stop Words and Punctuation; Stemming; Handling n-grams; Preprocessing the Full Corpus)
  • Creating Word Embeddings with word2vec (The Essential Theory Behind word2vec; Evaluating Word Vectors; Running word2vec; Plotting Word Vectors)
  • The Area Under the ROC Curve (The Confusion Matrix; Calculating the ROC AUC Metric)
  • Natural Language Classification with Familiar Networks (Loading the IMDB Film Reviews; Examining the IMDB Data; Standardizing the Length of the Reviews; Dense Network; Convolutional Networks)
  • Networks Designed for Sequential Data (Recurrent Neural Networks; Long Short-Term Memory Units; Bidirectional LSTMs; Stacked Recurrent Models; Seq2seq and Attention; Transfer Learning in NLP)
  • Non-Sequential Architectures: The Keras Functional API

Chapter 12: Generative Adversarial Networks

  • Essential GAN Theory
  • The “Quick, Draw!” Dataset
  • The Discriminator Network
  • The Generator Network
  • The Adversarial Network
  • GAN Training

Chapter 13: Deep Reinforcement Learning

  • Essential Theory of Reinforcement Learning (The Cart-Pole Game; Markov Decision Processes; The Optimal Policy)
  • Essential Theory of Deep Q-Learning Networks (Value Functions; Q-Value Functions; Estimating an Optimal Q-Value)
  • Defining a DQN Agent (Initialization Parameters; Building the Agent’s Neural Network Model; Remembering Gameplay; Training via Memory Replay; Selecting an Action to Take; Saving and Loading Model Parameters)
  • Interacting with an OpenAI Gym Environment
  • Hyperparameter Optimization with SLM Lab
  • Agents Beyond DQN (Policy Gradients and the REINFORCE Algorithm; The Actor-Critic Algorithm)

Part IV: You and AI

Chapter 14: Moving Forward with Your Own Deep Learning Projects

  • Ideas for Deep Learning Projects (Machine Vision and GANs; Natural Language Processing; Deep Reinforcement Learning; Converting an Existing Machine-Learning Project)
  • Resources for Further Projects (Socially-Beneficial Projects)
  • The Modeling Process, including Hyperparameter Tuning (Automation of Hyperparameter Search)
  • Deep Learning Libraries (Keras and TensorFlow; PyTorch; MXNet, CNTK, Caffe, and Beyond)
  • Software 2.0
  • Approaching Artificial General Intelligence

Dr. Jon Krohn

Jon Krohn is Chief Data Scientist at the machine learning company untapt. He presents an acclaimed series of tutorials published by Addison-Wesley, including Deep Learning with TensorFlow and Deep Learning for Natural Language Processing. Jon teaches his deep learning curriculum in the classroom at the NYC Data Science Academy and guest lectures at Columbia University. He holds a doctorate in neuroscience from Oxford University and has been publishing on machine learning in leading peer-reviewed journals since 2010.

Source: Artificial Intelligence on Medium
