Chapter 1: Getting Started
Basic concepts and terminology in NLP
Recognizing named entities
Natural language inference
SQL query generation, or semantic parsing
Question answering and chatbots
Chapter 2: Text Classification and POS Tagging Using NLTK
Installing NLTK and its modules
Text preprocessing and exploratory analysis
Exploratory analysis of text
Applications of POS tagging
Training a sentiment classifier for movie reviews
Training a bag-of-words classifier
Chapter 3: Deep Learning and TensorFlow
Stochastic gradient descent
Regularization techniques
Convolutional neural networks
General-Purpose Graphics Processing Unit (GPGPU)
Chapter 4: Semantic Embedding Using Shallow Models
A comparison of skip-gram and CBOW model architectures
Building a skip-gram model
Visualization of word embeddings
From word to document embeddings
Visualization of document embeddings
Chapter 5: Text Classification Using LSTM
Data for text classification
Topic modeling versus text classification
Deep learning meta architecture for text classification
Identifying spam in YouTube video comments using RNNs
Classifying news articles by topic using a CNN
Transfer learning using GloVe embeddings
Multi-label classification
Deep learning for multi-label classification
Attention networks for document classification
Chapter 6: Searching and Deduplicating Using CNNs
Chapter 7: Named Entity Recognition Using Character LSTM
The effects of different pretrained word embeddings
Neural network architecture
Chapter 8: Text Generation and Summarization Using GRUs
Generating text using RNNs
Generating Linux kernel code with a GRU
Summarization using gensim
Abstractive summarization
Encoder-decoder architecture
News summarization using GRU
TensorBoard visualization
State-of-the-art abstractive text summarization
Chapter 9: Question-Answering and Chatbots Using Memory Networks
The question-answering task
Question-answering datasets
Memory networks for question-answering
Memory network pipeline overview
Writing a memory network in TensorFlow
Extending memory networks for dialog modeling
Writing a chatbot in TensorFlow
Loading dialog datasets in the QA format
Wrapping the memory network model in a chatbot class
Building a vocabulary for word embedding lookup
Training the chatbot model
Evaluating the chatbot on the testing set
Interacting with the chatbot
Example of an interactive conversation
Literature on and related to memory networks
Chapter 10: Machine Translation Using the Attention-Based Model
Overview of machine translation
Statistical machine translation
English to French using NLTK SMT models
Neural machine translation
Encoder-decoder with attention
NMT for French to English using attention
Sequence-to-sequence model
TensorBoard visualization
Chapter 11: Speech Recognition Using DeepSpeech
Overview of speech recognition
Building an RNN model for speech recognition
Audio signal representation
LSTM model for spoken digit recognition
TensorBoard visualization
Speech to text using the DeepSpeech architecture
Overview of the DeepSpeech model
Speech recordings dataset
Preprocessing the audio data
TensorBoard visualization
State-of-the-art in speech recognition
Chapter 12: Text-to-Speech Using Tacotron
Overview of text to speech
Naturalness versus intelligibility
How is the performance of a TTS system evaluated?
Traditional techniques – concatenative and parametric models
A few reminders on spectrograms and the mel scale
The attention-based decoder
The Griffin-Lim-based postprocessing module
Details of the architecture
Implementation of Tacotron with Keras
Preparation of audio data
Implementation of the architecture
Encoder and postprocessing CBHG
Full architecture, with attention
Chapter 13: Deploying Trained Models
Exporting the trained model
Serving the exported model
Deploying on mobile devices
Other Books You May Enjoy