Applied Autoencoders for Feature Extraction
Chapter 1: Revisiting Neural Networks and Dimensionality Reduction
Neural Network Components: A Quick Review
The Problem of High-Dimensional Data
Overview of Dimensionality Reduction Methods
Contrasting Linear and Non-linear Dimensionality Reduction
Feature Extraction's Role in ML Pipelines
Setting Up Your Deep Learning Environment
Hands-on: Practicing Dimensionality Reduction with PCA
Chapter 2: Understanding Autoencoders: Core Concepts
Defining Autoencoders: The Basic Structure
The Encoder: Compressing Information
The Bottleneck Layer: Latent Space Representation
The Decoder: Reconstructing Original Data
Measuring Reconstruction Quality: Loss Functions
Undercomplete vs. Overcomplete Autoencoders
The Training Process for Autoencoders
How Autoencoders Discover Meaningful Features
Hands-on: Building a Basic Autoencoder
Chapter 3: Building Your First Autoencoder for Feature Extraction
Data Preparation for Autoencoder Training
Encoder Network Design Choices
Determining Latent Space Dimensionality
Decoder Network Design Strategies
Selecting Appropriate Loss Functions for Autoencoders
Optimizer Selection and Learning Rate Configuration
Monitoring Autoencoder Training Progress
Techniques for Extracting Features from the Bottleneck
Visualizing Latent Space (When Applicable)
Hands-on: Feature Extraction from Tabular Data
Chapter 4: Advanced Autoencoder Architectures
Sparse Autoencoders: Inducing Sparsity in Representations
Regularization Methods for Sparse Autoencoders
Denoising Autoencoders: Learning from Noisy Inputs
Implementing Denoising Autoencoders
Contractive Autoencoders: Principles and Regularization
Stacked Autoencoders: Building Deep Architectures
Layer-wise Training for Stacked Autoencoders
Hands-on: Implementing a Denoising Autoencoder
Chapter 5: Convolutional Autoencoders for Image Data
Why Fully-Connected Autoencoders Fall Short for Images
Convolutional Layers in Autoencoder Encoders
Using Pooling Layers for Spatial Down-sampling
Transposed Convolutional Layers in Decoders
Upsampling Techniques in Decoders
Constructing a Convolutional Autoencoder Model
Extracting Hierarchical Features with Convolutional Autoencoders
Hands-on: Convolutional Autoencoder for Image Features
Chapter 6: Variational Autoencoders (VAEs) for Structured Latent Spaces
Introduction to Generative Modeling with Autoencoders
Principles of Variational Autoencoders
The VAE Encoder: Outputting Distribution Parameters
The Reparameterization Trick Explained
The VAE Decoder: Generating Data from Latent Samples
The VAE Loss Function: Balancing Reconstruction and Regularization
Characteristics of the VAE Latent Space
Using VAE Latent Representations as Features
Hands-on: Building a VAE and Inspecting Its Latent Space
Chapter 7: Applying Autoencoder Features and Practical Guidance
Selecting an Appropriate Autoencoder Type
Tuning Hyperparameters for Optimal Performance
Methods for Evaluating Extracted Feature Quality
Integrating Autoencoder Features into Supervised Models
Application: Anomaly Detection with Autoencoder Features
Application: Data Compression using Autoencoders
Transfer Learning Approaches with Autoencoders
Addressing Common Implementation Challenges
Hands-on: Using Autoencoder Features in a Classification Task