The preceding chapters laid out the core ideas behind neural networks, including their structure, activation functions, loss functions, and gradient-based optimization with backpropagation. This chapter transitions from theory to practice, concentrating on the workflow for constructing and training deep neural network models.
You will work with common deep learning frameworks such as TensorFlow/Keras or PyTorch to define model architectures layer by layer. Key steps include preparing and preprocessing input data, initializing weights effectively, compiling the model by specifying a loss function and optimizer, running the training loop, monitoring progress through loss and accuracy metrics, and finally evaluating the trained model on held-out test data. A practical exercise in image classification provides hands-on experience with these procedures.
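Before turning to framework APIs, it can help to see the whole workflow spelled out once by hand. The following is a minimal NumPy sketch, using synthetic data and a tiny two-layer network invented for illustration, that walks through the same steps the chapter covers: preparing data, initializing weights, choosing a loss and optimizer, training, and evaluating. Frameworks such as Keras and PyTorch wrap each of these steps in higher-level calls.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Prepare data: synthetic binary classification task
#    (label depends on the first two of four features).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)
X = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize inputs
X_train, X_test = X[:160], X[160:]              # train/test split
y_train, y_test = y[:160], y[160:]

# 2. Initialize weights: small random values scaled by fan-in,
#    a simple variant of common initialization strategies.
W1 = rng.normal(size=(4, 8)) / np.sqrt(4)
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)) / np.sqrt(8)
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 3. "Compile": pick a loss (binary cross-entropy) and an
#    optimizer (plain full-batch gradient descent).
lr = 0.5
for epoch in range(200):
    # 4. Training loop: forward pass, loss, backward pass, update.
    h = np.tanh(X_train @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    loss = -np.mean(y_train * np.log(p) + (1 - y_train) * np.log(1 - p))

    dz2 = (p - y_train) / len(X_train)           # gradient of loss
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dh = dz2 @ W2.T * (1 - h ** 2)               # tanh derivative
    dW1, db1 = X_train.T @ dh, dh.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1               # parameter update
    W2 -= lr * dW2; b2 -= lr * db2

# 5. Evaluate on held-out test data.
h = np.tanh(X_test @ W1 + b1)
preds = sigmoid(h @ W2 + b2) > 0.5
accuracy = (preds == y_test).mean()
print(f"test accuracy: {accuracy:.2f}")
```

In a framework, steps 2 through 4 collapse into a model definition, a `compile` (or optimizer/loss setup), and a `fit` call or explicit loop, which is exactly what the sections below examine one at a time.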
5.1 Introduction to Deep Learning Frameworks (TensorFlow/Keras, PyTorch)
5.2 Setting up the Development Environment
5.3 Preparing Data for Neural Networks
5.4 Defining a Feedforward Network Model
5.5 Weight Initialization Strategies
5.6 Compiling the Model: Loss and Optimizer Selection
5.7 Training the Model: The fit Method
5.8 Monitoring Training Progress (Loss and Metrics)
5.9 Evaluating Model Performance
5.10 Hands-on Practical: Training a Classifier on MNIST
© 2025 ApX Machine Learning