Training Pipeline

Training a 2-Layer MLP on the MNIST Dataset

Configuration

Model Architecture

Input layer: 784 neurons (28×28 pixels)
Hidden layer: 128 neurons + ReLU
Output layer: 10 neurons (digits 0-9)
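
A minimal sketch of this architecture in PyTorch. The layer sizes come from the list above; the framework choice, class name, and module structure are assumptions (the page mentions autograd and SGD, which match PyTorch's API):

```python
import torch.nn as nn

# Hypothetical sketch of the 2-layer MLP described above:
# 784 -> 128 (ReLU) -> 10. Only the sizes are from the page.
class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),          # 28x28 image -> 784-dim vector
            nn.Linear(784, 128),   # input layer -> hidden layer
            nn.ReLU(),             # hidden activation
            nn.Linear(128, 10),    # hidden layer -> 10 digit logits
        )

    def forward(self, x):
        return self.net(x)  # raw logits; softmax happens inside the loss
```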

Training Progress

Live loss and accuracy readouts update as the model trains.

Training Log

A step-by-step log of training events fills in once training starts.
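
For reference, a sketch of how the logged accuracy could be computed from a batch of logits. This helper is hypothetical; the page does not show its metric code:

```python
import torch

def batch_accuracy(logits: torch.Tensor, labels: torch.Tensor) -> float:
    # Predicted digit = index of the largest logit per row.
    preds = logits.argmax(dim=1)
    # Fraction of predictions matching the true labels, in [0, 1].
    return (preds == labels).float().mean().item()
```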

Training Steps

1. Forward Pass: compute predictions through the network
2. Compute Loss: calculate the cross-entropy loss on the predictions
3. Backward Pass: compute gradients via autograd
4. Update Weights: apply an SGD optimizer step
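
A minimal sketch of these four steps as a PyTorch training loop, using the MLP sketched above. The data pipeline (torchvision's MNIST loader with standard normalization), batch size, and learning rate are assumptions, not values from the page:

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumed data pipeline: torchvision MNIST with commonly used stats.
train_data = datasets.MNIST(
    root="data", train=True, download=True,
    transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,)),  # assumed normalization
    ]),
)
loader = DataLoader(train_data, batch_size=64, shuffle=True)  # batch size assumed

model = MLP()  # the sketch defined above
criterion = torch.nn.CrossEntropyLoss()                   # step 2: cross-entropy loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # step 4: SGD (lr assumed)

for images, labels in loader:
    logits = model(images)             # step 1: forward pass
    loss = criterion(logits, labels)   # step 2: compute loss
    optimizer.zero_grad()              # clear gradients from the previous batch
    loss.backward()                    # step 3: backward pass via autograd
    optimizer.step()                   # step 4: update weights
```

Each iteration of the loop corresponds to one pass through the four steps listed above; the running loss and a batch accuracy (see the helper earlier) are what a progress panel like this one would display.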