1. Input Augmentation
Input Augmentation artificially inflates the size of the training set by applying label-preserving transformations during training. This acts as a regularizer against overfitting. Some transformations include:
- Random flipping
- Random scaling
- Random rotations
- Random intensity / contrast adjustments
- Random cropping & padding
- Random noise
- Random affine transformations
- Random perspective transformations
Only transformations that can plausibly occur in the real-world data should be used.
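As a minimal sketch (using NumPy, with made-up parameter ranges), a few of the transformations above might be combined into a single augmentation function:

```python
import numpy as np

def augment(image, rng):
    """Randomly apply flip, intensity scaling, noise, and crop-and-pad
    to a 2-D grayscale image in [0, 1]. Illustrative only; the crop
    margin and parameter ranges are arbitrary choices."""
    out = image.copy()
    # Random horizontal flip (50% chance)
    if rng.random() < 0.5:
        out = out[:, ::-1]
    # Random intensity adjustment: scale pixels by a factor in [0.8, 1.2]
    out = out * rng.uniform(0.8, 1.2)
    # Random additive Gaussian noise
    out = out + rng.normal(0.0, 0.05, size=out.shape)
    # Random crop (4 pixels smaller per side) then zero-pad back to size
    h, w = out.shape
    ch, cw = h - 4, w - 4
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    padded = np.zeros_like(out)
    padded[:ch, :cw] = out[top:top + ch, left:left + cw]
    return np.clip(padded, 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.random((28, 28))
augmented = augment(image, rng)
print(augmented.shape)  # shape is preserved
```

In practice a framework utility (e.g. a torchvision or Keras preprocessing pipeline) would apply such transformations on the fly each epoch, so the model rarely sees the exact same input twice.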
2. Anomaly Detection
Anomaly Detection models identify unusual patterns that do not conform to expected behaviour. Input augmentation can improve robustness by exposing the model to a wider variety of normal patterns during training. Approaches can be:
- Unsupervised - use an autoencoder's reconstruction error to identify anomalies.
- Supervised - e.g. RNNs learn from a labeled dataset of normal and anomalous sequences.
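The unsupervised reconstruction-error idea can be sketched with a linear autoencoder fitted via PCA (NumPy only; the synthetic data, subspace dimension, and 99th-percentile threshold are all illustrative assumptions, not a prescribed recipe):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "normal" data lies near a 2-D subspace of 5-D space;
# anomalies are drawn from a broader full-rank distribution.
basis = rng.normal(size=(2, 5))
normal = rng.normal(size=(500, 2)) @ basis + 0.05 * rng.normal(size=(500, 5))
anomalies = rng.normal(size=(10, 5)) * 3.0

# "Train": fit a linear autoencoder, i.e. the top-2 principal components.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]  # acts as both encoder and decoder weights

def reconstruction_error(x):
    """Project onto the learned subspace and measure the residual norm."""
    coded = (x - mean) @ components.T      # encode
    recon = coded @ components + mean      # decode
    return np.linalg.norm(x - recon, axis=1)

# Flag points whose error exceeds the 99th percentile of training errors.
threshold = np.percentile(reconstruction_error(normal), 99)
flags = reconstruction_error(anomalies) > threshold
print(flags.sum(), "of", len(anomalies), "anomalies flagged")
```

A nonlinear autoencoder follows the same logic: train it to reconstruct normal data only, then treat large reconstruction error at inference time as evidence of an anomaly.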